WorldWideScience

Sample records for computational method combined

  1. Realization of the Heuristic Combination Methods by Means of Computer Graphics

    Directory of Open Access Journals (Sweden)

    S. A. Novoselov

    2012-01-01

    Full Text Available The paper looks at ways of enhancing and stimulating the creative activity and initiative of pedagogic students – prospective specialists called upon to educate socially and professionally competent, originally thinking, versatile personalities. For developing their creative abilities the author recommends introducing the heuristic combination methods applied in engineering creativity facilitation, the associative-synectic technology, and computer graphics tools. The paper contains a comparative analysis of the main heuristic method operations and the operations of a computer graphics editor in creating a visual composition. Examples of implementing the heuristic combination methods are described, along with extracts from laboratory classes designed for developing creativity and its motivation. Trials of the given method in several universities confirm its prospects for enhancing students' learning and creative activities.

  2. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, a vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following was found: 1) there is only a small difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of computer memory size compared with the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of outer iterations compared with free iteration. (author)
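The table-look-up idea mentioned above can be sketched in a few lines. This is an illustrative Python sketch, not the paper's code; the table size, range, and linear-interpolation scheme are assumptions.

```python
import math

# Precompute exp(-x) on a uniform grid (table size and range are invented
# for illustration; the paper's actual table parameters are not given).
N, X_MAX = 4096, 16.0
STEP = X_MAX / N
TABLE = [math.exp(-i * STEP) for i in range(N + 1)]

def exp_neg(x):
    """Approximate exp(-x) for x >= 0 by linear interpolation in TABLE."""
    if x >= X_MAX:
        return 0.0  # exp(-16) is already negligible in a transport sweep
    idx = int(x / STEP)
    frac = x / STEP - idx
    return TABLE[idx] * (1.0 - frac) + TABLE[idx + 1] * frac

# Interpolation error stays far below typical flux-solver tolerances.
err = max(abs(exp_neg(x) - math.exp(-x)) for x in (0.1, 1.0, 2.5, 7.3))
```

The modest 20% savings reported in the abstract is plausible because the exponential evaluation is only one part of each characteristics sweep.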

  3. 4th Workshop on Combinations of Intelligent Methods and Applications

    CERN Document Server

    Palade, Vasile; Prentzas, Jim

    2016-01-01

    This volume includes extended and revised versions of the papers presented at the 4th Workshop on “Combinations of Intelligent Methods and Applications” (CIMA 2014), which was intended to become a forum for exchanging experience and ideas among researchers and practitioners dealing with combinations of different intelligent methods in Artificial Intelligence. The aim is to create integrated or hybrid methods that benefit from each of their components. Some of the presented efforts combine soft computing methods (fuzzy logic, neural networks and genetic algorithms). Another stream of efforts integrates case-based reasoning or machine learning with soft-computing methods. Some of the combinations have been more widely explored, like neuro-symbolic methods, neuro-fuzzy methods and methods combining rule-based and case-based reasoning. CIMA 2014 was held in conjunction with the 26th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2014).

  4. A combined vector potential-scalar potential method for FE computation of 3D magnetic fields in electrical devices with iron cores

    Science.gov (United States)

    Wang, R.; Demerdash, N. A.

    1991-01-01

    A method of combined use of magnetic vector potential based finite-element (FE) formulations and magnetic scalar potential (MSP) based formulations for computation of three-dimensional magnetostatic fields is introduced. In this method, the curl-component of the magnetic field intensity is computed by a reduced magnetic vector potential. This field intensity forms the basis of a forcing function for a global magnetic scalar potential solution over the entire volume of the region. This method allows one to include iron portions sandwiched in between conductors within partitioned current-carrying subregions. The method is most suited for large-scale global-type 3-D magnetostatic field computations in electrical devices, and in particular rotating electric machinery.

  5. STADIC: a computer code for combining probability distributions

    International Nuclear Information System (INIS)

    Cairns, J.J.; Fleming, K.N.

    1977-03-01

    The STADIC computer code uses a Monte Carlo simulation technique for combining probability distributions. The specific function for combination of the input distributions is defined by the user by introducing the appropriate FORTRAN statements to the appropriate subroutine. The code generates a Monte Carlo sampling from each of the input distributions and combines these according to the user-supplied function to provide, in essence, a random sampling of the combined distribution. When the desired number of samples is obtained, the output routine calculates the mean, standard deviation, and confidence limits for the resultant distribution. This method of combining probability distributions is particularly useful in cases where analytical approaches are either too difficult or undefined.
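A minimal sketch of this sampling scheme, assuming a user-supplied combining function (STADIC itself is FORTRAN; all names below are illustrative):

```python
import random
import statistics

def combine(sample_a, sample_b, g, n=20000, seed=1):
    """Monte Carlo combination of two input distributions via function g."""
    rng = random.Random(seed)
    samples = sorted(g(sample_a(rng), sample_b(rng)) for _ in range(n))
    mean = statistics.fmean(samples)
    std = statistics.stdev(samples)
    # Empirical 5th and 95th percentiles serve as confidence limits.
    lo, hi = samples[int(0.05 * n)], samples[int(0.95 * n)]
    return mean, std, (lo, hi)

mean, std, (lo, hi) = combine(
    lambda r: r.lognormvariate(0.0, 0.5),  # input distribution A
    lambda r: r.uniform(0.8, 1.2),         # input distribution B
    g=lambda a, b: a * b,                  # user-defined combination
)
```

Because the combining function is arbitrary, the same driver handles products, sums, or any fault-tree expression of the inputs.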

  6. Computational methods for data evaluation and assimilation

    CERN Document Server

    Cacuci, Dan Gabriel

    2013-01-01

    Data evaluation and data combination require the use of a wide range of probability theory concepts and tools, from deductive statistics mainly concerning frequencies and sample tallies to inductive inference for assimilating non-frequency data and a priori knowledge. Computational Methods for Data Evaluation and Assimilation presents interdisciplinary methods for integrating experimental and computational information. This self-contained book shows how the methods can be applied in many scientific and engineering areas. After presenting the fundamentals underlying the evaluation of experiment

  7. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  8. [Combined fat products: methodological possibilities for their identification].

    Science.gov (United States)

    Viktorova, E V; Kulakova, S N; Mikhaĭlov, N A

    2006-01-01

    A very topical problem at present is the falsification of milk fat. A number of methods for verifying the authenticity of milk fat and distinguishing it from combined fat products were considered. The analysis of modern approaches to assessing milk fat authenticity showed that the main method for determining the nature of a fat is gas chromatography. A computer method for express identification of fat products is proposed for quickly determining whether an examined fat is natural milk fat or a combined fat product.

  9. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
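The routing step described above (select a link toward the destination, then forward along it) can be sketched for a tree-shaped combining network. The Node class and subtree test below are illustrative assumptions, not the patented mechanism.

```python
# Hypothetical sketch: on a tree network, forward toward the root unless
# the destination lies in a child's subtree (node numbering is invented).
class Node:
    def __init__(self, ident, parent=None):
        self.ident, self.parent, self.children = ident, parent, []
        if parent:
            parent.children.append(self)

    def subtree(self):
        ids = {self.ident}
        for c in self.children:
            ids |= c.subtree()
        return ids

    def select_link(self, dest):
        """Return the adjacent node along which to forward a packet."""
        for child in self.children:
            if dest in child.subtree():
                return child
        return self.parent  # destination not below us: send upward

def route(src, dest_id):
    hops, node = [], src
    while node.ident != dest_id:
        node = node.select_link(dest_id)
        hops.append(node.ident)
    return hops

root = Node(0)
a, b = Node(1, root), Node(2, root)
c, d = Node(3, a), Node(4, a)
```

For example, a packet from node 3 to node 4 climbs one level and descends, while a packet from node 3 to node 2 must pass through the root.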

  10. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course. After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  11. Combined methods for elliptic equations with singularities, interfaces and infinities

    CERN Document Server

    Li, Zi Cai

    1998-01-01

    In this book the author sets out to answer two important questions: 1. Which numerical methods may be combined together? 2. How can different numerical methods be matched together? In doing so the author presents a number of useful combinations, for instance, the combination of various FEMs, and the combinations of FEM-FDM, REM-FEM, RGM-FDM, etc. The combined methods have many advantages over single methods: high accuracy of solutions, less CPU time, less computer storage, and easy coupling with singularities as well as complicated boundary conditions. Since coupling techniques are essential to combinations, various matching strategies among different methods are carefully discussed. The author provides matching rules so that optimal convergence, even superconvergence, and optimal stability can be achieved, and also warns of the matching pitfalls to avoid. Audience: The book is intended for both mathematicians and engineers and may be used as a text for advanced students.

  12. Computational Fluid Dynamics Analysis Method Developed for Rocket-Based Combined Cycle Engine Inlet

    Science.gov (United States)

    1997-01-01

    Renewed interest in hypersonic propulsion systems has led to research programs investigating combined cycle engines that are designed to operate efficiently across the flight regime. The Rocket-Based Combined Cycle Engine is a propulsion system under development at the NASA Lewis Research Center. This engine integrates a high specific impulse, low thrust-to-weight, airbreathing engine with a low-impulse, high thrust-to-weight rocket. From takeoff to Mach 2.5, the engine operates as an air-augmented rocket. At Mach 2.5, the engine becomes a dual-mode ramjet; and beyond Mach 8, the rocket is turned back on. One Rocket-Based Combined Cycle Engine variation known as the "Strut-Jet" concept is being investigated jointly by NASA Lewis, the U.S. Air Force, Gencorp Aerojet, General Applied Science Labs (GASL), and Lockheed Martin Corporation. Work thus far has included wind tunnel experiments and computational fluid dynamics (CFD) investigations with the NPARC code. The CFD method was initiated by modeling the geometry of the Strut-Jet with the GRIDGEN structured grid generator. Grids representing a subscale inlet model and the full-scale demonstrator geometry were constructed. These grids modeled one-half of the symmetric inlet flow path, including the precompression plate, diverter, center duct, side duct, and combustor. After the grid generation, full Navier-Stokes flow simulations were conducted with the NPARC Navier-Stokes code. The Chien low-Reynolds-number k-e turbulence model was employed to simulate the high-speed turbulent flow. Finally, the CFD solutions were postprocessed with a Fortran code. This code provided wall static pressure distributions, pitot pressure distributions, mass flow rates, and internal drag. These results were compared with experimental data from a subscale inlet test for code validation; then they were used to help evaluate the demonstrator engine net thrust.

  13. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
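Two of the ingredients named above, soft-threshold filtering and FISTA-style momentum, can be sketched on a toy one-dimensional problem min_x 0.5(x-b)^2 + lam*|x|. The paper applies them to full cone-beam CT reconstruction; everything here is a simplified assumption.

```python
# Soft-thresholding (shrinkage) operator: the proximal map of lam*|x|.
def soft_threshold(v, lam):
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

# FISTA: a gradient step on the smooth term, shrinkage, then a momentum
# extrapolation that accelerates plain iterative shrinkage.
def fista(b, lam, step=1.0, iters=50):
    x, y, t = 0.0, 0.0, 1.0
    for _ in range(iters):
        x_new = soft_threshold(y - step * (y - b), lam)  # gradient + shrink
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum step
        x, t = x_new, t_new
    return x

# For this separable toy problem the exact minimizer is soft_threshold(b, lam).
x = fista(b=3.0, lam=1.0)
```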

  14. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    Full Text Available An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, where the vector extrapolation method is its accelerator. We show how to periodically combine the extrapolation method together with the multilevel aggregation method on the finest level for speeding up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
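For orientation, a plain PageRank power iteration on a toy four-page graph is sketched below. This is only the baseline that the paper's multilevel aggregation and vector extrapolation accelerate; those accelerators are deliberately omitted here, and the graph is invented.

```python
# Baseline PageRank power iteration (no aggregation, no extrapolation).
def pagerank(links, d=0.85, iters=100):
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n            # teleportation term
        for page, outs in links.items():
            if outs:
                share = rank[page] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:                             # dangling page: spread uniformly
                for q in range(n):
                    new[q] += d * rank[page] / n
        rank = new
    return rank

# Toy link structure: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
ranks = pagerank({0: [1, 2], 1: [2], 2: [0], 3: [2]})
```

The paper's contribution is to cut the number of such iterations by solving coarsened (aggregated) problems and periodically applying vector extrapolation to the iterates.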

  15. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    Full Text Available The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main assumption of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions and interactions with the optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and considers problems of load-balancing, collision detection, process synchronization and distributed control of the animation.
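The particle step described above (point masses joined by damped springs, integrated explicitly under gravity) can be sketched as follows; masses, stiffness, damping, and time step are invented for illustration.

```python
# One explicit integration step for a 2-D mass-spring particle system.
def step(pos, vel, springs, dt=0.001, mass=1.0, k=100.0, c=0.5, g=-9.81):
    forces = [[0.0, g * mass] for _ in pos]          # gravity on every particle
    for (i, j, rest) in springs:
        dx = pos[j][0] - pos[i][0]
        dy = pos[j][1] - pos[i][1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-12
        f = k * (dist - rest)                        # Hooke spring force
        fx, fy = f * dx / dist, f * dy / dist
        forces[i][0] += fx; forces[i][1] += fy
        forces[j][0] -= fx; forces[j][1] -= fy
    new_vel = [[(vx + fx / mass * dt) * (1 - c * dt),
                (vy + fy / mass * dt) * (1 - c * dt)]  # simple velocity damping
               for (vx, vy), (fx, fy) in zip(vel, forces)]
    new_pos = [[x + vx * dt, y + vy * dt]
               for (x, y), (vx, vy) in zip(pos, new_vel)]
    return new_pos, new_vel

# Two particles connected by one spring at its rest length, falling together.
pos, vel = [[0.0, 0.0], [1.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]
for _ in range(100):
    pos, vel = step(pos, vel, springs=[(0, 1, 1.0)])
```

Collisions and the cellular-automata liquid coupling from the paper are beyond this sketch.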

  16. Developing a multimodal biometric authentication system using soft computing methods.

    Science.gov (United States)

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.

  17. Automated Extraction of Cranial Landmarks from Computed Tomography Data using a Combined Method of Knowledge and Pattern Based Approaches

    Directory of Open Access Journals (Sweden)

    Roshan N. RAJAPAKSE

    2016-03-01

    Full Text Available Accurate identification of anatomical structures from medical imaging data is a significant and critical function in the medical domain. Past studies in this context have mainly utilized two approaches: knowledge-based and learning-based methods. Further, most previously reported studies have focused on identification of landmarks from lateral X-ray Computed Tomography (CT) data, particularly in the field of orthodontics. However, this study focused on extracting cranial landmarks from large sets of cross-sectional CT slices using a method combining the two aforementioned approaches. The proposed method is centered mainly on template data sets, which were created using the actual contour patterns extracted from CT cases for each of the landmarks in consideration. Firstly, these templates were used to devise rules, which is characteristic of the knowledge-based method. Secondly, the same template sets were employed to perform template matching, related to the learning-based approach. The proposed method was tested on two landmarks, the dorsum sellae and the pterygoid plate, using CT cases of 5 subjects. The results indicate that, out of the 10 tests, the output images were within the expected range (desired accuracy) in 7 instances and the acceptable range (near accuracy) in 2 instances, thus verifying the effectiveness of the combined template-set-centric approach proposed in this study.

  18. Combined computational and experimental approach to improve the assessment of mitral regurgitation by echocardiography.

    Science.gov (United States)

    Sonntag, Simon J; Li, Wei; Becker, Michael; Kaestner, Wiebke; Büsen, Martin R; Marx, Nikolaus; Merhof, Dorit; Steinseifer, Ulrich

    2014-05-01

    Mitral regurgitation (MR) is one of the most frequent valvular heart diseases. To assess MR severity, color Doppler imaging (CDI) is the clinical standard. However, inadequate reliability, poor reproducibility and heavy user-dependence are known limitations. A novel approach combining computational and experimental methods is currently under development aiming to improve the quantification. A flow chamber for a circulatory flow loop was developed. Three different orifices were used to mimic variations of MR. The flow field was recorded simultaneously by a 2D Doppler ultrasound transducer and Particle Image Velocimetry (PIV). Computational Fluid Dynamics (CFD) simulations were conducted using the same geometry and boundary conditions. The resulting computed velocity field was used to simulate synthetic Doppler signals. Comparison between PIV and CFD shows a high level of agreement. The simulated CDI exhibits the same characteristics as the recorded color Doppler images. The feasibility of the proposed combination of experimental and computational methods for the investigation of MR is shown and the numerical methods are successfully validated against the experiments. Furthermore, it is discussed how the approach can be used in the long run as a platform to improve the assessment of MR quantification.

  19. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last 2 decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders of magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence data bases. While these are important applications they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  20. OT-Combiners Via Secure Computation

    DEFF Research Database (Denmark)

    Harnik, Danny; Ishai, Yuval; Kushilevitz, Eyal

    2008-01-01

    An OT-combiner implements a secure oblivious transfer (OT) protocol using oracle access to n OT-candidates of which at most t may be faulty. We introduce a new general approach for combining OTs by making a simple and modular use of protocols for secure computation. Specifically, we obtain an OT-combiner that tolerates a constant fraction of faulty candidates (t = Ω(n)). Previous OT-combiners required either ω(n) or poly(k) calls to the n candidates, where k is a security parameter, and produced only a single secure OT. Our approach allows us to strengthen the security and improve the efficiency of previous OT-combiners. In particular, we obtain the first constant-rate OT-combiners, in which the number of secure OTs being produced is a constant fraction of the total number of calls to the OT-candidates, while still tolerating a constant fraction of faulty candidates. We demonstrate the usefulness of the latter result by presenting several applications that are of independent interest.

  1. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    Science.gov (United States)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.

  2. Haemodynamic imaging of thoracic stent-grafts by computational fluid dynamics (CFD): presentation of a patient-specific method combining magnetic resonance imaging and numerical simulations.

    Science.gov (United States)

    Midulla, Marco; Moreno, Ramiro; Baali, Adil; Chau, Ming; Negre-Salvayre, Anne; Nicoud, Franck; Pruvo, Jean-Pierre; Haulon, Stephan; Rousseau, Hervé

    2012-10-01

    In the last decade, there has been increasing interest in finding imaging techniques able to provide functional vascular imaging of the thoracic aorta. The purpose of this paper is to present an imaging method combining magnetic resonance imaging (MRI) and computational fluid dynamics (CFD) to obtain a patient-specific haemodynamic analysis of patients treated by thoracic endovascular aortic repair (TEVAR). MRI was used to obtain boundary conditions. MR angiography (MRA) was followed by cardiac-gated cine sequences which covered the whole thoracic aorta. Phase contrast imaging provided the inlet and outlet profiles. A CFD mesh generator was used to model the arterial morphology, and wall movements were imposed according to the cine imaging. CFD runs were processed using the finite volume (FV) method assuming blood as a homogeneous Newtonian fluid. Twenty patients (14 men; mean age 62.2 years) with different aortic lesions were evaluated. Four-dimensional mapping of velocity and wall shear stress were obtained, depicting different patterns of flow (laminar, turbulent, stenosis-like) and local alterations of parietal stress in-stent and along the native aorta. A computational method using a combined approach with MRI appears feasible and seems promising to provide detailed functional analysis of the thoracic aorta after stent-graft implantation. • Functional vascular imaging of the thoracic aorta offers new diagnostic opportunities • CFD can model vascular haemodynamics for clinical aortic problems • Combining CFD with MRI offers a patient-specific method of aortic analysis • Haemodynamic analysis of stent-grafts could improve clinical management and follow-up.

  3. Study on Differential Algebraic Method of Aberrations up to Arbitrary Order for Combined Electromagnetic Focusing Systems

    Institute of Scientific and Technical Information of China (English)

    CHENG Min; TANG Tiantong; YAO Zhenhua; ZHU Jingping

    2001-01-01

    Differential algebraic method is a powerful technique in computer numerical analysis based on nonstandard analysis and formal series theory. It can compute arbitrary high order derivatives with excellent accuracy. The principle of the differential algebraic method is applied to calculate high order aberrations of combined electromagnetic focusing systems. As an example, third-order geometric aberration coefficients of an actual combined electromagnetic focusing system were calculated. The arbitrary high order aberrations are conveniently calculated by the differential algebraic method and the fifth-order aberration diagrams are given.
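The core idea, arithmetic on truncated Taylor coefficients yielding exact high-order derivatives, can be sketched in one variable. The paper works with multivariate series to obtain aberration coefficients; this one-variable reduction is an illustrative assumption.

```python
import math

ORDER = 6  # keep Taylor coefficients up to x**ORDER

def var(x0):
    """Taylor coefficients of the identity function expanded at x0."""
    return [x0, 1.0] + [0.0] * (ORDER - 1)

def mul(a, b):
    """Product of two truncated Taylor series (Cauchy product, truncated)."""
    c = [0.0] * (ORDER + 1)
    for i in range(ORDER + 1):
        for j in range(ORDER + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

def derivative(coeffs, k):
    """k-th derivative at the expansion point: k! times coefficient k."""
    return math.factorial(k) * coeffs[k]

# f(x) = x**3 expanded at x0 = 2: f(2) = 8, f''(2) = 12, f'''(x) = 6.
x = var(2.0)
f = mul(mul(x, x), x)
```

Because every operation propagates exact coefficients, no finite-difference error enters, which is why the method delivers "excellent accuracy" for high orders.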

  4. A comparison of two analytical evaluation methods for educational computer games for young children

    NARCIS (Netherlands)

    Bekker, M.M.; Baauw, E.; Barendregt, W.

    2008-01-01

    In this paper we describe a comparison of two analytical methods for educational computer games for young children. The methods compared in the study are the Structured Expert Evaluation Method (SEEM) and the Combined Heuristic Evaluation (HE) (based on a combination of Nielsen’s HE and the

  5. A Combined Thermodynamics & Computational Method to Assess Lithium Composition in Anode and Cathode of Lithium Ion Batteries

    International Nuclear Information System (INIS)

    Zhang, Wenyu; Jiang, Lianlian; Van Durmen, Pauline; Saadat, Somaye; Yazami, Rachid

    2016-01-01

    With the aim of addressing the open question of accurately determining the lithium composition in the anode and cathode at a defined state of charge (SOC) of lithium ion batteries (LIB), we developed a method combining electrochemical thermodynamic measurements (ETM) and a computational data fitting protocol. It is common knowledge that in a lithium ion battery the SOC of the anode and cathode differ from the SOC of the full-cell. Differences are in large part due to irreversible lithium losses within the cell and to electrode mass unbalance. This implies that the lithium composition range in the anode and in the cathode during a full charge and discharge cycle in the full-cell is different from the composition range achieved in lithium half-cells of the anode and cathode over their respective full SOC ranges. To the authors' knowledge there is no unequivocal and practical method to determine the actual lithium composition of electrodes in a LIB, hence their SOC. Yet, accurate lithium composition assessment is fundamental not only for understanding the physics of electrodes but also for optimizing cell performance, particularly energy density and cycle life.

  6. Relative conservatisms of combination methods used in response spectrum analyses of nuclear piping systems

    International Nuclear Information System (INIS)

    Gupta, S.; Kustu, O.; Jhaveri, D.P.; Blume, J.A.

    1983-01-01

    The paper presents the conclusions of a comprehensive study that investigated the relative conservatisms represented by various combination techniques. Two approaches were taken for the study, producing mutually consistent results. In the first, 20 representative nuclear piping systems were systematically analyzed using the response spectrum method. The total response was obtained using nine different combination methods. One procedure, using the SRSS method for combining spatial components of response and the 10% method for combining the responses of different modes (which is currently acceptable to the U.S. NRC), was the standard for comparison. Responses computed by the other methods were normalized to this standard method. These response ratios were then used to develop cumulative frequency-distribution curves, which were used to establish the relative conservatism of the methods in a probabilistic sense. In the second approach, 30 single-degree-of-freedom (SDOF) systems that represent different modes of hypothetical piping systems and have natural frequencies varying from 1 Hz to 30 Hz, were analyzed for 276 sets of three-component recorded ground motion. A set of hypothetical systems assuming a variety of modes and frequency ranges was developed. The responses of these systems were computed from the responses of the SDOF systems by combining the spatial response components by algebraic summation and the individual mode responses by the Navy method, or combining both spatial and modal response components using the SRSS method. Probability density functions and cumulative distribution functions were developed for the ratio of the responses obtained by both methods. (orig./HP)
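Two of the modal combination rules central to the comparison can be sketched directly: SRSS over peak modal responses, and the 10% method, which adds absolute cross terms for modes whose frequencies lie within 10% of each other. Responses and frequencies below are invented for illustration.

```python
# Square Root of the Sum of Squares over peak modal responses.
def srss(responses):
    return sum(r * r for r in responses) ** 0.5

# 10% method: SRSS plus 2|Ri*Rj| cross terms for closely spaced modes.
def ten_percent(responses, freqs):
    total = sum(r * r for r in responses)
    for i in range(len(responses)):
        for j in range(i + 1, len(responses)):
            if abs(freqs[j] - freqs[i]) <= 0.1 * freqs[i]:  # closely spaced
                total += 2.0 * abs(responses[i] * responses[j])
    return total ** 0.5

R = [3.0, 4.0, 1.0]   # peak modal responses (invented units)
F = [5.0, 5.2, 12.0]  # modal frequencies in Hz; modes 1 and 2 are close
```

Because the cross terms are always added with absolute value, the 10% method can never predict less than SRSS, which is one source of the relative conservatism the study quantifies.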

  7. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    Science.gov (United States)

    Jia, Meng; Fan, Yang-Yu; Tian, Wei-Jian

    2011-03-01

    Attempting to find a fast method for computing the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when they are computed as the trajectories extend. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory and combined with the improved DHT computing method, this paper reports a new fast method for computing the DHT, which increases the DHT computing speed without decreasing its accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 60872159).

  8. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    International Nuclear Information System (INIS)

    Jia Meng; Fan Yang-Yu; Tian Wei-Jian

    2011-01-01

    Attempting to find a fast method for computing the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when they are computed as the trajectories extend. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory and combined with the improved DHT computing method, this paper reports a new fast method for computing the DHT, which increases the DHT computing speed without decreasing its accuracy. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  9. Recent Development in Rigorous Computational Methods in Dynamical Systems

    OpenAIRE

    Arai, Zin; Kokubu, Hiroshi; Pilarczyk, Paweł

    2009-01-01

    We highlight selected results of recent developments in the area of rigorous computations which use interval arithmetic to analyse dynamical systems. We describe general ideas and selected details of different approaches, and provide specific sample applications to illustrate the effectiveness of these methods. The emphasis is put on a topological approach, which combined with rigorous calculations provides a broad range of new methods that yield mathematically rel...

  10. Combining Archetypes, Ontologies and Formalization Enables Automated Computation of Quality Indicators.

    Science.gov (United States)

    Legaz-García, María Del Carmen; Dentler, Kathrin; Fernández-Breis, Jesualdo Tomás; Cornet, Ronald

    2017-01-01

    ArchMS is a framework that represents clinical information and knowledge using ontologies in OWL, which facilitates semantic interoperability and thereby the exploitation and secondary use of clinical data. However, it does not yet support the automated assessment of quality of care. CLIF is a stepwise method to formalize quality indicators. The method has been implemented in the CLIF tool which supports its users in generating computable queries based on a patient data model which can be based on archetypes. To enable the automated computation of quality indicators using ontologies and archetypes, we tested whether ArchMS and the CLIF tool can be integrated. We successfully automated the process of generating SPARQL queries from quality indicators that have been formalized with CLIF and integrated them into ArchMS. Hence, ontologies and archetypes can be combined for the execution of formalized quality indicators.

  11. Using AMDD method for Database Design in Mobile Cloud Computing Systems

    OpenAIRE

    Silviu Claudiu POPA; Mihai-Constantin AVORNICULUI; Vasile Paul BRESFELEAN

    2013-01-01

    The development of wireless telecommunications technologies gave birth to new kinds of e-commerce, the so-called Mobile e-Commerce or m-Commerce. Mobile Cloud Computing (MCC) represents a new IT research area that combines mobile computing and cloud computing techniques. Behind a cloud mobile commerce system there is a database containing all the information necessary for transactions. By means of the Agile Model Driven Development (AMDD) method, we are able to achieve many benefits that smoo...

  12. Expert judgement combination using moment methods

    International Nuclear Information System (INIS)

    Wisse, Bram; Bedford, Tim; Quigley, John

    2008-01-01

    Moment methods have been employed in decision analysis, partly to avoid the computational burden that decision models involving continuous probability distributions can suffer from. In the Bayes linear (BL) methodology, prior judgements about uncertain quantities are specified using expectation (rather than probability) as the fundamental notion. BL provides a strong foundation for moment methods, rooted in the work of De Finetti and Goldstein. The main objective of this paper is to discuss in what way expert assessments of moments can be combined, in a non-Bayesian way, to construct a prior assessment. We show that the linear pool can be justified in an analogous but technically different way to linear pools for probability assessments, and that this linear pool has a very convenient property: a linear pool of experts' assessments of moments is coherent if each of the experts has given coherent assessments. To determine the weights of the linear pool we give a method of performance-based weighting analogous to Cooke's classical model and explore its properties. Finally, we compare its performance with the classical model on data gathered in applications of the classical model.
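
    The coherence property described in this record can be illustrated with a minimal sketch: pooling experts' first two raw moments linearly. This is not the paper's weighting scheme; the experts' assessments and weights below are invented.

```python
def pool_moments(assessments, weights):
    """Linear opinion pool over experts' (mean, variance) assessments.
    Pooling acts on the raw moments m1 and m2 = variance + mean**2;
    the pooled pair stays coherent (variance >= 0) whenever each
    expert's own assessment is coherent."""
    m1 = sum(w * m for w, (m, _) in zip(weights, assessments))
    m2 = sum(w * (v + m * m) for w, (m, v) in zip(weights, assessments))
    return m1, m2 - m1 * m1

# two invented experts with equal weights
pooled_mean, pooled_var = pool_moments([(10.0, 4.0), (14.0, 9.0)], [0.5, 0.5])
```

    Note that the pooled variance (10.5 here) exceeds the weighted average of the individual variances (6.5): disagreement between the experts' means shows up as extra spread in the pool.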

  13. A fast combination method in DSmT and its application to recommender system.

    Directory of Open Access Journals (Sweden)

    Yilin Dong

    Full Text Available In many applications involving epistemic uncertainties usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) to subjective probabilities (called Bayesian BBAs). This necessity occurs if one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or if one needs to make a decision in the decision making problems. In this paper, we present a new fast combination method, called modified rigid coarsening (MRC), to obtain the final Bayesian BBAs based on hierarchical decomposition (coarsening) of the frame of discernment. Regarding this method, focal elements with probabilities are coarsened efficiently to reduce computational complexity in the process of combination by using a disagreement vector and a simple dichotomous approach. In order to prove the practicality of our approach, this new approach is applied to combine users' soft preferences in recommender systems (RSs). Additionally, in order to make a comprehensive performance comparison, the proportional conflict redistribution rule #6 (PCR6) is regarded as a baseline in a range of experiments. According to the results of experiments, MRC is more effective in accuracy of recommendations compared to the original Rigid Coarsening (RC) method and comparable in computational time.

  14. A fast combination method in DSmT and its application to recommender system.

    Science.gov (United States)

    Dong, Yilin; Li, Xinde; Liu, Yihai

    2018-01-01

    In many applications involving epistemic uncertainties usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) to subjective probabilities (called Bayesian BBAs). This necessity occurs if one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or if one needs to make a decision in the decision making problems. In this paper, we present a new fast combination method, called modified rigid coarsening (MRC), to obtain the final Bayesian BBAs based on hierarchical decomposition (coarsening) of the frame of discernment. Regarding this method, focal elements with probabilities are coarsened efficiently to reduce computational complexity in the process of combination by using disagreement vector and a simple dichotomous approach. In order to prove the practicality of our approach, this new approach is applied to combine users' soft preferences in recommender systems (RSs). Additionally, in order to make a comprehensive performance comparison, the proportional conflict redistribution rule #6 (PCR6) is regarded as a baseline in a range of experiments. According to the results of experiments, MRC is more effective in accuracy of recommendations compared to original Rigid Coarsening (RC) method and comparable in computational time.
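
    The abstract does not spell out the MRC algorithm itself. As a hedged illustration of the underlying task (turning a non-Bayesian BBA into a subjective probability), here is the classical pignistic transformation, which spreads each focal element's mass evenly over its singletons; the frame and masses are invented.

```python
def pignistic(bba):
    """Pignistic transformation BetP: distribute the mass of each focal
    element equally over the singletons it contains.  `bba` maps frozenset
    focal elements to masses summing to 1 (no mass on the empty set)."""
    betp = {}
    for focal, mass in bba.items():
        share = mass / len(focal)
        for element in focal:
            betp[element] = betp.get(element, 0.0) + share
    return betp

# invented frame {a, b, c} with mass on one compound focal element
bba = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.2, frozenset({'a', 'b', 'c'}): 0.3}
prob = pignistic(bba)  # a Bayesian BBA: all mass now sits on singletons
```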

  15. Computational Quantum Mechanics for Materials Engineers The EMTO Method and Applications

    CERN Document Server

    Vitos, L

    2007-01-01

    Traditionally, new materials have been developed by empirically correlating their chemical composition, and the manufacturing processes used to form them, with their properties. Until recently, metallurgists have not used quantum theory for practical purposes. However, the development of modern density functional methods means that today, computational quantum mechanics can help engineers to identify and develop novel materials. Computational Quantum Mechanics for Materials Engineers describes new approaches to the modelling of disordered alloys that combine the most efficient quantum-level th

  16. Experimental substantiation of combined methods for designing processes for the commercial preparation of gas at gas condensate fields

    Energy Technology Data Exchange (ETDEWEB)

    Gurevich, G R; Karlinskii, E D; Posypkina, T V

    1977-04-01

    An analysis is made of the possibility of using two analytical methods for studying the vapor-liquid equilibrium of hydrocarbon mixtures in designing natural gas separation and condensate stabilization: the Chao-Seader method and computation by equilibrium constants. A combined computational method is proposed for describing a unified process of natural gas separation and condensate stabilization. The method of preparing the original data for the computation of the separation and stabilization processes can be significantly simplified. 10 references, 1 table.
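
    Computation by equilibrium constants, as mentioned in this record, is commonly organized around a single-stage flash balance. A minimal sketch (the Rachford-Rice form with invented feed composition and K-values, not the authors' combined method) looks like this:

```python
def rachford_rice(z, K, tol=1e-10):
    """Solve for the vapor fraction V of a single-stage flash by bisection.
    z: feed mole fractions, K: equilibrium constants K_i = y_i / x_i.
    Root of f(V) = sum z_i (K_i - 1) / (1 + V (K_i - 1)) = 0 on (0, 1)."""
    def f(V):
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:  # root lies in the lower half
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# invented binary feed: equimolar, one light (K=2) and one heavy (K=0.5) component
vapor_fraction = rachford_rice([0.5, 0.5], [2.0, 0.5])
```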

  17. On the complexity of a combined homotopy interior method for convex programming

    Science.gov (United States)

    Yu, Bo; Xu, Qing; Feng, Guochen

    2007-03-01

    In [G.C. Feng, Z.H. Lin, B. Yu, Existence of an interior pathway to a Karush-Kuhn-Tucker point of a nonconvex programming problem, Nonlinear Anal. 32 (1998) 761-768; G.C. Feng, B. Yu, Combined homotopy interior point method for nonlinear programming problems, in: H. Fujita, M. Yamaguti (Eds.), Advances in Numerical Mathematics, Proceedings of the Second Japan-China Seminar on Numerical Mathematics, Lecture Notes in Numerical and Applied Analysis, vol. 14, Kinokuniya, Tokyo, 1995, pp. 9-16; Z.H. Lin, B. Yu, G.C. Feng, A combined homotopy interior point method for convex programming problem, Appl. Math. Comput. 84 (1997) 193-211.], a combined homotopy was constructed for solving non-convex programming and convex programming with weaker conditions, without assuming the logarithmic barrier function to be strictly convex and the solution set to be bounded. It was proven that a smooth interior path from an interior point of the feasible set to a K-K-T point of the problem exists. This shows that combined homotopy interior point methods can solve problems that commonly used interior point methods cannot solve. However, so far, there is no result on its complexity, even for linear programming. The main difficulty is that the objective function is not monotonically decreasing on the combined homotopy path. In this paper, by taking a piecewise technique, under commonly used conditions, polynomiality of a combined homotopy interior point method is given for convex nonlinear programming.

  18. Studies of the Raman Spectra of Cyclic and Acyclic Molecules: Combination and Prediction Spectrum Methods

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taijin; Assary, Rajeev S.; Marshall, Christopher L.; Gosztola, David J.; Curtiss, Larry A.; Stair, Peter C.

    2012-04-02

    A combination of Raman spectroscopy and density functional methods was employed to investigate the spectral features of selected molecules: furfural, 5-hydroxymethyl furfural (HMF), methanol, acetone, acetic acid, and levulinic acid. The computed spectra and measured spectra are in excellent agreement, consistent with previous studies. Using the combination and prediction spectrum method (CPSM), we were able to predict the important spectral features of two platform chemicals, HMF and levulinic acid. The results have shown that CPSM is a useful alternative method for predicting vibrational spectra of complex molecules in the biomass transformation process.

  19. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.

  20. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    Science.gov (United States)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify images as cloud or non-cloud. But when clouds are thin and small, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed; with this model, the underlying surface information of remote sensing images can be removed, making cloud detection more accurate. Firstly, the automatic cloud detection program uses the linear combination model to split the cloud information and surface information in semi-transparent cloud images, then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier is introduced to combine the different features into a single cloud classifier. AdaBoost can select the most effective features from many candidate features, so calculation time is largely reduced. Finally, we selected a cloud detection method based on tree structure and a multiple-feature detection method using an SVM classifier to compare with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
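
    The linear combination model described here treats each thin-cloud pixel as a mix of cloud and surface signal, I = alpha*C + (1 - alpha)*S. A per-pixel sketch of the inversion follows; it assumes the cloud opacity alpha and cloud radiance are already estimated, which is the hard part the paper addresses, and the values used are invented.

```python
def remove_thin_cloud(pixel, alpha, cloud_radiance):
    """Invert I = alpha*C + (1 - alpha)*S to recover the surface value S
    for one pixel, given an estimated opacity alpha and cloud radiance C."""
    if alpha >= 1.0:
        raise ValueError("fully opaque pixel: surface not recoverable")
    return (pixel - alpha * cloud_radiance) / (1.0 - alpha)

# invented values: a half-transparent cloud (alpha=0.5) of unit radiance
surface = remove_thin_cloud(0.6, 0.5, 1.0)
```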

  1. Ensemble approach combining multiple methods improves human transcription start site prediction

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-11-30

    Abstract Background The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets. Results We demonstrate the heterogeneity of current prediction sets, and take advantage of this heterogeneity to construct a two-level classifier ('Profisi Ensemble') using predictions from 7 programs, along with 2 other data sources. Support vector machines using 'full' and 'reduced' data sets are combined in an either/or approach. We achieve a 14% increase in performance over the current state-of-the-art, as benchmarked by a third-party tool. Conclusions Supervised learning methods are a useful way to combine predictions from diverse sources.

  2. Microscopes and computers combined for analysis of chromosomes

    Science.gov (United States)

    Butler, J. W.; Butler, M. K.; Stroud, A. N.

    1969-01-01

    Scanning machine CHLOE, developed for photographic use, is combined with a digital computer to obtain quantitative and statistically significant data on chromosome shapes, distribution, density, and pairing. CHLOE permits data acquisition about a chromosome complement to be obtained two times faster than by manual pairing.

  3. Pair Programming as a Modern Method of Teaching Computer Science

    Directory of Open Access Journals (Sweden)

    Irena Nančovska Šerbec

    2008-10-01

    Full Text Available At the Faculty of Education, University of Ljubljana, we educate future computer science teachers. Besides didactic, pedagogical, mathematical and other interdisciplinary knowledge, students gain the knowledge and skills of programming that are crucial for computer science teachers. For all courses, the main emphasis is the absorption of professional competences related to the teaching profession and the programming profile. The latter are selected according to the well-known document, the ACM Computing Curricula. The professional knowledge is therefore associated and combined with teaching knowledge and skills. In the paper we present how to achieve competences related to programming by using different didactic models (semiotic ladder, cognitive objectives taxonomy, problem solving) and the modern teaching method 'pair programming'. Pair programming differs from standard methods (individual work, seminars, projects, etc.). It belongs to extreme programming as a discipline of software development and is known to have positive effects on teaching a first programming language. We have experimentally observed pair programming in the introductory programming course. The paper presents and analyzes the results of using this method: the aspects of satisfaction during programming and the level of gained knowledge. The results are in general positive and demonstrate the promising usage of this teaching method.

  4. A rapid method for the computation of equilibrium chemical composition of air to 15000 K

    Science.gov (United States)

    Prabhu, Ramadas K.; Erickson, Wayne D.

    1988-01-01

    A rapid computational method has been developed to determine the chemical composition of equilibrium air to 15000 K. Eleven chemically reacting species, i.e., O2, N2, O, NO, N, NO+, e-, N+, O+, Ar, and Ar+ are included. The method involves combining algebraically seven nonlinear equilibrium equations and four linear elemental mass balance and charge neutrality equations. Computational speeds for determining the equilibrium chemical composition are significantly faster than the often used free energy minimization procedure. Data are also included from which the thermodynamic properties of air can be computed. A listing of the computer program together with a set of sample results are included.
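
    The algebraic-combination idea behind this record (fold the elemental mass balance into the equilibrium relation before solving) can be shown on a one-reaction toy case, O2 <-> 2O, rather than the paper's eleven-species air system; the Kp and pressure values used are invented.

```python
from math import sqrt

def dissociation_fraction(Kp, P):
    """Degree of dissociation a for the toy equilibrium O2 <-> 2O at total
    pressure P.  Substituting the mole-balance expressions
    x_O2 = (1 - a)/(1 + a) and x_O = 2a/(1 + a) into
    Kp = x_O**2 * P / x_O2 gives Kp = 4*a**2*P / (1 - a**2),
    which inverts in closed form instead of requiring iteration."""
    return sqrt(Kp / (4.0 * P + Kp))

a = dissociation_fraction(Kp=4.0, P=1.0)  # analytically sqrt(4/8)
```

    Combining the balance equations algebraically before solving, as here, is what makes the paper's eleven-species procedure much faster than free-energy minimization.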

  5. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    Full Text Available The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  6. Computer aided system for parametric design of combination die

    Science.gov (United States)

    Naranje, Vishal G.; Hussein, H. M. A.; Kumar, S.

    2017-09-01

    In this paper, a computer aided system for the parametric design of combination dies is presented. The system is developed using the knowledge-based system technique of artificial intelligence and is capable of designing combination dies for the production of sheet metal parts involving punching and cupping operations. The system is coded in Visual Basic and interfaced with AutoCAD software. Its low cost will help die designers in small and medium scale sheet metal industries design combination dies for similar types of products, and it can reduce the design time and effort required of die designers.

  7. Computing Nash equilibria through computational intelligence methods

    Science.gov (United States)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, to compute Nash equilibria of finite strategic games as global minima of a real-valued, nonnegative function. An issue of particular interest is detecting more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
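
    The reformulation used here, Nash equilibria as global minima of a nonnegative function, can be sketched for matching pennies. A plain grid search stands in for the paper's evolutionary optimizers (CMA-ES, PSO, DE); the regret-based objective below is one common choice, not necessarily the paper's exact function.

```python
def regret(p, q):
    """Nonnegative function whose global minima (value 0) are the Nash
    equilibria of matching pennies, with mixed strategies (p, 1-p), (q, 1-q)."""
    u1, u2 = 2 * q - 1, 1 - 2 * q          # row player's pure-strategy payoffs
    u = p * u1 + (1 - p) * u2              # row player's mixed payoff
    v1, v2 = 1 - 2 * p, 2 * p - 1          # column player's pure-strategy payoffs
    v = q * v1 + (1 - q) * v2              # column player's mixed payoff
    return (max(0.0, u1 - u) ** 2 + max(0.0, u2 - u) ** 2
            + max(0.0, v1 - v) ** 2 + max(0.0, v2 - v) ** 2)

# crude grid search over both mixing probabilities
grid = [i / 100 for i in range(101)]
p_star, q_star = min(((p, q) for p in grid for q in grid), key=lambda s: regret(*s))
```

    For matching pennies the unique minimum lands at the known mixed equilibrium p = q = 0.5, where no player can gain by deviating to a pure strategy.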

  8. Moving finite elements: A continuously adaptive method for computational fluid dynamics

    International Nuclear Information System (INIS)

    Glasser, A.H.; Miller, K.; Carlson, N.

    1991-01-01

    Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. Moving Finite Elements is a moving node adaptive grid method which has a tendency to pack the grid finely in regions where it is most needed at each time and to leave it coarse elsewhere. It does so in a manner which is simple and automatic, and does not require a large amount of human ingenuity to apply it to each particular problem. At the same time, it often allows the time step to be large enough to advance a moving shock by many shock thicknesses in a single time step, moving the grid smoothly with the solution and minimizing the number of time steps required for the whole problem. For 2D problems (two spatial variables) the grid is composed of irregularly shaped and irregularly connected triangles which are very flexible in their ability to adapt to the evolving solution. While other adaptive grid methods have been developed which share some of these desirable properties, this is the only method which combines them all. In many cases, the method can save orders of magnitude of computing time, equivalent to several generations of advancing computer hardware

  9. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
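
    The gPC projections such methods build on reduce to weighted sums once a quadrature rule for the input density is fixed. The sketch below (not from the book) uses the 3-point Gauss-Hermite rule for a standard normal input, which is exact for polynomial integrands up to degree 5.

```python
from math import sqrt

# 3-point Gauss-Hermite rule for the standard normal weight
NODES = [-sqrt(3.0), 0.0, sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def normal_expectation(f):
    """E[f(Z)] for Z ~ N(0,1), approximated by the quadrature rule above."""
    return sum(w * f(x) for w, x in zip(WEIGHTS, NODES))

def gpc_coefficient(f, he):
    """Projection <f, He> / <He, He> onto one (unnormalised) Hermite polynomial,
    i.e. one coefficient of a gPC expansion of f."""
    num = normal_expectation(lambda x: f(x) * he(x))
    den = normal_expectation(lambda x: he(x) ** 2)
    return num / den

# coefficient of He1(x) = x in the expansion of f(x) = x**2 + 2x
c1 = gpc_coefficient(lambda x: x ** 2 + 2 * x, lambda x: x)
```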

  10. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    Science.gov (United States)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention, although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that together yield results superior to using either approach individually. The current optical recognition algorithms performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. Accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.

  11. Fluid-Induced Vibration Analysis for Reactor Internals Using Computational FSI Method

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Jong Sung; Yi, Kun Woo; Sung, Ki Kwang; Im, In Young; Choi, Taek Sang [KEPCO E and C, Daejeon (Korea, Republic of)

    2013-10-15

    This paper introduces a fluid-induced vibration (FIV) analysis method which calculates the response of the Reactor Vessel Internals (RVI) to both deterministic and random loads at once and utilizes a more realistic pressure distribution obtained with the computational Fluid Structure Interaction (FSI) method. The FIV analysis for the RVI was carried out using the computational FSI method, which calculates the response to deterministic and random turbulence loads at once and offers a simple, integrated way to obtain the structural dynamic responses of reactor internals to various flow-induced loads. Because the analysis omitted the bypass flow region and the Inner Barrel Assembly (IBA) due to the limitation of computer resources, it will be necessary to find an effective way to consider all regions in the reactor vessel in future FIV analyses. Reactor coolant flow makes the RVI vibrate and may affect their structural integrity. U.S. NRC Regulatory Guide 1.20 requires the Comprehensive Vibration Assessment Program (CVAP) to verify the structural integrity of the RVI for FIV. The hydraulic forces on the RVI of OPR1000 and APR1400 were computed from hydraulic formulas and the CVAP measurements in Palo Verde Unit 1 and Yonggwang Unit 4 for the structural vibration analyses. In this method, the hydraulic forces were divided into deterministic and random turbulence loads and used as the excitation forces of separate structural analyses. These forces were applied to the finite element model, and the responses to them were combined into the resultant stresses.

  12. Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics

    International Nuclear Information System (INIS)

    Luo, Hong; Xia, Yidong; Nourgaliev, Robert

    2011-01-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness. (author)

  13. Computational Combination of the Optical Properties of Fenestration Layers at High Directional Resolution

    Directory of Open Access Journals (Sweden)

    Lars Oliver Grobe

    2017-03-01

    Full Text Available Complex fenestration systems typically comprise co-planar clear and scattering layers. As there are many ways to combine layers in fenestration systems, a common approach in building simulation is to store optical properties separately for each layer. System properties are then computed employing a fast matrix formalism, often based on a directional basis devised by J. H. Klems comprising 145 incident and 145 outgoing directions. While this low directional resolution is found sufficient to predict illuminance and solar gains, it is too coarse to replicate the effects of directionality in the generation of imagery. For increased accuracy, a modification of the matrix formalism is proposed: the tensor-tree format of RADIANCE, employing an algorithm that subdivides the hemisphere at variable resolution, replaces the directional basis. The utilization of the tensor-tree with interfaces to simulation software allows sharing and re-use of data. The light scattering properties of two exemplary fenestration systems, as computed employing the matrix formalism at variable resolution, show good accordance with the results of ray-tracing, while computation times are reduced to 0.4% to 2.5% of those for ray-tracing through co-planar layers. Imagery computed employing the method illustrates the effect of directional resolution. The method is expected to foster research in the field of daylighting, as well as applications in planning and design.
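
    The matrix formalism this record describes combines per-layer transmission and reflection data. Collapsed to scalars (a single direction pair), the interreflection series has a familiar closed form; in the real computation each quantity is a matrix over the Klems or tensor-tree basis. The layer data below are invented.

```python
def combine_two_layers(t1, r1_back, t2, r2_front):
    """Total transmittance of two co-planar layers: light crosses layer 1,
    then bounces between layer 1's back side and layer 2's front side, and
    the geometric series 1 + r + r**2 + ... of interreflections sums to
    t1 * t2 / (1 - r1_back * r2_front).  In the matrix formalism the same
    expression becomes T2 @ inv(I - R1b @ R2f) @ T1 over a directional basis."""
    return t1 * t2 / (1.0 - r1_back * r2_front)

# invented data: two identical, weakly reflecting glazing layers
t_system = combine_two_layers(0.8, 0.1, 0.8, 0.1)
```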

  14. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
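
    The simulation design described in this record is easy to reproduce in miniature. The sketch below uses invented parameter values and a simplified scoring scheme, not the study's actual program.

```python
import random

def simulate_interval_sampling(period=600.0, interval=10.0, n_events=20,
                               event_dur=3.0, seed=1):
    """Score momentary time sampling (MTS), partial-interval recording (PIR)
    and whole-interval recording (WIR) against the true fraction of time
    that randomly placed target events are occurring."""
    rng = random.Random(seed)
    events = [(s, s + event_dur)
              for s in (rng.uniform(0, period - event_dur) for _ in range(n_events))]

    def occurring(t):
        return any(a <= t < b for a, b in events)

    n_int = int(period / interval)
    mts = pir = wir = 0
    for k in range(n_int):
        start = k * interval
        probes = [occurring(start + i * interval / 100) for i in range(100)]
        mts += occurring(start + interval * 0.999)  # sample near the interval's end
        pir += any(probes)    # scored if the behavior occurs at all
        wir += all(probes)    # scored only if it never stops
    true_frac = sum(occurring(i * period / 10000) for i in range(10000)) / 10000
    return {'true': true_frac, 'MTS': mts / n_int,
            'PIR': pir / n_int, 'WIR': wir / n_int}

estimates = simulate_interval_sampling()  # PIR tends to overestimate, WIR to underestimate
```

    Sweeping the interval and event durations in such a loop reproduces the kind of error tables the study reports for choosing among the three sampling methods.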

  15. Non-coding RNA detection methods combined to improve usability, reproducibility and precision

    Directory of Open Access Journals (Sweden)

    Kreikemeyer Bernd

    2010-09-01

    Full Text Available Abstract Background Non-coding RNAs are gaining more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, combining and comparing existing tools is difficult. Also, for genomic scans they often need to be incorporated into detection workflows using custom scripts, which decreases transparency and reproducibility. Results We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods integrated by our framework, we determined four highly probable candidates. We verified all four candidates experimentally using RT-PCR. Conclusions We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL), version 3, at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.

  16. Computational methods for fluid dynamics

    CERN Document Server

    Ferziger, Joel H

    2002-01-01

    In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, like direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, and free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. The book also contains a great deal of practical advice for code developers and users; it is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature user-friendly demo-version of a commercial CFD software has been added, which ca...

  17. Methods for computing color anaglyphs

    Science.gov (United States)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computational technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares problem for each pixel in a stereo pair and is based on minimizing color distances in the CIE L*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs, including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
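    The simpler Euclidean-metric approximation mentioned above reduces to a linear least-squares problem per pixel and can be sketched briefly. The matrices A_left and A_right are hypothetical stand-ins for the display-primaries-times-filter-transmission products; the paper's own method minimizes distance in CIE L*a*b* and needs a nonlinear solver instead.

```python
import numpy as np

def anaglyph_pixel(left_rgb, right_rgb, A_left, A_right):
    """Least-squares anaglyph colour for one stereo pixel pair.

    A_left / A_right map the anaglyph RGB to the colour perceived
    through the left / right filter (hypothetical 3x3 matrices).
    Minimizing Euclidean distance to both desired percepts is a single
    linear least-squares solve.
    """
    A = np.vstack([A_left, A_right])           # stacked 6x3 system
    b = np.concatenate([left_rgb, right_rgb])  # desired percepts
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(p, 0.0, 1.0)                # displayable RGB
```

    With an idealized red filter on the left and a cyan filter on the right, the solve simply keeps the left eye's red channel and the right eye's green and blue channels.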

  18. Work in process level definition: a method based on computer simulation and Electre TRI

    Directory of Open Access Journals (Sweden)

    Isaac Pergher

    2014-09-01

    Full Text Available This paper proposes a method for defining the levels of work in process (WIP) in productive environments managed by constant work in process (CONWIP) policies. The proposed method combines Computer Simulation and Electre TRI to support estimation of an adequate WIP level and is presented in eighteen steps. The paper also presents an application example, performed at a metalworking company. The research method is based on Computer Simulation, supported by quantitative data analysis. The main contribution of the paper is its provision of a structured way to define inventories according to demand. With this method, the authors hope to contribute to the establishment of better capacity plans in production environments.
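    Candidate WIP levels evaluated by such a procedure can be sanity-checked against the standard best-case and practical-worst-case throughput bounds for a CONWIP line (the Factory Physics bounds). This sketch is an assumption-laden aid for screening candidates, not the paper's eighteen-step simulation and Electre TRI procedure.

```python
def throughput_bounds(wip, rb, t0):
    """Throughput bounds for a CONWIP line at a given WIP level.

    rb : bottleneck rate (jobs per unit time)
    t0 : raw process time (total processing time of one job)
    Returns (best-case, practical-worst-case) throughput; the simulated
    throughput of a real line should fall between these two values.
    """
    w0 = rb * t0                        # critical WIP level
    best = min(wip / t0, rb)            # zero-variability line
    pwc = wip * rb / (w0 + wip - 1)     # maximum-variability benchmark
    return best, pwc
```

    For example, with a bottleneck rate of 2 jobs/h and a raw process time of 5 h, the critical WIP is 10 jobs; at that level the best case reaches the full bottleneck rate while the practical worst case still falls short of it.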

  19. Automated Generation of User Guidance by Combining Computation and Deduction

    Directory of Open Access Journals (Sweden)

    Walther Neuper

    2012-02-01

    Full Text Available Herewith, a fairly old concept is published for the first time and named "Lucas Interpretation". This has been implemented in a prototype, which has proved useful in educational practice and has gained academic relevance with an emerging generation of educational mathematics assistants (EMA) based on Computer Theorem Proving (CTP). Automated Theorem Proving (ATP), i.e. deduction, is the most reliable technology used to check user input. However, ATP is inherently weak in automatically generating solutions for arbitrary problems in applied mathematics. This weakness is crucial for EMAs: when ATP flags user input as incorrect and the learner gets stuck, the system should be able to suggest possible next steps. The key idea of Lucas Interpretation is to compute the steps of a calculation following a program written in a novel CTP-based programming language, i.e. computation provides the next steps. User guidance is generated by combining deduction and computation: the latter is performed by a specific language interpreter, which works like a debugger and hands over control to the learner at breakpoints, i.e. tactics generating the steps of calculation. The interpreter also builds up logical contexts providing ATP with the data required for checking user input, thus combining computation and deduction. The paper describes the concepts underlying Lucas Interpretation so that open questions can adequately be addressed, and prerequisites for further work are provided.

  20. Combining discrete equations method and upwind downwind-controlled splitting for non-reacting and reacting two-fluid computations

    International Nuclear Information System (INIS)

    Tang, K.

    2012-01-01

    When numerically investigating multiphase phenomena during severe accidents in a reactor system, characteristic lengths of the multi-fluid zone (non-reactive and reactive) are found to be much smaller than the volume of the reactor containment, which makes direct modeling of the configuration impractical. Alternatively, we propose to treat the physical multiphase mixture zone as an infinitely thin interface. The reactive Riemann solver is then inserted into the Reactive Discrete Equations Method (RDEM) to compute high-speed combustion waves represented by discontinuous interfaces. An anti-diffusive approach is also coupled with RDEM to accurately simulate reactive interfaces. Increased robustness and efficiency when computing both multiphase interfaces and reacting flows are achieved thanks to an original upwind downwind-controlled splitting method (UDCS). UDCS is capable of accurately solving interfaces on multi-dimensional unstructured meshes, including reacting fronts for both deflagration and detonation configurations. (author)

  1. Numerical computer methods part D

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The aim of this volume is to brief researchers on the importance of data analysis in enzymology and of the modern methods that have developed concomitantly with computer hardware. It also aims to help researchers validate their computer programs with real and synthetic data to ascertain that the results produced are what they expect. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.

  2. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. The book is designed to extend the existing literature to the latest developments in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, the spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency- and time-domain integral equations, and statistical methods in bio-electromagnetics.

  3. Heat Transfer Computations of Internal Duct Flows With Combined Hydraulic and Thermal Developing Length

    Science.gov (United States)

    Wang, C. R.; Towne, C. E.; Hippensteele, S. A.; Poinsatte, P. E.

    1997-01-01

    This study investigated the Navier-Stokes computation of the surface heat transfer coefficients of a transition duct flow. A transition duct, from an axisymmetric cross section to a non-axisymmetric cross section, is usually used to connect the turbine exit to the nozzle. As the gas turbine inlet temperature increases, the transition duct is subjected to the high temperature at the gas turbine exit. The transition duct flow has combined development of hydraulic and thermal entry length. The design of the transition duct requires accurate surface heat transfer coefficients. The Navier-Stokes computational method could be used to predict the surface heat transfer coefficients of a transition duct flow. The Proteus three-dimensional Navier-Stokes numerical computational code was used in this study. The code was first studied for the computation of the turbulent developing flow properties within a circular duct and a square duct. The code was then used to compute the turbulent flow properties of a transition duct flow. The computational results for the surface pressure, the skin friction factor, and the surface heat transfer coefficient were described and compared with their values obtained from theoretical analyses or experiments. The comparison showed that the Navier-Stokes computation could approximately predict the surface heat transfer coefficients of a transition duct flow.

  4. Oligomerization of G protein-coupled receptors: computational methods.

    Science.gov (United States)

    Selent, J; Kaczor, A A

    2011-01-01

    Recent research has unveiled the complexity of mechanisms involved in G protein-coupled receptor (GPCR) functioning in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role in predicting accurate GPCR complexes. This review outlines computational approaches focusing on sequence- and structure-based methodologies, and discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but did not always yield consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method, especially when guided by experimental constraints. Some disadvantages, like limited receptor flexibility and non-consideration of the membrane environment, have to be taken into account. Molecular dynamics simulation can overcome these drawbacks, giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes for fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug targets for diverse diseases, unveiling the molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.

  5. Optimal steel thickness combined with computed radiography for portal imaging of nasopharyngeal cancer patients

    International Nuclear Information System (INIS)

    Wu Shixiu; Jin Xiance; Xie Congying; Cao Guoquan

    2005-01-01

    The poor image quality of conventional metal screen-film portal imaging systems has long been of concern, and various methods have been investigated in an attempt to enhance the quality of portal images. Computed radiography (CR) used in combination with a steel plate yields image enhancement. The optimal thickness of the steel plate was determined by measuring modulation transfer function (MTF) characteristics. Portal images of nasopharyngeal carcinoma patients were taken with both a conventional metal screen-film system and this optimal steel and CR plate combination system. Compared with a conventional metal screen-film system, the CR-metal screen system achieves much higher image contrast. The measured MTF of the CR combination is greater than that of conventional film-screen portal imaging systems and also results in superior image performance, as demonstrated by receiver operator characteristic (ROC) analysis. This optimal combination steel CR plate portal imaging system is capable of conveniently producing high-contrast portal images.

  6. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

  7. A high-resolution computational localization method for transcranial magnetic stimulation mapping.

    Science.gov (United States)

    Aonuma, Shinta; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa; Takakura, Tomokazu; Tamura, Manabu; Muragaki, Yoshihiro

    2018-05-15

    Transcranial magnetic stimulation (TMS) is used for the mapping of brain motor functions. The complexity of the brain deters determining the exact localization of the stimulation site using simplified methods (e.g., the region below the center of the TMS coil) or conventional computational approaches. This study aimed to present a high-precision localization method for a specific motor area by synthesizing computed non-uniform current distributions in the brain for multiple sessions of TMS. Peritumoral mapping by TMS was conducted on patients who had intra-axial brain neoplasms located within or close to the motor speech area. The electric field induced by TMS was computed using realistic head models constructed from magnetic resonance images of patients. A post-processing method was implemented to determine a TMS hotspot by combining the computed electric fields for the coil orientations and positions that delivered high motor-evoked potentials during peritumoral mapping. The method was compared to the stimulation site localized via intraoperative direct brain stimulation and navigated TMS. Four main results were obtained: 1) the dependence of the computed hotspot area on the number of peritumoral measurements was evaluated; 2) the estimated localization of the hand motor area in eight non-affected hemispheres was in good agreement with the position of a so-called "hand-knob"; 3) the estimated hotspot areas were not sensitive to variations in tissue conductivity; and 4) the hand motor areas estimated by the proposed method and by direct electric stimulation (DES) were in good agreement in the ipsilateral hemisphere of four glioma patients. The TMS localization method was validated by well-known positions of the "hand-knob" in brains for the non-affected hemisphere, and by a hotspot localized via DES during awake craniotomy for the tumor-containing hemisphere. Copyright © 2018 Elsevier Inc. All rights reserved.
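    The hotspot-combination step can be sketched as a weighted synthesis of per-session field maps. This is only a plausible reading of the abstract, not the authors' exact post-processing: the MEP-amplitude weighting and the percentile threshold are assumptions.

```python
import numpy as np

def tms_hotspot(field_maps, mep_amplitudes, percentile=95):
    """Combine per-session electric-field magnitude maps into a single
    map weighted by each session's motor-evoked potential (MEP)
    amplitude, and mark the top-percentile voxels as the hotspot."""
    F = np.asarray(field_maps, dtype=float)   # shape: (sessions, voxels)
    w = np.asarray(mep_amplitudes, dtype=float)
    combined = (w[:, None] * F).sum(axis=0) / w.sum()
    return combined >= np.percentile(combined, percentile)
```

    Sessions that evoked large MEPs thus dominate the combined map, so voxels strongly stimulated in those sessions end up inside the hotspot mask.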

  8. USING COMPUTER-BASED TESTING AS ALTERNATIVE ASSESSMENT METHOD OF STUDENT LEARNING IN DISTANCE EDUCATION

    Directory of Open Access Journals (Sweden)

    Amalia SAPRIATI

    2010-04-01

    Full Text Available This paper addresses the use of computer-based testing in distance education, based on the experience of Universitas Terbuka (UT), Indonesia. Computer-based testing has been developed at UT to meet the following specific needs of distance students: (1) students' inability to sit for the scheduled test, (2) conflicting test schedules, and (3) students' flexibility to take examinations to improve their grades. In 2004, UT initiated a pilot project to develop the system and program for the computer-based testing method. Then in 2005 and 2006, tryouts of the computer-based testing method were conducted in 7 Regional Offices that were considered to have sufficient supporting resources. The results of the tryouts revealed that students were enthusiastic about taking computer-based tests and expected that the test method would be provided by UT as an alternative to the traditional paper-and-pencil test method. UT then implemented the computer-based testing method in 6 and 12 Regional Offices in 2007 and 2008, respectively. The computer-based testing was administered in the city of the designated Regional Office and was supervised by the Regional Office staff. The development of the computer-based testing was initiated by conducting tests using computers in a networked configuration. The system has been continually improved, and it currently uses devices linked to the internet or the World Wide Web. The construction of a test involves the generation and selection of test items from the item bank collection of the UT Examination Center; the combination of the selected items thus comprises the test specification. Currently UT offers 250 courses involving the use of computer-based testing. Students expect that more courses will be offered with computer-based testing in Regional Offices within easy access for students.

  9. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and these were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. Increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
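    A central quantity in SR processing is the structure function of the scalar time series evaluated over many lags. A minimal vectorized sketch is shown below; the function name and parameters are illustrative, not the authors' algorithms, but the slice-based form is the kind of simple rearrangement that avoids a per-sample loop.

```python
import numpy as np

def structure_function(x, max_lag, order=2):
    """Structure function S_n(r) = <(x(t) - x(t - r))**n> for every lag
    r = 1 .. max_lag.  The vectorized slice x[r:] - x[:-r] replaces an
    explicit per-sample loop over the 10 Hz+ record."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[r:] - x[:-r]) ** order)
                     for r in range(1, max_lag + 1)])
```

    On an alternating 0/1 signal, for instance, the second-order structure function is 1 at lag 1 and 0 at lag 2, which is a convenient correctness check.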

  10. Multiscale methods in turbulent combustion: strategies and computational challenges

    International Nuclear Information System (INIS)

    Echekki, Tarek

    2009-01-01

    A principal challenge in modeling turbulent combustion flows is associated with their complex, multiscale nature. Traditional paradigms in the modeling of these flows have attempted to address this nature through different strategies, including exploiting the separation of turbulence and combustion scales and a reduced description of the composition space. The resulting moment-based methods often yield reasonable predictions of flow and reactive scalars' statistics under certain conditions. However, these methods must constantly evolve to address combustion at different regimes, modes or with dominant chemistries. In recent years, alternative multiscale strategies have emerged, which although in part inspired by the traditional approaches, also draw upon basic tools from computational science, applied mathematics and the increasing availability of powerful computational resources. This review presents a general overview of different strategies adopted for multiscale solutions of turbulent combustion flows. Within these strategies, some specific models are discussed or outlined to illustrate their capabilities and underlying assumptions. These strategies may be classified under four different classes, including (i) closure models for atomistic processes, (ii) multigrid and multiresolution strategies, (iii) flame-embedding strategies and (iv) hybrid large-eddy simulation-low-dimensional strategies. A combination of these strategies and models can potentially represent a robust alternative strategy to moment-based models; but a significant challenge remains in the development of computational frameworks for these approaches as well as their underlying theories. (topical review)

  11. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.

  12. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described

  13. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have served as the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the fourth issue, showing the overview of scientific computational methods with the introduction of continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed based on processes such as binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  14. Free vibration analysis of straight-line beam regarded as distributed system by combining Wittrick-Williams algorithm and transfer dynamic stiffness coefficient method

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Myung Soo; Yang, Kyong Uk [Chonnam National University, Yeosu (Korea, Republic of); Kondou, Takahiro [Kyushu University, Fukuoka (Japan); Bonkobara, Yasuhiro [University of Miyazaki, Miyazaki (Japan)

    2016-03-15

    We developed a method for analyzing the free vibration of a structure regarded as a distributed system, by combining the Wittrick-Williams algorithm and the transfer dynamic stiffness coefficient method. A computational algorithm was formulated for analyzing the free vibration of a straight-line beam regarded as a distributed system, to explain the concept of the developed method. To verify the effectiveness of the developed method, the natural frequencies of straight-line beams were computed using the finite element method, transfer matrix method, transfer dynamic stiffness coefficient method, the exact solution, and the developed method. By comparing the computational results of the developed method with those of the other methods, we confirmed that the developed method exhibited superior performance over the other methods in terms of computational accuracy, cost and user convenience.
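    The exact solution used as a benchmark above can be reproduced for a uniform cantilever from standard Euler-Bernoulli beam theory (this is the classical frequency equation, not the paper's Wittrick-Williams or transfer dynamic stiffness formulation):

```python
import math

def cantilever_beta_l(n):
    """n-th root x_n of the cantilever frequency equation
    cos(x)*cosh(x) + 1 = 0, found by bisection.  The natural
    frequencies of a uniform cantilever of length L then follow from
    omega_n = (x_n / L)**2 * sqrt(E*I / (rho*A))."""
    f = lambda x: math.cos(x) * math.cosh(x) + 1.0
    # the n-th root lies in the interval ((n-1)*pi, n*pi)
    lo, hi = (n - 1) * math.pi + 1e-9, n * math.pi
    sign_lo = f(lo) > 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == sign_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The first three roots are approximately 1.875, 4.694 and 7.855; a discretized model's accuracy can be judged by how closely its computed frequencies approach the values implied by these roots.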

  15. Zonal methods and computational fluid dynamics

    International Nuclear Information System (INIS)

    Atta, E.H.

    1985-01-01

    Recent advances in developing numerical algorithms for solving fluid flow problems, and the continuing improvement in the speed and storage of large scale computers, have made it feasible to compute the flow field about complex and realistic configurations. Current solution methods involve the use of a hierarchy of mathematical models ranging from the linearized potential equation to the Navier-Stokes equations. Because of the increasing complexity of both the geometries and flowfields encountered in practical fluid flow simulation, there is a growing emphasis in computational fluid dynamics on the use of zonal methods. A zonal method is one that subdivides the total flow region into interconnected smaller regions or zones. The flow solutions in these zones are then patched together to establish the global flow field solution. Zonal methods are primarily used either to limit the complexity of the governing flow equations to a localized region or to alleviate the grid generation problems about geometrically complex and multicomponent configurations. This paper surveys the application of zonal methods for solving the flow field about two- and three-dimensional configurations. Various factors affecting their accuracy and ease of implementation are also discussed. From the presented review it is concluded that zonal methods promise to be very effective for computing complex flowfields and configurations. Currently there are increasing efforts to improve their efficiency, versatility, and accuracy.

  16. Alternate modal combination methods in response spectrum analysis

    International Nuclear Information System (INIS)

    Bezler, P.; Curreri, J.R.; Wang, Y.K.; Gupta, A.K.

    1990-10-01

    In piping analyses using the response spectrum method, Square Root of the Sum of the Squares (SRSS) with clustering between closely spaced modes is the combination procedure most commonly used to combine the modal response components. This procedure is simple to apply and normally yields conservative estimates of the time history results. The purpose of this study is to investigate alternate methods of combining the modal response components. These methods are mathematically based to properly account for the combination of rigid and flexible modal responses as well as closely spaced modes. The methods are those advanced by Gupta, Hadjian and Lindley-Yow to address rigid response modes, and the Double Sum Combination (DSC) method and the Complete Quadratic Combination (CQC) method to account for closely spaced modes. A direct comparison between these methods as well as the SRSS procedure is made by using them to predict the response of six piping systems. The results provided by each method are compared to the corresponding time history estimates of results as well as to each other. The degree of conservatism associated with each method is characterized. 19 refs., 16 figs., 10 tabs.
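    The SRSS and CQC rules compared in the report can be sketched directly. The sketch assumes equal modal damping and uses Der Kiureghian's standard correlation coefficient for CQC; the response values and frequencies below are illustrative, not data from the six piping systems.

```python
import numpy as np

def srss(r):
    """Square Root of the Sum of the Squares of modal responses."""
    return float(np.sqrt(np.sum(np.square(r))))

def cqc(r, omega, zeta):
    """Complete Quadratic Combination with Der Kiureghian's modal
    correlation coefficient for equal modal damping ratio zeta."""
    r = np.asarray(r, dtype=float)
    w = np.asarray(omega, dtype=float)
    beta = np.minimum.outer(w, w) / np.maximum.outer(w, w)  # freq ratio <= 1
    rho = (8 * zeta**2 * (1 + beta) * beta**1.5
           / ((1 - beta**2) ** 2 + 4 * zeta**2 * beta * (1 + beta) ** 2))
    return float(np.sqrt(r @ rho @ r))
```

    For well-separated frequencies the correlation coefficients vanish and CQC reduces to SRSS; for closely spaced modes with same-sign responses CQC exceeds SRSS, which is exactly the regime where clustering corrections matter.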

  17. Computational and mathematical methods in brain atlasing.

    Science.gov (United States)

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  18. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  19. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  20. Intravenous catheter training system: computer-based education versus traditional learning methods.

    Science.gov (United States)

    Engum, Scott A; Jeffries, Pamela; Fisher, Lisa

    2003-07-01

    Virtual reality simulators allow trainees to practice techniques without consequences, reduce potential risk associated with training, minimize animal use, and help to develop standards and optimize procedures. Current intravenous (IV) catheter placement training methods utilize plastic arms; however, the lack of variability can diminish the educational stimulus for the student. This study compares the effectiveness of an interactive, multimedia, virtual reality computer IV catheter simulator with a traditional laboratory experience in teaching IV venipuncture skills to both nursing and medical students. A randomized, pretest-posttest experimental design was employed. A total of 163 participants, 70 baccalaureate nursing students and 93 third-year medical students beginning their fundamental skills training, were recruited. The students ranged in age from 20 to 55 years (mean 25). Fifty-eight percent were female and 68% perceived themselves as having average computer skills (25% declaring excellence). The methods of IV catheter education compared were a traditional method of instruction involving a scripted self-study module with a 10-minute videotape, instructor demonstration, and hands-on experience using plastic mannequin arms, and an interactive multimedia, commercially made computer catheter simulator program utilizing virtual reality (CathSim). The pretest scores were similar between the computer and the traditional laboratory groups. There was a significant improvement in cognitive gains, student satisfaction, and documentation of the procedure with the traditional laboratory group compared with the computer catheter simulator group. Both groups were similar in their ability to demonstrate the skill correctly. Conclusions: This evaluation and assessment was an initial effort to assess new teaching methodologies related to intravenous catheter placement and their effects on student learning outcomes and behaviors.

  1. Helicopter fuselage drag - combined computational fluid dynamics and experimental studies

    Science.gov (United States)

    Batrakov, A.; Kusyumov, A.; Mikhailov, S.; Pakhov, V.; Sungatullin, A.; Valeev, M.; Zherekhov, V.; Barakos, G.

    2015-06-01

    In this paper, wind tunnel experiments are combined with Computational Fluid Dynamics (CFD) aiming to analyze the aerodynamics of realistic fuselage configurations. A development model of the ANSAT aircraft and an early model of the AKTAI light helicopter were employed. Both models were tested at the subsonic wind tunnel of KNRTU-KAI for a range of Reynolds numbers and pitch and yaw angles. The force balance measurements were complemented by particle image velocimetry (PIV) investigations for the cases where the experimental force measurements showed substantial unsteadiness. The CFD results were found to be in fair agreement with the test data and revealed some flow separation at the rear of the fuselages. Once confidence on the CFD method was established, further modifications were introduced to the ANSAT-like fuselage model to demonstrate drag reduction via small shape changes.

  2. Alternate modal combination methods in response spectrum analysis

    International Nuclear Information System (INIS)

    Wang, Y.K.; Bezler, P.

    1989-01-01

    In piping analyses using the response spectrum method, Square Root of the Sum of the Squares (SRSS) with clustering between closely spaced modes is the procedure most commonly used to combine the modal response components. This procedure is simple to apply and normally yields conservative estimates of the time history results. The purpose of this study is to investigate alternate methods of combining the modal response components. These methods are mathematically based to properly account for the combination of rigid and flexible modal responses as well as closely spaced modes. The methods are those advanced by Gupta, Hadjian, and Lindley-Yow to address rigid response modes, and the Double Sum Combination (DSC) method and the Complete Quadratic Combination (CQC) method to account for closely spaced modes. A direct comparison between these methods as well as the SRSS procedure is made by using them to predict the response of six piping systems. For two piping systems, thirty-three earthquake records were considered to account for the impact of variations in the characteristics of the excitation. The results provided by each method are compared to the corresponding time history estimates as well as to each other. The degree of conservatism associated with each method is characterized. 7 refs., 4 figs., 2 tabs
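
    For readers unfamiliar with the rules compared here, the two most common combinations can be sketched in a few lines. This is a generic illustration, not the authors' piping code: SRSS sums squared modal peaks, while CQC weights the cross terms with the Der Kiureghian correlation coefficient (equal modal damping is assumed in this sketch).

```python
import numpy as np

def srss(peaks):
    """Square Root of the Sum of the Squares combination of modal peaks."""
    peaks = np.asarray(peaks, dtype=float)
    return np.sqrt(np.sum(peaks**2))

def cqc(peaks, freqs, zeta):
    """Complete Quadratic Combination with a constant damping ratio zeta.

    Uses the Der Kiureghian correlation coefficient between modes i and j,
    which approaches 1 for closely spaced frequencies and 0 for well
    separated ones.
    """
    r = np.asarray(peaks, dtype=float)
    w = np.asarray(freqs, dtype=float)
    beta = w[:, None] / w[None, :]                       # frequency ratios
    num = 8 * zeta**2 * (1 + beta) * beta**1.5
    den = (1 - beta**2)**2 + 4 * zeta**2 * beta * (1 + beta)**2
    rho = num / den                                      # correlation matrix
    return np.sqrt(r @ rho @ r)

peaks = [2.0, 1.5, 0.5]
freqs = [10.0, 10.5, 30.0]        # rad/s; first two modes closely spaced
print(srss(peaks))                # 2.549...
print(cqc(peaks, freqs, 0.05))    # larger than SRSS due to modal correlation
```

    For well-separated frequencies the correlation matrix approaches the identity and CQC reduces to SRSS; for the closely spaced pair above it yields a noticeably larger, more conservative estimate.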

  3. A Review of Computational Methods to Predict the Risk of Rupture of Abdominal Aortic Aneurysms

    Directory of Open Access Journals (Sweden)

    Tejas Canchi

    2015-01-01

    Full Text Available Computational methods have played an important role in health care in recent years, as determining parameters that affect a certain medical condition is not possible in experimental conditions in many cases. Computational fluid dynamics (CFD) methods have been used to accurately determine the nature of blood flow in the cardiovascular and nervous systems and air flow in the respiratory system, thereby giving the surgeon a diagnostic tool to plan treatment accordingly. Machine learning or data mining (MLD) methods are currently used to develop models that learn from retrospective data to make a prediction regarding factors affecting the progression of a disease. These models have also been successful in incorporating factors such as patient history and occupation. MLD models can be used as a predictive tool to determine rupture potential in patients with abdominal aortic aneurysms (AAA), along with CFD-based prediction of parameters like wall shear stress and pressure distributions. A combination of these computer methods can be pivotal in bridging the gap between translational and outcomes research in medicine. This paper reviews the use of computational methods in the diagnosis and treatment of AAA.

  4. High-frequency combination coding-based steady-state visual evoked potential for brain computer interface

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Feng; Zhang, Xin; Xie, Jun; Li, Yeping; Han, Chengcheng; Lili, Li; Wang, Jing [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Xu, Guang-Hua [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710054 (China)

    2015-03-10

    This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain computer interface (BCI) systems. The goal of this study is to increase the number of targets using fewer high stimulation frequencies, while diminishing subject fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-Based SSVEP (HFCC-SSVEP). First, the high-frequency (beyond 25 Hz) SSVEP response was studied, with the paradigm presented on an LED. The SNR (signal-to-noise ratio) of the response beyond 40 Hz is very low and cannot be distinguished by the traditional analysis method. Second, the HFCC-SSVEP response (beyond 25 Hz) was investigated for three frequencies (25 Hz, 33.33 Hz, and 40 Hz); HFCC-SSVEP produces n^n targets from n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extremum target identification algorithm are adopted to extract time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and fixed sifting (iterating 10 times) are used to overcome the shortcomings of the end effect and the stopping criterion, and generalized zero-crossing (GZC) is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, so as to improve the information transfer rate (ITR) and the stability of the BCI system. Moreover, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and prevent the safety hazards linked to photo-induced epileptic seizures, ensuring system efficiency and safety. Three subjects were tested to verify the feasibility of the proposed method.

  6. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues in contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas on topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.

  7. Optimized Runge-Kutta methods with minimal dispersion and dissipation for problems arising from computational acoustics

    International Nuclear Information System (INIS)

    Tselios, Kostas; Simos, T.E.

    2007-01-01

    In this Letter a new explicit fourth-order seven-stage Runge-Kutta method with a combination of minimal dispersion and dissipation error and maximal accuracy and stability limit along the imaginary axis is developed. This method was produced from a general function constructed to satisfy all the above requirements, from which all the existing fourth-order six-stage RK methods can also be produced. The new method is more efficient than the other optimized methods for acoustic computations.
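
    As context for the optimized scheme, one step of the classical fourth-order Runge-Kutta method, with the standard (not the paper's dispersion/dissipation-optimized) coefficients, can be sketched as follows; the oscillatory model problem mimics the acoustic setting, where amplitude error measures dissipation and phase error measures dispersion.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step (standard coefficients,
    not the optimized seven-stage ones of the paper)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Oscillatory model problem y' = i*omega*y, whose exact solution keeps
# |y| = 1; dissipation shows up in |y|, dispersion in the phase of y.
omega = 2.0
y, t, h = 1.0 + 0.0j, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, z: 1j * omega * z, y=y, t=t, h=h)
    t += h
print(abs(y))   # ≈ 1: fourth-order accuracy keeps the amplitude error tiny
```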

  8. Recent Progress in First-Principles Methods for Computing the Electronic Structure of Correlated Materials

    Directory of Open Access Journals (Sweden)

    Fredrik Nilsson

    2018-03-01

    Full Text Available Substantial progress has been achieved in the last couple of decades in computing the electronic structure of correlated materials from first principles. This progress has been driven by parallel development in theory and numerical algorithms. Theoretical development in combining ab initio approaches and many-body methods is particularly promising. A crucial role is also played by a systematic method for deriving a low-energy model, which bridges the gap between real and model systems. In this article, an overview is given tracing the development from LDA+U to the latest progress in combining the GW method and (extended) dynamical mean-field theory (GW+EDMFT). The emphasis is on conceptual and theoretical aspects rather than technical ones.

  9. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
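
    The two-rating structure described above can be sketched as follows. The calibration data, trapezoidal cross-section geometry, and linear rating form are all illustrative assumptions, not USGS values:

```python
import numpy as np

# Hypothetical calibration pairs: ADVM index velocity vs. mean channel
# velocity derived from discharge measurements (values are illustrative).
v_index = np.array([0.20, 0.45, 0.70, 1.10, 1.60])   # m/s
v_mean  = np.array([0.26, 0.52, 0.81, 1.25, 1.83])   # m/s

# Index velocity rating: linear regression V = a + b * v_index
b, a = np.polyfit(v_index, v_mean, 1)

def stage_area(stage):
    """Stage-area rating for an assumed trapezoidal standard cross section."""
    bottom_width, side_slope = 20.0, 2.0   # m, and m of half-width per m rise
    return (bottom_width + side_slope * stage) * stage

def discharge(stage, vi):
    """Index velocity method: Q = V(vi) * A(stage)."""
    return (a + b * vi) * stage_area(stage)

print(discharge(1.5, 0.9))   # m^3/s at 1.5 m stage, 0.9 m/s index velocity
```

    Keeping the two ratings separate is what lets the method handle sites where more than one discharge corresponds to the same stage.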

  10. Analysis of random response of structure with uncertain parameters. Combination of substructure synthesis method and hierarchy method

    International Nuclear Information System (INIS)

    Iwatsubo, Takuzo; Kawamura, Shozo; Mori, Hiroyuki.

    1995-01-01

    In this paper, a method to obtain the random response of a structure with uncertain parameters is proposed. The proposed method combines the substructure synthesis method and the hierarchy method: the hierarchy equation of each substructure is obtained using the hierarchy method, and the hierarchy equation of the overall structure is obtained using the substructure synthesis method. With the proposed method, the reduced-order hierarchy equation can be obtained without analyzing the original whole structure. After calculation of the mean square value of the response, reliability analysis can be carried out based on the first-passage problem and Poisson's excursion rate. As a numerical example, a simple piping system is considered, with the damping constant of the support as the uncertain parameter, and the random response is calculated using the proposed method. As a result, the proposed method proves advantageous for random response analysis in terms of accuracy, computer storage, and calculation time. (author)

  11. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

    Full Text Available In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci, and we propose their translation into a modern computer language (C++). Among others, we describe the method of “cross” multiplication, evaluate its computational complexity in algorithmic terms, and show the output of a C++ code that traces the development of the method applied to the product of two integers. In a similar way we show the operations performed on fractions introduced by Fibonacci. The possibility of reproducing Fibonacci's different computational procedures on a computer made it possible to identify some calculation errors present in the different versions of the original text.
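
    The “cross” scheme, in which the k-th digit of the product collects all digit products whose positions sum to k, plus the carry, can be sketched in Python (the paper itself uses C++; this is a generic reconstruction of the idea, not the authors' code):

```python
def cross_multiply(x, y):
    """Digit-by-digit 'cross' multiplication of non-negative integers,
    in the spirit of the Liber Abaci procedure: position k of the product
    sums all cross products a[i]*b[k-i] plus the running carry."""
    a = [int(d) for d in str(x)][::-1]   # least significant digit first
    b = [int(d) for d in str(y)][::-1]
    result, carry = [], 0
    for k in range(len(a) + len(b) - 1):
        s = carry + sum(a[i] * b[k - i]
                        for i in range(len(a)) if 0 <= k - i < len(b))
        result.append(s % 10)
        carry = s // 10
    while carry:                          # flush any remaining carry digits
        result.append(carry % 10)
        carry //= 10
    return int(''.join(map(str, result[::-1])))

print(cross_multiply(1234, 5678))   # 7006652
```

    The double loop over digit pairs makes the cost proportional to the product of the operands' digit counts, the quadratic complexity the paper evaluates.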

  12. A hybrid method for the parallel computation of Green's functions

    International Nuclear Information System (INIS)

    Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
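
    The classical serial recurrence that such parallel schemes compete with can be sketched as follows. It computes only the diagonal blocks of the inverse, which is what the Green's function calculation needs; this is a generic RGF-style sketch, not the authors' Schur-complement/cyclic-reduction code:

```python
import numpy as np

def diag_blocks_of_inverse(D, U, L):
    """Diagonal blocks of the inverse of a block tridiagonal matrix with
    diagonal blocks D[i], superdiagonal U[i], and subdiagonal L[i],
    via the classical forward/backward recurrence."""
    n = len(D)
    gL = [None] * n                       # left-connected inverses
    gL[0] = np.linalg.inv(D[0])
    for i in range(1, n):
        gL[i] = np.linalg.inv(D[i] - L[i - 1] @ gL[i - 1] @ U[i - 1])
    G = [None] * n
    G[n - 1] = gL[n - 1]
    for i in range(n - 2, -1, -1):        # backward sweep adds right coupling
        G[i] = gL[i] + gL[i] @ U[i] @ G[i + 1] @ L[i] @ gL[i]
    return G

# Self-check against a dense inverse on a small random example.
rng = np.random.default_rng(0)
m, n = 2, 4
D = [rng.standard_normal((m, m)) + 4 * np.eye(m) for _ in range(n)]
U = [rng.standard_normal((m, m)) for _ in range(n - 1)]
L = [rng.standard_normal((m, m)) for _ in range(n - 1)]
A = np.zeros((n * m, n * m))
for i in range(n):
    A[i*m:(i+1)*m, i*m:(i+1)*m] = D[i]
    if i < n - 1:
        A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = U[i]
        A[(i+1)*m:(i+2)*m, i*m:(i+1)*m] = L[i]
G = diag_blocks_of_inverse(D, U, L)
Ainv = np.linalg.inv(A)
print(np.allclose(G[1], Ainv[m:2*m, m:2*m]))   # True
```

    The data dependency between successive blocks in both sweeps is exactly what makes this recurrence hard to parallelize, motivating the paper's combination of Schur complements and cyclic reduction.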

  13. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously up to the present as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theories, as well as on calculation methods once thought impractical, continues actively, exploiting the remarkable improvement in computer performance. In Japan, many light water reactors are now in operation, new computational methods are being introduced into nuclear design, and much effort is devoted to further improving economics and safety. This paper describes some recent results on nuclear computational methods and their application to reactor design, introducing current trends in the nuclear design of reactors: 1) advancement of computational methods, 2) core design and management of light water reactors, and 3) nuclear design of fast reactors. (G.K.)

  14. Hydra-Ring: a computational framework to combine failure probabilities

    Science.gov (United States)

    Diermanse, Ferdinand; Roscoe, Kathryn; IJmker, Janneke; Mens, Marjolein; Bouwer, Laurens

    2013-04-01

    This presentation discusses the development of a new computational framework for the safety assessment of flood defence systems: Hydra-Ring. Hydra-Ring computes the failure probability of a flood defence system, which is composed of a number of elements (e.g., dike segments, dune segments or hydraulic structures), taking all relevant uncertainties explicitly into account. This is a major step forward in comparison with the current Dutch practice, in which the safety assessment is done separately for each individual flood defence section. The main advantage of the new approach is that it results in a more balanced prioritization of required mitigating measures ('more value for money'). Failure of the flood defence system occurs if any element within the system fails. Hydra-Ring thus computes and combines failure probabilities over the following dimensions: failure mechanisms (a flood defence system can fail due to different failure mechanisms) and time periods (failure probabilities are first computed for relatively small time scales). Besides the safety assessment of flood defence systems, Hydra-Ring can also be used to derive fragility curves, to assess the efficiency of flood mitigating measures, and to quantify the impact of climate change and land subsidence on flood risk. Hydra-Ring is being developed in the context of the Dutch situation. However, the computational concept is generic and the model is set up in such a way that it can be applied to other areas as well. The presentation will focus on the model concept and probabilistic computation techniques.
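
    The element-combination step can be illustrated with the simplest case: a series system that fails if any element fails. Independence between elements is assumed here purely for illustration; Hydra-Ring itself treats dependence explicitly:

```python
from math import prod

def system_failure_probability(p_elements):
    """Failure probability of a series system (the system fails if any
    element fails), assuming independent element failures."""
    return 1 - prod(1 - p for p in p_elements)

def bounds(p_elements):
    """Classical bounds valid for any dependence between elements:
    fully dependent (lower) and mutually exclusive (upper) failures."""
    return max(p_elements), min(sum(p_elements), 1.0)

p = [1e-3, 5e-4, 2e-3]   # illustrative per-section failure probabilities
print(system_failure_probability(p))   # ~3.5e-3
lo, hi = bounds(p)
assert lo <= system_failure_probability(p) <= hi
```

    For small, rare-event probabilities the independent-series result is close to the simple sum, but the bounds show how strongly the answer can depend on inter-element dependence, which is why the framework models it explicitly.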

  15. Computational electromagnetic methods for transcranial magnetic stimulation

    Science.gov (United States)

    Gomez, Luis J.

    Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was also developed. The human head is highly heterogeneous and characterized by high relative permittivities (~10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel high-permittivity and low-frequency stable internally combined volume-surface IE method was therefore developed. The method not only applies to the analysis of high-permittivity objects, but is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3

  16. Computation of beam quality parameters for Mo/Mo, Mo/Rh, Rh/Rh, and W/Al target/filter combinations in mammography

    International Nuclear Information System (INIS)

    Kharrati, Hedi; Zarrad, Boubaker

    2003-01-01

    A computer program was implemented to predict mammography x-ray beam parameters in the range 20-40 kV for Mo/Mo, Mo/Rh, Rh/Rh, and W/Al target/filter combinations. The computation method used to simulate mammography x-ray spectra is based on the Boone et al. model. The beam quality parameters, such as the half-value layer (HVL), the homogeneity coefficient (HC), and the average photon energy, were computed by simulating the interaction of the spectrum photons with matter. The computation was checked by comparing the results with published data and with measured values obtained at the Netherlands Metrology Institute Van Swinden Laboratorium, the National Institute of Standards and Technology, and the International Atomic Energy Agency. The predicted values, with mean deviations of 3.3% for HVL, 3.7% for HC, and 1.5% for average photon energy, show acceptable agreement with published data and measurements for all target/filter combinations in the 23-40 kV range. The accuracy of this computation can be considered clinically acceptable and allows an appreciable estimation of the beam quality parameters.
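
    The HVL computation described here can be sketched as follows: attenuate a spectrum through increasing aluminium thickness until the kerma-weighted transmission halves. The spectrum bins and attenuation coefficients below are rough placeholders, not the Boone et al. model data:

```python
import numpy as np

# Illustrative Mo/Mo-like 28 kV spectrum: energies (keV), relative
# air-kerma contribution per bin, and Al attenuation coefficients (1/mm).
# All values are order-of-magnitude placeholders, not measured data.
E     = np.array([10.0, 14.0, 17.5, 19.6, 24.0, 28.0])
k_rel = np.array([0.02, 0.30, 1.00, 0.60, 0.15, 0.03])
mu_al = np.array([7.0, 2.6, 1.4, 1.0, 0.60, 0.40])

def kerma(t_mm):
    """Relative air kerma transmitted through t_mm of aluminium."""
    return np.sum(k_rel * np.exp(-mu_al * t_mm))

def hvl(tol=1e-6):
    """Half-value layer by bisection: Al thickness halving the kerma."""
    target = 0.5 * kerma(0.0)
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kerma(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"HVL = {hvl():.3f} mm Al")   # ~0.5 mm Al for this toy spectrum
```

    Because the spectrum hardens as the soft photons are filtered out, a second HVL computed beyond the first is larger; the ratio of the two is the homogeneity coefficient (HC) also reported in the paper.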

  17. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have served as the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This first issue gives an overview of scientific computational methods and an introduction to continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  18. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, it was discussed that although ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  19. Combined magnetic vector-scalar potential finite element computation of 3D magnetic field and performance of modified Lundell alternators in Space Station applications. Ph.D. Thesis

    Science.gov (United States)

    Wang, Ren H.

    1991-01-01

    A method of combined use of magnetic vector potential (MVP) based finite element (FE) formulations and magnetic scalar potential (MSP) based FE formulations for computation of three-dimensional (3D) magnetostatic fields is developed. This combined MVP-MSP 3D-FE method leads to considerable reduction by nearly a factor of 3 in the number of unknowns in comparison to the number of unknowns which must be computed in global MVP based FE solutions. This method allows one to incorporate portions of iron cores sandwiched in between coils (conductors) in current-carrying regions. Thus, it greatly simplifies the geometries of current carrying regions (in comparison with the exclusive MSP based methods) in electric machinery applications. A unique feature of this approach is that the global MSP solution is single valued in nature, that is, no branch cut is needed. This is again a superiority over the exclusive MSP based methods. A Newton-Raphson procedure with a concept of an adaptive relaxation factor was developed and successfully used in solving the 3D-FE problem with magnetic material anisotropy and nonlinearity. Accordingly, this combined MVP-MSP 3D-FE method is most suited for solution of large scale global type magnetic field computations in rotating electric machinery with very complex magnetic circuit geometries, as well as nonlinear and anisotropic material properties.

  20. Introduction Of Computational Materials Science

    International Nuclear Information System (INIS)

    Lee, Jun Geun

    2006-08-01

    This book gives descriptions of computer simulation and computational materials science, covering its three typical approaches: empirical methods such as molecular dynamics (potential energy, Newton's equation of motion, data production, and analysis of results); quantum mechanical methods (the wave equation, approximations, the Hartree method, and density functional theory); and methods for solids such as the pseudopotential method, tight-binding methods, the embedded atom method, the Car-Parrinello method, and combined simulation.
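
    The molecular dynamics ingredients listed above (a potential energy, Newton's equation of motion, and data production) meet in a time-stepping loop. A minimal velocity Verlet sketch, not the book's code, checked on a harmonic oscillator:

```python
import numpy as np

def velocity_verlet(pos, vel, force, mass, dt, steps):
    """Velocity Verlet integration of Newton's equations of motion:
    positions advance with the current force, velocities with the
    average of old and new forces (time-reversible, good energy drift)."""
    f = force(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * (f / mass) * dt**2
        f_new = force(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

# Harmonic oscillator check: x'' = -x, started at x = 1, v = 0;
# 628 steps of dt = 0.01 is roughly one full period (2*pi ≈ 6.283).
x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       lambda r: -r, 1.0, 0.01, 628)
print(x, v)   # close to (1, 0) after one period
```

    The same loop structure carries over to a many-atom Lennard-Jones or embedded-atom potential; only the `force` callable changes.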

  1. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    Science.gov (United States)

    Tan, Jianbin

    2018-02-01

    The engineering design of large-scale grid-connected photovoltaic power stations, and the development of many simulation and analysis systems, require accurate computer drawing of the operating characteristic curves of photovoltaic array units; a segmented non-linear interpolation algorithm is proposed for this purpose. In the calculation method, with component performance parameters as the main design basis, five performance characteristics of the PV modules can be computed. Combined with the series and parallel connections of the PV array, the computer drawing of the performance curve of the PV array unit can then be realized. The specific data can also be passed to the calculation modules of PV development software, improving the practical operation of PV array units.
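
    The series/parallel scaling described above can be sketched with a toy module model: array voltage scales with the number of series modules, array current with the number of parallel strings. The single-exponential I-V form and all parameter values below are illustrative assumptions, not datasheet values or the paper's algorithm:

```python
import numpy as np

def module_current(v, isc=9.0, voc=45.0, a=1.5):
    """Toy single-exponential I-V model of one PV module; isc (A), voc (V)
    and shape factor a (V) are illustrative placeholders."""
    return isc * (1.0 - np.exp((v - voc) / a))

def array_iv(ns, n_par, points=200):
    """I-V curve of an array unit of ns modules in series and n_par
    strings in parallel: voltage scales with ns, current with n_par."""
    v_arr = np.linspace(0.0, ns * 45.0, points)
    i_arr = n_par * module_current(v_arr / ns)
    return v_arr, np.clip(i_arr, 0.0, None)

v, i = array_iv(ns=20, n_par=6)
p = v * i
k = p.argmax()
print(f"MPP of 20s x 6p unit: {v[k]:.0f} V, {i[k]:.1f} A, {p[k]/1e3:.1f} kW")
```

    Interpolating between computed curve points, as the paper proposes, matters most near the knee of the curve, where a uniform voltage grid undersamples the rapid current drop.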

  2. A hybrid computation method for determining fluctuations of temperature in branched structures

    International Nuclear Information System (INIS)

    Czomber, L.

    1982-01-01

    A hybrid computation method for determining temperature fluctuations at discrete points of slab-like geometries is developed on the basis of a new formulation of the finite difference method. For this purpose, a new finite difference method is combined with an exact solution of the heat equation in the domain of the Laplace transformation. Whereas the exact solution can be applied to arbitrarily large ranges, the finite difference formulation is given for structural regions which need finer discretization. The boundary conditions of the exact solution are substituted by finite difference terms for the boundary residual flow or an internal heat source, depending on the problem. The resulting system of conditional equations contains only the node parameters of the finite difference method. (orig.) [de]

  3. The Variance-covariance Method using IOWGA Operator for Tourism Forecast Combination

    Directory of Open Access Journals (Sweden)

    Liangping Wu

    2014-08-01

Full Text Available Three combination methods commonly used in tourism forecasting are the simple average method, the variance-covariance method and the discounted MSFE method. These methods assign each individual forecasting model a weight that cannot change over time. In this study, we introduce into tourism forecasting the IOWGA operator combination method, which can overcome this defect of the three previous combination methods. Moreover, we further investigate the performance of the four combination methods through both theoretical evaluation and forecasting evaluation. The results of the theoretical evaluation show that the IOWGA operator combination method performs extremely well and outperforms the other forecast combination methods. Furthermore, in the forecasting evaluation the IOWGA operator combination method achieves good forecast performance, almost the same as that of the variance-covariance combination method. The IOWGA operator combination method mainly reflects the maximization of forecasting accuracy, while the variance-covariance combination method mainly reflects the minimization of forecast error. For future research, it may be worthwhile introducing and examining other new combination methods that may improve forecasting accuracy, or employing other techniques to control the timing of weight updates in combined forecasts.
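
The variance-covariance (minimum-variance, Bates-Granger-style) combination mentioned above can be sketched as follows; the small error matrix in the usage check is illustrative, not data from the study:

```python
import numpy as np

def variance_covariance_weights(errors):
    """Minimum-variance combination weights from a (T x k) matrix of
    past forecast errors of k individual models:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    sigma = np.cov(errors, rowvar=False)          # k x k error covariance
    inv = np.linalg.inv(sigma)
    ones = np.ones(sigma.shape[0])
    return inv @ ones / (ones @ inv @ ones)

def combine_forecasts(forecasts, weights):
    # Weighted combination of the k individual point forecasts.
    return np.asarray(forecasts) @ weights
```

The weights sum to one by construction, and a model with a smaller error variance receives a larger weight; note that with highly correlated errors individual weights can be negative.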

  4. Simulation of processes of water aerosol coagulation-condensation growth using a combination of methods of groups and fractions

    International Nuclear Information System (INIS)

    Alexander G Godizov; Alexander D Efanov; Alexander A Lukianov; Olga V Supotnitskaya

    2005-01-01

Full text of publication follows: To describe phenomena involving aerosol, a lumped-parameter model is used, based on the kinetic integro-differential equation, with source and collision terms, for the distribution function of particles over size and over content of soluble and insoluble impurities. From the particle size distribution function, the integral parameters of the aerosol can be determined: water content (mass of condensed moisture per unit volume), dust content (mass of insoluble condensation nuclei per unit volume), number concentration and the mean radius of particles. In the aerosol transfer problem under consideration, the thermodynamic fields are external data obtained with a thermal-hydraulic computer code. For numerical simulation of the kinetic equation describing aerosol behavior in coagulation-condensation processes, a hybrid method is used, which combines the method of groups and the method of fractions. To solve the complete equation of aerosol transfer, the method of fractions is used. The integral equation describing aerosol coagulation is solved by means of the group method. The group method, based on representing the particle size distribution as a linear combination of δ-functions with time-dependent arguments, makes it possible to calculate the integral parameters of the spectrum (the moments of the distribution function) with a small number of groups. Test calculations were performed with the particle spectrum given as a lognormal distribution and as a Γ-function. The hybrid method combined with the thermal-hydraulic computer code enables one to simulate volume condensation of steam under varying thermal-hydraulic conditions. (authors)

  5. Electromagnetic computation methods for lightning surge protection studies

    CERN Document Server

    Baba, Yoshihiro

    2016-01-01

    This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of Maxwell's equations required by the FDTD, and modeling of various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...

  6. Computation of Aerodynamic Noise Radiated from Ducted Tail Rotor Using Boundary Element Method

    Directory of Open Access Journals (Sweden)

    Yunpeng Ma

    2017-01-01

Full Text Available A detailed aerodynamic performance study of a ducted tail rotor in hover has been carried out numerically using CFD techniques. The general governing equations of the turbulent flow around the ducted tail rotor are given and solved directly using finite volume discretization and Runge-Kutta time integration, from which the lift characteristics of the ducted tail rotor are obtained. In order to predict the aerodynamic noise, a hybrid method combining computational aeroacoustics with the boundary element method (BEM) has been proposed. The computational steps are as follows: firstly, the unsteady flow around the rotor is calculated using the CFD method to obtain the noise source information; secondly, the radiated sound pressure is calculated using the acoustic analogy Curle equation in the frequency domain; lastly, the scattering effect of the duct wall on the propagation of the sound wave is treated using an acoustic thin-body BEM. The aerodynamic results and the calculated sound pressure levels are compared with an established technique for validation. The sound pressure directivity and scattering effect are shown to demonstrate the validity and applicability of the method.

  7. Efficient Discovery of Novel Multicomponent Mixtures for Hydrogen Storage: A Combined Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wolverton, Christopher [Northwestern Univ., Evanston, IL (United States). Dept. of Materials Science and Engineering; Ozolins, Vidvuds [Univ. of California, Los Angeles, CA (United States). Dept. of Materials Science and Engineering; Kung, Harold H. [Northwestern Univ., Evanston, IL (United States). Dept. of Chemical and Biological Engineering; Yang, Jun [Ford Scientific Research Lab., Dearborn, MI (United States); Hwang, Sonjong [California Inst. of Technology (CalTech), Pasadena, CA (United States). Dept. of Chemistry and Chemical Engineering; Shore, Sheldon [The Ohio State Univ., Columbus, OH (United States). Dept. of Chemistry and Biochemistry

    2016-11-28

The objective of the proposed program is to discover novel mixed hydrides for hydrogen storage which would enable the DOE 2010 system-level goals. Our goal is to find a material that desorbs 8.5 wt.% H2 or more at temperatures below 85°C. The research program will combine first-principles calculations of reaction thermodynamics and kinetics with material and catalyst synthesis, testing, and characterization. We will combine materials from distinct categories (e.g., chemical and complex hydrides) to form novel multicomponent reactions. Systems to be studied include mixtures of complex hydrides and chemical hydrides [e.g. LiNH2+NH3BH3] and nitrogen-hydrogen based borohydrides [e.g. Al(BH4)3(NH3)3]. The 2010 and 2015 FreedomCAR/DOE targets for hydrogen storage systems are very challenging, and cannot be met with existing materials. The vast majority of the work to date has delineated materials into various classes, e.g., complex and metal hydrides, chemical hydrides, and sorbents. However, very recent studies indicate that mixtures of storage materials, particularly mixtures between various classes, hold promise to achieve technological attributes that materials within an individual class cannot reach. Our project involves a systematic, rational approach to designing novel multicomponent mixtures of materials with fast hydrogenation/dehydrogenation kinetics and favorable thermodynamics using a combination of state-of-the-art scientific computing and experimentation. We will use the accurate predictive power of first-principles modeling to understand the thermodynamic and microscopic kinetic processes involved in hydrogen release and uptake and to design new material/catalyst systems with improved properties. Detailed characterization and atomic-scale catalysis experiments will elucidate the effect of dopants and nanoscale catalysts in achieving fast kinetics and reversibility.

  8. Methods and statistics for combining motif match scores.

    Science.gov (United States)

    Bailey, T L; Gribskov, M

    1998-01-01

Position-specific scoring matrices are useful for representing and searching for protein sequence motifs. A sequence family can often be described by a group of one or more motifs, and an effective search must combine the scores for matching a sequence to each of the motifs in the group. We describe three methods for combining match scores and estimating the statistical significance of the combined scores and evaluate the search quality (classification accuracy) and the accuracy of the estimate of statistical significance of each. The three methods are: 1) sum of scores, 2) sum of reduced variates, 3) product of score p-values. We show that method 3) is superior to the other two methods in both regards, and that combining motif scores indeed gives better search accuracy. The MAST sequence homology search algorithm utilizing the product of p-values scoring method is available for interactive use and downloading at URL http://www.sdsc.edu/MEME.
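
Method 3 has a closed-form significance: if p is the product of n independent uniformly distributed p-values, then P(product ≤ p) = p · Σ_{k=0}^{n-1} (−ln p)^k / k!. A minimal sketch of that formula (the general statistic behind the product-of-p-values approach, not MAST's full implementation):

```python
import math

def product_pvalue(pvalues):
    """P-value of the product of n independent uniform p-values:
    P(prod <= p) = p * sum_{k=0}^{n-1} (-ln p)^k / k!."""
    n = len(pvalues)
    prod = 1.0
    for p in pvalues:
        prod *= p
    log_inv = -math.log(prod)     # -ln(product), >= 0
    term, total = 1.0, 1.0
    for k in range(1, n):         # accumulate the truncated series
        term *= log_inv / k
        total += term
    return prod * total
```

For n = 1 the combined p-value reduces to the single p-value, and the result always lies in (0, 1].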

  9. Computing thermal Wigner densities with the phase integration method

    International Nuclear Information System (INIS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-01-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems

  10. Computing thermal Wigner densities with the phase integration method.

    Science.gov (United States)

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  11. Computer Anti-forensics Methods and their Impact on Computer Forensic Investigation

    OpenAIRE

    Pajek, Przemyslaw; Pimenidis, Elias

    2009-01-01

Electronic crime is very difficult to investigate and prosecute, mainly due to the fact that investigators have to build their cases based on artefacts left on computer systems. Nowadays, computer criminals are aware of computer forensics methods and techniques and try to use countermeasure techniques to efficiently impede the investigation processes. In many cases investigation with such countermeasure techniques in place appears to be too expensive, or too time consuming t...

  12. Confidence Level Computation for Combining Searches with Small Statistics

    OpenAIRE

    Junk, Thomas

    1999-01-01

    This article describes an efficient procedure for computing approximate confidence levels for searches for new particles where the expected signal and background levels are small enough to require the use of Poisson statistics. The results of many independent searches for the same particle may be combined easily, regardless of the discriminating variables which may be measured for the candidate events. The effects of systematic uncertainty in the signal and background models are incorporated ...
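
For a single counting channel with no discriminating variables, the confidence-level ratio reduces to a ratio of Poisson tail probabilities, CLs = CL_{s+b} / CL_b. The sketch below shows only this special case, as an assumption-laden simplification; the article's full procedure combines many channels through a likelihood-ratio test statistic and includes systematic uncertainties:

```python
import math

def poisson_cdf(n_obs, mu):
    # P(n <= n_obs) for a Poisson distribution with mean mu.
    term, total = math.exp(-mu), math.exp(-mu)
    for k in range(1, n_obs + 1):
        term *= mu / k
        total += term
    return total

def cls(n_obs, s, b):
    """Simplified one-channel CLs: CL_{s+b} / CL_b for observed count
    n_obs, expected signal s and expected background b."""
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)
```

A signal hypothesis is conventionally excluded at 95% confidence when CLs < 0.05; with s = 0 the ratio is exactly 1, and it decreases monotonically as the hypothesized signal grows.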

  13. Diagnostic performance of combined single photon emission computed tomographic scintimammography and ultrasonography based on computer-aided diagnosis for breast cancer

    International Nuclear Information System (INIS)

    Hwang, Kyung Hoon; Choi, Duck Joo; Choe, Won Sick; Lee, Jun Gu; Kim, Jong Hyo; Lee, Hyung Ji; Om, Kyong Sik; Lee, Byeong Il

    2007-01-01

We investigated whether the diagnostic performance of SPECT scintimammography (SMM) can be improved by adding computer-aided diagnosis (CAD) of ultrasonography (US). We reviewed breast SPECT SMM images and corresponding US images from 40 patients with breast masses (21 malignant and 19 benign tumors). The quantitative data of SPECT SMM were obtained as the uptake ratio of lesion to contralateral normal breast. The morphologic features of the breast lesions on US were extracted and quantitated using the automated CAD software program. The diagnostic performance of SPECT SMM and CAD of US alone was determined using receiver operating characteristic (ROC) curve analysis. The best discriminating parameter (D-value) combining SPECT SMM and the CAD of US was created. The sensitivity, specificity and accuracy of the two combined diagnostic modalities were compared to those of a single one. Both SPECT SMM and CAD of US showed a relatively good diagnostic performance (area under curve=0.846 and 0.831, respectively). Combining the results of SPECT SMM and CAD of US resulted in improved diagnostic performance (area under curve=0.860), but there was no statistical difference in sensitivity, specificity and accuracy between the combined method and a single modality. It seems that combining the results of SPECT SMM and CAD of breast US does not significantly improve the diagnostic performance for diagnosis of breast cancer, compared with that of SPECT SMM alone. However, SPECT SMM and CAD of US may complement each other in differential diagnosis of breast cancer

  14. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Computational and Instrumental Methods in EPR Prof. Bender, Fordham University Prof. Lawrence J. Berliner, University of Denver Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  15. Investigation on human serum albumin and Gum Tragacanth interactions using experimental and computational methods.

    Science.gov (United States)

    Moradi, Sajad; Taran, Mojtaba; Shahlaei, Mohsen

    2018-02-01

The interaction of human serum albumin with Gum Tragacanth, a biodegradable bio-polymer, has been studied. For this purpose, several experimental and computational methods were used. Thermodynamic parameters and the mode of interaction were investigated using fluorescence spectroscopy at 300 and 310K. Fourier transformed infrared spectra and synchronous fluorescence spectroscopy were also recorded. To give detailed insight into the possible interactions, docking and molecular dynamics simulations were also applied. Results show that the interaction is based on hydrogen bonding and van der Waals forces. Structural analysis implies no adverse change in protein conformation upon binding of GT. Furthermore, computational methods confirm some evidence of secondary structure enhancement of the protein in the presence of Gum Tragacanth. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Determining the performance of a Diffuser Augmented Wind Turbine using a combined CFD/BEM method

    Directory of Open Access Journals (Sweden)

    Kesby Joss E.

    2017-01-01

    Full Text Available Traditionally, the optimisation of a Diffuser Augmented Wind Turbine has focused on maximising power output. However, due to the often less than ideal location of small-scale turbines, cut-in speed and starting time are of equal importance in maximising Annual Energy Production, which is the ultimate goal of any wind turbine design. This paper proposes a method of determining power output, cut-in speed and starting time using a combination of Computational Fluid Dynamics and Blade Element Momentum theory. The proposed method has been validated against published experimental data.

  17. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

The computational methods for high-energy radiation transport related to shielding of the SNQ spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick-shield calculations. A short guideline for future development of such a Monte Carlo code is given

  18. Uyghur face recognition method combining 2DDCT with POEM

    Science.gov (United States)

    Yi, Lihamu; Ya, Ermaimaiti

    2017-11-01

In this paper, in light of the reduced recognition rate and poor robustness of Uyghur face recognition under illumination variation and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into 8×8 blocks, and the blocked images were converted into the frequency domain using the 2DDCT; secondly, the images were compressed by discarding the non-sensitive medium-frequency and high-frequency parts, which reduces the feature dimensions needed for the Uyghur face images and further reduces the amount of computation; thirdly, the POEM histograms of the Uyghur face images were obtained by calculating the POEM feature quantities; fourthly, the POEM histograms were concatenated as the texture histogram of each central feature point to obtain the texture features of the Uyghur face feature points; finally, classification of the training samples was carried out using a deep learning algorithm. Simulation experiments showed that the proposed algorithm further improved the recognition rate on the self-built Uyghur face database, greatly improved the computing speed, and had strong robustness.
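
The block-wise 2DDCT feature step can be sketched with a plain-NumPy orthonormal DCT-II; the block size and the number of retained low-frequency coefficients below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II transform matrix (rows are cosine basis vectors).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1.0 / np.sqrt(2.0)
    return m * np.sqrt(2.0 / n)

def block_dct_features(img, block=8, keep=3):
    """Split an image into block x block tiles, apply the 2-D DCT-II to
    each tile and keep the top-left keep x keep low-frequency coefficients."""
    d = dct_matrix(block)
    h, w = img.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = d @ img[r:r + block, c:c + block] @ d.T
            feats.append(coeffs[:keep, :keep].ravel())
    return np.concatenate(feats)
```

Keeping only the low-frequency corner of each tile is what provides the dimensionality reduction described in the abstract; the discarded coefficients carry the fine, noise-sensitive detail.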

  19. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel`son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  20. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Hybrid Modeling and Optimization of Manufacturing Combining Artificial Intelligence and Finite Element Method

    CERN Document Server

    Quiza, Ramón; Davim, J Paulo

    2012-01-01

    Artificial intelligence (AI) techniques and the finite element method (FEM) are both powerful computing tools, which are extensively used for modeling and optimizing manufacturing processes. The combination of these tools has resulted in a new flexible and robust approach as several recent studies have shown. This book aims to review the work already done in this field as well as to expose the new possibilities and foreseen trends. The book is expected to be useful for postgraduate students and researchers, working in the area of modeling and optimization of manufacturing processes.

  2. Classical versus Computer Algebra Methods in Elementary Geometry

    Science.gov (United States)

    Pech, Pavel

    2005-01-01

Computer algebra methods based on results of commutative algebra, such as Groebner bases of ideals and elimination of variables, make it possible to solve complex, elementary and non-elementary problems of geometry which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  3. Comparison of Five Computational Methods for Computing Q Factors in Photonic Crystal Membrane Cavities

    DEFF Research Database (Denmark)

    Novitsky, Andrey; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2017-01-01

    Five state-of-the-art computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Specia...

  4. Methods in computed angiotomography of the brain

    International Nuclear Information System (INIS)

    Yamamoto, Yuji; Asari, Shoji; Sadamoto, Kazuhiko.

    1985-01-01

The authors introduce methods for computed angiotomography of the brain. The setting of scan planes and levels and the minimum-dose bolus (MinDB) injection of contrast medium are described in detail. These methods are easily and safely employed with CT scanners already in widespread use. Computed angiotomography is expected to find clinical application in many institutions because of its diagnostic value in screening for cerebrovascular lesions and in demonstrating the relationship between pathological lesions and cerebral vessels. (author)

  5. Variational-moment method for computing magnetohydrodynamic equilibria

    International Nuclear Information System (INIS)

    Lao, L.L.

    1983-08-01

A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method since has been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed
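
The spectral decomposition of a smooth, nested flux surface can be sketched as follows; the up-down-symmetric, cosine-only surface shape used here is an illustrative simplification of the Fourier representation mentioned above:

```python
import numpy as np

def fourier_moments(r_theta, m_max):
    """Cosine-series coefficients R_m of a flux-surface shape
    R(theta) = sum_m R_m cos(m*theta), sampled uniformly on [0, 2*pi)."""
    n = len(r_theta)
    c = np.fft.rfft(r_theta) / n        # one-sided spectrum of real samples
    coeffs = 2.0 * np.real(c[:m_max + 1])
    coeffs[0] /= 2.0                    # the m = 0 (mean) term is not doubled
    return coeffs
```

Truncating the series at a small m_max is what makes moment-method representations compact: a few coefficients per surface replace a full 2-D grid.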

  6. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 2 2010-04-01 2010-04-01 false Methods of computing depreciation. 1.167(b)-0....167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...

  7. Computing wave functions in multichannel collisions with non-local potentials using the R-matrix method

    Science.gov (United States)

    Bonitati, Joey; Slimmer, Ben; Li, Weichuan; Potel, Gregory; Nunes, Filomena

    2017-09-01

    The calculable form of the R-matrix method has been previously shown to be a useful tool in approximately solving the Schrodinger equation in nuclear scattering problems. We use this technique combined with the Gauss quadrature for the Lagrange-mesh method to efficiently solve for the wave functions of projectile nuclei in low energy collisions (1-100 MeV) involving an arbitrary number of channels. We include the local Woods-Saxon potential, the non-local potential of Perey and Buck, a Coulomb potential, and a coupling potential to computationally solve for the wave function of two nuclei at short distances. Object oriented programming is used to increase modularity, and parallel programming techniques are introduced to reduce computation time. We conclude that the R-matrix method is an effective method to predict the wave functions of nuclei in scattering problems involving both multiple channels and non-local potentials. Michigan State University iCER ACRES REU.
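
The Gauss quadrature underlying the Lagrange-mesh discretization can be illustrated with a bare Gauss-Legendre rule; this generic sketch shows only the quadrature ingredient, not the R-matrix solver itself:

```python
import numpy as np

def gauss_legendre_integrate(f, a, b, n):
    """Integrate f on [a, b] with an n-point Gauss-Legendre rule,
    the quadrature on which Lagrange-mesh basis functions are built."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # affine map to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))
```

An n-point rule is exact for polynomials up to degree 2n − 1, which is why matrix elements of smooth potentials evaluated on the mesh converge so quickly with basis size.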

  8. Interactive Rhythm Learning System by Combining Tablet Computers and Robots

    Directory of Open Access Journals (Sweden)

    Chien-Hsing Chou

    2017-03-01

Full Text Available This study proposes a percussion learning device that combines tablet computers and robots. This device comprises two systems: a rhythm teaching system, in which users can compose and practice rhythms by using a tablet computer, and a robot performance system. First, teachers compose the rhythm training contents on the tablet computer. Then, the learners practice these percussion exercises by using the tablet computer and a small drum set. The teaching system provides a new and user-friendly score editing interface for composing a rhythm exercise. It also provides a rhythm rating function to facilitate percussion training for children and improve the stability of rhythmic beating. To encourage children to practice percussion exercises, a robotic performance system is used to interact with the children; this system can perform percussion exercises for students to listen to and then help them practice the exercise. This interaction enhances children’s interest and motivation to learn and practice rhythm exercises. The results of an experimental course and field trials reveal that the proposed system not only increases students’ interest and efficiency in learning but also helps them in understanding musical rhythms through interaction and composing simple rhythms.

  9. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones muscles, fat and organs CT scans are more detailed than standard x-rays. CT scans may be done with or without "contrast Contrast refers to a substance taken by mouth and/ or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scan of the liver and biliary tract are used in the diagnosis of many diseases in the abdomen structures, particularly when another type of examination, such as X-rays, physical examination, and ultra sound is not conclusive. Unfortunately, the presence of noise and artifact in the edges and fine details in the CT images limit the contrast resolution and make diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The sample of study was included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment liver and adjacent organs using image processing technique. The main technique of segmentation used in this study was watershed transform. The scope of image processing and analysis applied to medical application is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed wit the results of Jarritt et al, (2010), Kratchwil et al, (2010), Jover et al, (2011), Yomamoto et al, (1996), Cai et al (1999), Saudha and Jayashree (2010) who used different segmentation filtering based on the methods of enhancing the computed tomography images. 

  10. Computational methods for metabolomic data analysis of ion mobility spectrometry data-reviewing the state of the art

    DEFF Research Database (Denmark)

    Hauschild, Anne-Christin; Schneider, Till; Pauling, Josch

    2012-01-01

    Ion mobility spectrometry combined with multi-capillary columns (MCC/IMS) is a well-known technology for detecting volatile organic compounds (VOCs). We may utilize MCC/IMS for scanning human exhaled air, bacterial colonies or cell lines, for example. Thereby we gain information about the human... of computational approaches for analyzing the huge amount of emerging data sets. Here, we will review the state of the art and highlight existing challenges. First, we address methods for raw data handling, data storage and visualization. Afterwards we will introduce de-noising, peak picking and other pre... that MCC/IMS coupled with sophisticated computational methods has the potential to successfully address a broad range of biomedical questions. While we can solve most of the data pre-processing steps satisfactorily, some computational challenges with statistical learning and model validation remain.

  11. The combined Petrov-Galerkin method with auto-adapting schemes and its applications in numerical resolution of problems with limit layer

    International Nuclear Information System (INIS)

    Silva, R.S.; Galeao, A.C.; Carmo, E.G.D. do

    1989-07-01

    In this paper a new finite element model is constructed by combining an r-refinement scheme with the CCAU method. The new formulation gives better approximations for boundary and internal layers than the standard CCAU, without increasing the complexity of the computer code. (author) [pt

  12. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices inspired by Homma–Saltelli, which improves slightly their use of model evaluations, and then derive for this generic type of computational methods an estimator of the error estimation of sensitivity indices with respect to the sampling size. It allows the detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
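
The pick-and-freeze estimator this abstract builds on can be sketched in a few lines. This is a minimal illustration of the classic Homma–Saltelli scheme for first-order Sobol indices, not the authors' improved estimator; the additive test model and all parameter values are invented for the example.

```python
import numpy as np

def sobol_first_order(f, d, n, seed=0):
    """First-order Sobol indices by the pick-and-freeze scheme:
    two independent sample matrices A and B, plus d hybrid matrices
    AB_i in which column i of A is replaced by column i of B."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                      # "freeze" every input but x_i
        S[i] = np.mean(fB * (f(AB) - fA)) / var  # Saltelli-style estimator
    return S

# Additive test model on [0,1]^2: exact first-order indices are 0.2 and 0.8.
S = sobol_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1], d=2, n=20000)
```

Because the model is additive, the estimates can be checked against the closed-form values (Var = 5/12, contributions 1/12 and 4/12).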

  13. Improved look-up table method of computer-generated holograms.

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT. Optical experiments are carried out to validate the effectiveness of the proposed method.
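
The core idea, pre-computing distance-dependent factors once and replacing the per-pixel complex exponential of the CRT method with a table lookup, can be sketched as follows. This is a simplified scalar-diffraction sketch, not the paper's GPU implementation; the wavelength, table range and grid sizes are illustrative assumptions.

```python
import numpy as np

wavelength = 633e-9                      # assumed He-Ne wavelength [m]
k = 2.0 * np.pi / wavelength

# Pre-compute the "distance factors" exp(i k r)/r once on a radial grid.
r_table = np.linspace(0.05, 0.25, 4096)
factor_table = np.exp(1j * k * r_table) / r_table

def hologram(points, xx, yy, z):
    """Accumulate each object point's spherical wave on the hologram plane,
    replacing the per-pixel complex exponential with a table lookup."""
    H = np.zeros(xx.shape, dtype=complex)
    for px, py in points:
        r = np.sqrt((xx - px) ** 2 + (yy - py) ** 2 + z ** 2)
        idx = np.clip(np.searchsorted(r_table, r), 0, r_table.size - 1)
        H += factor_table[idx]           # lookup instead of exp()
    return H

n = 128
xs = np.linspace(-1e-3, 1e-3, n)
xx, yy = np.meshgrid(xs, xs)
H = hologram([(0.0, 0.0), (2e-4, -1e-4)], xx, yy, z=0.1)
```

The lookup trades a transcendental evaluation per pixel for an index computation, which is the source of the speed-up; table resolution governs the accuracy of the reconstructed field.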

  14. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  15. Combined methods of tolerance increasing for embedded SRAM

    Science.gov (United States)

    Shchigorev, L. A.; Shagurin, I. I.

    2016-10-01

    The possibilities of combining different methods of increasing fault tolerance for SRAM, such as error detection and correction codes, parity bits, and redundant elements, are considered. Area penalties due to using combinations of these methods are investigated. Estimates are made for different configurations of a 4K x 128 RAM memory block for a 28 nm manufacturing process. An evaluation of the effectiveness of the proposed combinations is also reported. The results of these investigations can be useful for designing fault-tolerant systems-on-chip.
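
As a concrete instance of the error-correcting-code ingredient, here is a minimal Hamming(7,4) single-error-correcting code of the kind such SRAM schemes combine with parity bits and redundant elements. The codes used for a 128-bit word would be wider (e.g. SEC-DED), so this is an illustration of the mechanism only.

```python
def hamming74_encode(d):
    """4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Return (corrected codeword, 1-based position of the flipped bit,
    or 0 if the syndrome is clean)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # binary position of the error
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the bad bit back
    return c, syndrome

cw = hamming74_encode([1, 0, 1, 1])   # -> [0, 1, 1, 0, 0, 1, 1]
```

Any single bit flip in the stored word is located by the three-bit syndrome and corrected on read-out; redundant rows/columns then handle cells that fail persistently.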

  16. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented, and an application for smart cars is also introduced. The book can serve as a valuable reference work for researchers.

  17. A Hybrid Computational Intelligence Approach Combining Genetic Programming And Heuristic Classification for Pap-Smear Diagnosis

    DEFF Research Database (Denmark)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan

    2001-01-01

    The paper suggests the combined use of different computational intelligence (CI) techniques in a hybrid scheme, as an effective approach to medical diagnosis. Getting to know the advantages and disadvantages of each computational intelligence technique in the recent years, the time has come...

  18. Efficient free energy calculations by combining two complementary tempering sampling methods.

    Science.gov (United States)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be efficiently crossed in the reaction coordinate (RC) guided sampling, this type of method suffers from identification of the correct RCs or requirements of high dimensionality of the defined RCs for a given system. If only the approximate RCs with significant barriers are used in the simulations, hidden energy barriers with small to medium height would exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause the problem of insufficient sampling. To address the sampling in this so-called hidden barrier situation, here we propose an effective approach to combine temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with the integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD to three systems in the processes with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even if in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as the efficient and powerful tool for the investigation of a broad range of interesting cases.

  19. Combined X-ray fluorescence and absorption computed tomography using a synchrotron beam

    International Nuclear Information System (INIS)

    Hall, C

    2013-01-01

    X-ray computed tomography (CT) and fluorescence X-ray computed tomography (FXCT) using synchrotron sources are both useful tools in biomedical imaging research. Synchrotron CT (SRCT) in its various forms is considered an important technique for biomedical imaging since the phase coherence of SR beams can be exploited to obtain images with high contrast resolution. Using a synchrotron as the source for FXCT ensures a fluorescence signal that is optimally detectable by exploiting the beam's monochromaticity and polarisation. The ability to combine these techniques so that SRCT and FXCT images are collected simultaneously would bring distinct benefits to certain biomedical experiments. Simultaneous image acquisition would alleviate some of the registration difficulties which come from collecting separate data, and it would provide increased information about the sample: functional X-ray images from the FXCT, with the morphological information from the SRCT. A method is presented for generating simultaneous SRCT and FXCT images. Proof-of-principle modelling has been used to show that it is possible to recover a fluorescence image of a point-like source from an SRCT apparatus by suitably modulating the illuminating planar X-ray beam. The projection image can be successfully used for reconstruction by removing the static modulation from the sinogram in the normal flat- and dark-field processing. Detection of the modulated fluorescence signal using an energy-resolving detector allows the position of a fluorescent marker to be obtained using inverse reconstruction techniques. A discussion is made of particular reconstruction methods which might be applied by utilising both the CT and FXCT data.

  20. Computational methods for two-phase flow and particle transport

    CERN Document Server

    Lee, Wen Ho

    2013-01-01

    This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.

  1. A Novel in situ Trigger Combination Method

    International Nuclear Information System (INIS)

    Buzatu, Adrian; Warburton, Andreas; Krumnack, Nils; Yao, Wei-Ming

    2012-01-01

    Searches for rare physics processes using particle detectors in high-luminosity colliding hadronic beam environments require the use of multi-level trigger systems to reject colossal background rates in real time. In analyses like the search for the Higgs boson, there is a need to maximize the signal acceptance by combining multiple different trigger chains when forming the offline data sample. In such statistically limited searches, datasets are often amassed over periods of several years, during which the trigger characteristics evolve and their performance can vary significantly. Reliable production cross-section measurements and upper limits must take into account a detailed understanding of the effective trigger inefficiency for every selected candidate event. We present as an example the complex situation of three trigger chains, based on missing energy and jet energy, to be combined in the context of the search for the Higgs (H) boson produced in association with a W boson at the Collider Detector at Fermilab (CDF). We briefly review the existing techniques for combining triggers, namely the inclusion, division, and exclusion methods. We introduce and describe a novel fourth in situ method whereby, for each candidate event, only the trigger chain with the highest a priori probability of selecting the event is considered. The in situ combination method has advantages of scalability to large numbers of differing trigger chains and of insensitivity to correlations between triggers. We compare the inclusion and in situ methods for signal event yields in the CDF WH search.
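
The in situ rule, considering for each candidate event only the trigger chain with the highest a priori selection probability, reduces to a few lines of logic. The sketch below is an interpretation of the description above; the chain names and efficiency numbers are invented for illustration.

```python
def in_situ_select(event_probs, fired):
    """event_probs: per-event a priori selection probability of each chain;
    fired: set of chains that actually accepted the event.
    The event enters the sample iff its single best chain fired, and is
    weighted by the inverse of that chain's probability -- so no
    correlation terms between triggers are ever needed."""
    best = max(event_probs, key=event_probs.get)
    if best in fired:
        return True, 1.0 / event_probs[best]
    return False, 0.0

# Hypothetical event: the MET-based chain is a priori the most efficient.
accepted, weight = in_situ_select({"MET45": 0.9, "MET35_2J": 0.7},
                                  fired={"MET45", "MET35_2J"})
```

Because only one chain is consulted per event, adding further trigger chains changes only the per-event argmax, which is the scalability advantage claimed over the inclusion method.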

  2. Computational methods for structural load and resistance modeling

    Science.gov (United States)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a materially nonlinear structure considering material damage as a function of several primitive random variables. The results clearly show the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.

  3. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
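
A pixel-driven projector with sub-pixel subdivision, one of the four methods compared, can be sketched directly. Grid size, angles, and the square phantom below are arbitrary choices; a ray-driven method would instead trace multiple ray paths per projection bin.

```python
import numpy as np

def forward_project(img, angles, n_sub=2):
    """Pixel-driven parallel-beam forward projection.

    Each pixel is divided into n_sub x n_sub sub-pixels; every sub-pixel
    centre is projected onto the detector axis and its share of the pixel
    value is accumulated into the nearest bin. Accuracy improves with
    n_sub at a proportional increase in cost -- the speed/accuracy
    trade-off the abstract describes."""
    n = img.shape[0]
    sino = np.zeros((len(angles), n))
    offs = (np.arange(n_sub) + 0.5) / n_sub - 0.5   # sub-pixel centre offsets
    ys, xs = np.mgrid[0:n, 0:n].astype(float)
    xs -= (n - 1) / 2.0
    ys -= (n - 1) / 2.0
    for a, th in enumerate(angles):
        c, s = np.cos(th), np.sin(th)
        for dx in offs:
            for dy in offs:
                t = (xs + dx) * c + (ys + dy) * s    # detector coordinate
                b = np.clip(np.rint(t + (n - 1) / 2.0).astype(int), 0, n - 1)
                np.add.at(sino[a], b.ravel(), img.ravel() / n_sub ** 2)
    return sino

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                 # simple square phantom
angles = np.linspace(0.0, np.pi, 30, endpoint=False)
sino = forward_project(img, angles)
```

A quick sanity check on such a projector is mass preservation: every sub-pixel lands in exactly one detector bin, so each row of the sinogram sums to the total image intensity.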

  4. Computer methods in physics 250 problems with guided solutions

    CERN Document Server

    Landau, Rubin H

    2018-01-01

    Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It’s also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and indication of computational and physics difficulty level for each problem.

  5. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    Science.gov (United States)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
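
An unconditional LULHS-style simulation can be sketched by combining the two ingredients named in the title: stratified LHS sampling of independent standard normals, then correlation via a triangular factor of the covariance (here a Cholesky factor stands in for the LU step). The 1-D grid and exponential covariance parameters are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from statistics import NormalDist

def lhs_normal(n, dim, seed=0):
    """Latin Hypercube sample from N(0,1)^dim: one draw per stratum of
    [0,1], strata shuffled independently per dimension, then mapped
    through the standard normal inverse CDF."""
    rng = np.random.default_rng(seed)
    strata = np.tile(np.arange(n), (dim, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n, dim))) / n
    u = np.clip(u, 1e-12, 1.0 - 1e-12)      # keep inv_cdf's domain open
    inv_cdf = np.vectorize(NormalDist().inv_cdf)
    return inv_cdf(u)

def lulhs_unconditional(cov, n, seed=0):
    """Impose the spatial covariance with a triangular factor:
    cov = L L^T and Z = X L^T, so each row of Z is one realization."""
    L = np.linalg.cholesky(cov)
    return lhs_normal(n, cov.shape[0], seed) @ L.T

# Illustrative 1-D grid of 10 cells with exponential covariance, range 3.
x = np.arange(10.0)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 3.0)
Z = lulhs_unconditional(cov, n=2000)
```

Because each marginal is stratified, the empirical covariance of the realizations matches the target with fewer samples than plain Monte Carlo, which is the efficiency gain the abstract reports.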

  6. All for One: Integrating Budgetary Methods by Computer.

    Science.gov (United States)

    Herman, Jerry J.

    1994-01-01

    With the advent of high speed and sophisticated computer programs, all budgetary systems can be combined in one fiscal management information system. Defines and provides examples for the four budgeting systems: (1) function/object; (2) planning, programming, budgeting system; (3) zero-based budgeting; and (4) site-based budgeting. (MLF)

  7. Proceedings of computational methods in materials science

    International Nuclear Information System (INIS)

    Mark, J.E. Glicksman, M.E.; Marsh, S.P.

    1992-01-01

    The Symposium on which this volume is based was conceived as a timely expression of some of the fast-paced developments occurring throughout materials science and engineering. It focuses particularly on those involving modern computational methods applied to model and predict the response of materials under a diverse range of physico-chemical conditions. The current easy access of many materials scientists in industry, government laboratories, and academe to high-performance computers has opened many new vistas for predicting the behavior of complex materials under realistic conditions. Some have even argued that modern computational methods in materials science and engineering are literally redefining the bounds of our knowledge from which we predict structure-property relationships, perhaps forever changing the historically descriptive character of the science and much of the engineering

  8. A hybrid finite element analysis and evolutionary computation method for the design of lightweight lattice components with optimized strut diameter

    DEFF Research Database (Denmark)

    Salonitis, Konstantinos; Chantzis, Dimitrios; Kappatos, Vasileios

    2017-01-01

    approaches or with the use of topology optimization methodologies. An optimization approach utilizing multipurpose optimization algorithms has not been proposed yet. This paper presents a novel user-friendly method for the design optimization of lattice components towards weight minimization, which combines...... finite element analysis and evolutionary computation. The proposed method utilizes the cell homogenization technique in order to reduce the computational cost of the finite element analysis and a genetic algorithm in order to search for the most lightweight lattice configuration. A bracket consisting...

  9. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  10. Computational and experimental methods for enclosed natural convection

    International Nuclear Information System (INIS)

    Larson, D.W.; Gartling, D.K.; Schimmel, W.P. Jr.

    1977-10-01

    Two computational procedures and one optical experimental procedure for studying enclosed natural convection are described. The finite-difference and finite-element numerical methods are developed and several sample problems are solved. Results obtained from the two computational approaches are compared. A temperature-visualization scheme using laser holographic interferometry is described, and results from this experimental procedure are compared with results from both numerical methods

  11. Simple analytical methods for computing the gravity-wave contribution to the cosmic background radiation anisotropy

    International Nuclear Information System (INIS)

    Wang, Y.

    1996-01-01

    We present two simple analytical methods for computing the gravity-wave contribution to the cosmic background radiation (CBR) anisotropy in inflationary models; one method uses a time-dependent transfer function, the other uses an approximate gravity-wave mode function which is a simple combination of the lowest order spherical Bessel functions. We compare the CBR anisotropy tensor multipole spectrum computed using our methods with the previous result of the highly accurate numerical method, the ''Boltzmann'' method. Our time-dependent transfer function is more accurate than the time-independent transfer function found by Turner, White, and Lindsey; however, we find that the transfer function method is only good for l ≲ 120. Using our approximate gravity-wave mode function, we obtain much better accuracy; the tensor multipole spectrum we find differs by less than 2% for l ≲ 50, less than 10% for l ≲ 120, and less than 20% for l ≤ 300 from the ''Boltzmann'' result. Our approximate graviton mode function should be quite useful in studying tensor perturbations from inflationary models. copyright 1996 The American Physical Society

  12. Computational methods for stellarator configurations

    International Nuclear Information System (INIS)

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first one was to continue to develop computational methods for the study of three dimensional magnetic confinement configurations. The second one was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three dimensional nonlinear effects in Tokamaks configurations, these self consistent computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin as well as through participation in the Sherwood and APS meetings

  13. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  14. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets (http://compms.org/RefData) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  15. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable in processing with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA the new powerful algorithm which improves many geometric computations and makes th

  16. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  17. A Method of Extracting Ontology Module Using Concept Relations for Sharing Knowledge in Mobile Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Keonsoo Lee

    2014-01-01

    Full Text Available In mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members, who are employed in the cooperation group, need to share the knowledge for mutual understanding. Even if ontology can be the right tool for this goal, there are several issues to make a right ontology. As the cost and complexity of managing knowledge increase according to the scale of the knowledge, reducing the size of ontology is one of the critical issues. In this paper, we propose a method of extracting ontology module to increase the utility of knowledge. For the given signature, this method extracts the ontology module, which is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relation of concepts. By employing this module, instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of shared knowledge.

  18. A method of extracting ontology module using concept relations for sharing knowledge in mobile cloud computing environment.

    Science.gov (United States)

    Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won

    2014-01-01

    In mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members, who are employed in the cooperation group, need to share the knowledge for mutual understanding. Even if ontology can be the right tool for this goal, there are several issues to make a right ontology. As the cost and complexity of managing knowledge increase according to the scale of the knowledge, reducing the size of ontology is one of the critical issues. In this paper, we propose a method of extracting ontology module to increase the utility of knowledge. For the given signature, this method extracts the ontology module, which is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relation of concepts. By employing this module, instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of shared knowledge.
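
The extraction step described in both records, keeping only the part of the ontology reachable from the service's signature through concept relations, can be approximated with a graph traversal. The sketch below ignores the semantic self-containment checks a real module extractor performs, and the toy triples are invented for illustration.

```python
from collections import deque

def extract_module(relations, signature):
    """relations: iterable of (subject, predicate, object) triples.
    Returns the triples reachable from the signature concepts by
    following relations transitively -- a purely syntactic module."""
    adj = {}
    for s, p, o in relations:
        adj.setdefault(s, []).append((s, p, o))
    seen, module, queue = set(signature), [], deque(signature)
    while queue:
        concept = queue.popleft()
        for s, p, o in adj.get(concept, []):
            module.append((s, p, o))
            if o not in seen:           # visit each target concept once
                seen.add(o)
                queue.append(o)
    return module

onto = [("Car", "subClassOf", "Vehicle"),
        ("Vehicle", "hasPart", "Engine"),
        ("Hospital", "subClassOf", "Building")]
mod = extract_module(onto, {"Car"})
```

The cooperating objects then share only `mod` instead of the whole ontology, which is the load reduction both abstracts argue for.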

  19. GRAPH-BASED POST INCIDENT INTERNAL AUDIT METHOD OF COMPUTER EQUIPMENT

    Directory of Open Access Journals (Sweden)

    I. S. Pantiukhin

    2016-05-01

    Full Text Available A graph-based post-incident internal audit method for computer equipment is proposed. The essence of the proposed solution consists in establishing relationships among hard disk dumps (images), RAM and network data. This method is intended for describing information security incident properties during the post-incident internal audit of computer equipment. At the first step, hard disk dumps are received and formed. They are then separated into a set of components. The set of components includes a large set of attributes that forms the basis for building the graph. The separated data is recorded into a non-relational database management system (NoSQL) adapted for graph storage, fast access and processing. At the final step, the dump-linking method is applied. The presented method enables a human expert in information security or computer forensics to carry out a more precise and informative internal audit of computer equipment. The proposed method reduces the time spent on internal audit of computer equipment while increasing the accuracy and informativeness of such an audit. The method has development potential and can be applied along with other components in the tasks of user identification and computer forensics.

  20. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
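    The iterative kernel referred to above can be sketched as a plain conjugate gradient solver. The small symmetric positive-definite system is invented for illustration, and the domain decomposition preconditioning itself is omitted.

    ```python
    # Minimal conjugate gradient (CG) for a small symmetric positive-definite
    # system; plain Python lists stand in for the distributed vectors.
    def matvec(A, x):
        return [sum(a * xi for a, xi in zip(row, x)) for row in A]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
        x = [0.0] * len(b)
        r = b[:]                      # residual of the zero initial guess
        p = r[:]
        rs = dot(r, r)
        for _ in range(max_iter):
            Ap = matvec(A, p)
            alpha = rs / dot(p, Ap)
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, Ap)]
            rs_new = dot(r, r)
            if rs_new < tol:
                break
            p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
            rs = rs_new
        return x

    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    x = conjugate_gradient(A, b)      # exact solution is [1/11, 7/11]
    ```

    On an n-by-n system CG converges in at most n iterations in exact arithmetic, which is why preconditioning (e.g. by domain decomposition) matters for large discretized problems.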

  1. Diagnostic performance of combined noninvasive coronary angiography and myocardial perfusion imaging using 320 row detector computed tomography

    DEFF Research Database (Denmark)

    Vavere, Andrea L; Simon, Gregory G; George, Richard T

    2013-01-01

    Multidetector coronary computed tomography angiography (CTA) is a promising modality for widespread clinical application because of its noninvasive nature and high diagnostic accuracy as found in previous studies using 64 to 320 simultaneous detector rows. It is, however, limited in its ability...... to detect myocardial ischemia. In this article, we describe the design of the CORE320 study ("Combined coronary atherosclerosis and myocardial perfusion evaluation using 320 detector row computed tomography"). This prospective, multicenter, multinational study is unique in that it is designed to assess...... the diagnostic performance of combined 320-row CTA and myocardial CT perfusion imaging (CTP) in comparison with the combination of invasive coronary angiography and single-photon emission computed tomography myocardial perfusion imaging (SPECT-MPI). The trial is being performed at 16 medical centers located in 8...

  2. Application of statistical method for FBR plant transient computation

    International Nuclear Information System (INIS)

    Kikuchi, Norihiro; Mochizuki, Hiroyasu

    2014-01-01

    Highlights: • A statistical method with a large trial number up to 10,000 is applied to the plant system analysis. • A turbine trip test conducted at the “Monju” reactor is selected as a plant transient. • A reduction method for trial numbers is discussed. • The result with a reduced trial number can express the base regions of the computed distribution. -- Abstract: It is obvious that design tolerances, errors included in operation, and statistical errors in empirical correlations affect the transient behavior. The purpose of the present study is to apply the above-mentioned statistical errors to a plant system computation in order to evaluate the statistical distribution contained in the transient evolution. The selected computation case is the turbine trip test conducted at 40% electric power of the prototype fast reactor “Monju”. All of the heat transport systems of “Monju” are modeled with the NETFLOW++ system code, which has been validated using the plant transient tests of the experimental fast reactor Joyo and of “Monju”. The effects of parameters on the upper plenum temperature are confirmed by sensitivity analyses, and dominant parameters are chosen. The statistical errors are applied to each computation deck by using pseudorandom numbers and the Monte-Carlo method. The dSFMT (Double precision SIMD-oriented Fast Mersenne Twister), a refined version of the Mersenne Twister (MT), is adopted as the pseudorandom number generator. In the present study, uniform random numbers are generated by dSFMT, and these random numbers are transformed to the normal distribution by the Box–Muller method. Ten thousand different computations are performed at once. In every computation case, the steady calculation is performed for 12,000 s, and the transient calculation is performed for 4000 s. For the purpose of the present statistical computation, it is important that the base regions of the distribution functions should be calculated precisely. A large number of
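    The sampling step described in this abstract can be sketched as follows. Python's built-in Mersenne Twister stands in for dSFMT, and the nominal parameter value and relative uncertainty are invented.

    ```python
    # Sketch of the Monte-Carlo sampling step: uniform random numbers are
    # transformed to normally distributed parameter perturbations by the
    # Box-Muller method.
    import math
    import random

    def box_muller(rng):
        """Return one standard-normal sample from two uniform samples."""
        u1, u2 = rng.random(), rng.random()
        return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

    def perturbed_parameters(nominal, rel_sigma, trials, seed=1):
        """Generate `trials` perturbed copies of a nominal parameter."""
        rng = random.Random(seed)
        return [nominal * (1.0 + rel_sigma * box_muller(rng))
                for _ in range(trials)]

    # e.g. 10,000 trials of a parameter with 2% relative uncertainty
    samples = perturbed_parameters(nominal=1.0, rel_sigma=0.02, trials=10_000)
    mean = sum(samples) / len(samples)
    ```

    Each of the 10,000 perturbed decks would then drive one transient computation, and the ensemble of results gives the statistical distribution of the transient.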

  3. Modification of design methods to suit computer aided design of pumps

    International Nuclear Information System (INIS)

    Kumaraswamy, S.

    1994-01-01

    Engineering designs involve a large number of repetitive calculations to achieve optimisation, so fast and accurate computers lend themselves as an aid to the design process. However, certain modifications to the steps of the conventional design method become necessary for easier adaptation. In addition, it is advantageous if the program itself prompts the designer with ranges of the empirical design coefficients taken from design charts, leaving the choice to the designer. This paper describes two examples of modification in pump design. In the first case, Anderson's area ratio method and Pfleiderer's slip power method are combined to achieve an integrated design of impeller and casing. The second case is the design of a mixed flow pump impeller by considering it as an assembly of a number of radial flow pump impellers called part impellers. In addition, these modifications are useful in redesign for a different operating condition or in matching impellers to existing casings. (author). 13 refs., 4 figs

  4. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  
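    As context for the finite-difference time-domain method reviewed in the book, a minimal 1-D FDTD update for the wave equation might look as follows. The grid size, step count, and Courant number are illustrative only and are not taken from the book.

    ```python
    # Minimal 1-D FDTD leapfrog update for the scalar wave equation with
    # rigid (zero-pressure) boundaries; stable for Courant number <= 1.
    def fdtd_1d(n=200, steps=300, courant=0.5):
        prev = [0.0] * n
        curr = [0.0] * n
        curr[n // 2] = 1.0          # initial pressure pulse in the middle
        c2 = courant * courant
        for _ in range(steps):
            nxt = [0.0] * n         # boundary cells stay fixed at zero
            for i in range(1, n - 1):
                nxt[i] = (2 * curr[i] - prev[i]
                          + c2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
            prev, curr = curr, nxt
        return curr

    field = fdtd_1d()
    ```

    Room-acoustics FDTD codes apply the same second-order stencil in three dimensions, with absorbing or impedance boundaries in place of the rigid ends used here.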

  5. Development of an aeroelastic code based on three-dimensional viscous–inviscid method for wind turbine computations

    DEFF Research Database (Denmark)

    Sessarego, Matias; Ramos García, Néstor; Sørensen, Jens Nørkær

    2017-01-01

    Aerodynamic and structural dynamic performance analysis of modern wind turbines are routinely estimated in the wind energy field using computational tools known as aeroelastic codes. Most aeroelastic codes use the blade element momentum (BEM) technique to model the rotor aerodynamics and a modal......, multi-body or the finite-element approach to model the turbine structural dynamics. The present work describes the development of a novel aeroelastic code that combines a three-dimensional viscous–inviscid interactive method, method for interactive rotor aerodynamic simulations (MIRAS...... Code Comparison Collaboration Project. Simulation tests consist of steady wind inflow conditions with different combinations of yaw error, wind shear, tower shadow and turbine-elastic modeling. Turbulent inflow created by using a Mann box is also considered. MIRAS-FLEX results, such as blade tip...

  6. BLUES function method in computational physics

    Science.gov (United States)

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.

  7. Application of the level set method for multi-phase flow computation in fusion engineering

    International Nuclear Information System (INIS)

    Luo, X-Y.; Ni, M-J.; Ying, A.; Abdou, M.

    2006-01-01

    Numerical simulation of multi-phase flow is essential to evaluate the feasibility of a liquid protection scheme for the power plant chamber. The level set method is one of the best methods for computing and analyzing the motion of the interface in multi-phase flow. This paper presents a general formula for the second-order projection method combined with the level set method to simulate unsteady incompressible multi-phase flow, with or without phase change, encountered in fusion science and engineering. The third-order ENO scheme and the second-order semi-implicit Crank–Nicolson scheme are used to update the convective and diffusion terms. The numerical results show this method can handle the complex deformation of the interface; the effect of liquid-vapor phase change will be included in future work
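    A minimal sketch of the level set idea, assuming a 1-D signed distance function advected with a first-order upwind scheme; the projection method, ENO, and Crank-Nicolson machinery of the paper are omitted.

    ```python
    # The interface is the zero contour of a signed distance function phi,
    # advected here in 1-D with a first-order upwind difference (velocity > 0).
    def advect_level_set(phi, velocity, dx, dt, steps):
        c = velocity * dt / dx                   # CFL number, must be <= 1
        for _ in range(steps):
            new = phi[:]
            for i in range(1, len(phi)):
                new[i] = phi[i] - c * (phi[i] - phi[i - 1])
            phi = new
        return phi

    dx = 0.1
    phi = [i * dx - 1.0 for i in range(40)]      # interface (phi = 0) at x = 1.0
    phi = advect_level_set(phi, velocity=1.0, dx=dx, dt=0.05, steps=20)
    # after time 1.0 at unit speed, the interface sits near x = 2.0
    ```

    Because the scheme is exact on linear data, the zero crossing lands on the expected grid point; multi-dimensional solvers add reinitialization to keep phi a signed distance function.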

  8. GRAVTool, Advances on the Package to Compute Geoid Model path by the Remove-Compute-Restore Technique, Following Helmert's Condensation Method

    Science.gov (United States)

    Marotta, G. S.

    2017-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astrogeodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and Global Geopotential Models (GGM), respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and adjust these models to one local vertical datum. This research presents the advances in the GRAVTool package for computing geoid models by the RCR technique, following Helmert's condensation method, and its application in a study area. The study area comprises the Federal District of Brazil, covering 6000 km² of undulating relief with heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example, after analysis of the density, DTM and GGM values, show a geoid model computed by the GRAVTool package that agrees better with the reference values used in the study area. The accuracy of the computed model (σ = ± 0.058 m, RMS = 0.067 m, maximum = 0.124 m and minimum = -0.155 m), using a density value of 2.702 g/cm³ ± 0.024 g/cm³, the DTM SRTM Void Filled 3 arc-second and the GGM EIGEN-6C4 up to degree and order 250, matches the uncertainty (σ = ± 0.073 m) of 26 randomly spaced points where the geoid was computed by the geometric leveling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ± 0.076 m, RMS = 0.098 m, maximum = 0.320 m and minimum = -0.061 m).
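    The remove-compute-restore bookkeeping can be sketched as follows. All numbers are invented, and the placeholder `stokes_factor` stands in for the actual Stokes integration of residual anomalies performed by a package such as GRAVTool.

    ```python
    # Toy RCR bookkeeping: long/medium wavelengths (GGM, terrain) are removed
    # from observed gravity, the residual is turned into a residual geoid
    # height, and the removed contributions are restored as geoid heights.
    def remove_compute_restore(g_obs, g_ggm, g_terrain, stokes_factor=0.01):
        g_residual = g_obs - g_ggm - g_terrain   # remove step (mGal)
        n_residual = stokes_factor * g_residual  # compute step (placeholder
        return n_residual                        # for Stokes integration)

    n_ggm, n_terrain = 12.40, 0.35               # metres, invented values
    n_res = remove_compute_restore(g_obs=25.0, g_ggm=21.0, g_terrain=1.5)
    n_geoid = n_ggm + n_terrain + n_res          # restore step
    ```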

  9. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Science.gov (United States)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-12-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.
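    The fuzzy c-means clustering step used in the first example might be sketched as follows on toy 1-D "features"; real CAD inputs are multi-dimensional texture and motion vectors, and the initialisation here is deliberately crude.

    ```python
    # Compact fuzzy c-means (FCM): alternate between membership updates and
    # membership-weighted center updates until (approximate) convergence.
    def fuzzy_c_means(data, c=2, m=2.0, iters=50):
        centers = [min(data), max(data)]         # crude initialisation
        u = []
        for _ in range(iters):
            u = []
            for x in data:
                d = [abs(x - v) + 1e-12 for v in centers]   # avoid /0
                u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                    for j in range(c))
                          for i in range(c)])
            centers = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
                       sum(u[k][i] ** m for k in range(len(data)))
                       for i in range(c)]
        return centers, u

    # two well-separated groups of scalar "features"
    data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
    centers, memberships = fuzzy_c_means(data)
    ```

    Each sample receives a graded membership in both classes rather than a hard label, which is the property the CAD pipeline exploits when separating symptomatic from asymptomatic plaques.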

  10. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    International Nuclear Information System (INIS)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-01-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis

  11. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Stoitsis, John [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)]. E-mail: stoitsis@biosim.ntua.gr; Valavanis, Ioannis [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Mougiakakou, Stavroula G. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Golemati, Spyretta [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Nikita, Alexandra [University of Athens, Medical School 152 28 Athens (Greece); Nikita, Konstantina S. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)

    2006-12-20

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  12. Method for accelerated aging under combined environmental stress conditions

    International Nuclear Information System (INIS)

    Gillen, K.T.

    1979-01-01

    An accelerated aging method which can be used to simulate aging in combined stress environment situations is described. It is shown how the assumptions of the method can be tested experimentally. Aging data for a chloroprene cable jacketing material in single and combined radiation and temperature environments are analyzed and it is shown that these data offer evidence for the validity of the method
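    The abstract does not give the combined-environment model itself; as generic context, a thermal-only accelerated aging calculation commonly uses the Arrhenius acceleration factor sketched below, with an invented activation energy.

    ```python
    # Arrhenius acceleration factor: how much faster thermal aging proceeds
    # at an elevated test temperature than at the service temperature.
    import math

    K_BOLTZMANN_EV = 8.617e-5        # Boltzmann constant, eV/K

    def arrhenius_acceleration(ea_ev, t_use_k, t_accel_k):
        """Speed-up of aging at t_accel_k relative to t_use_k."""
        return math.exp(ea_ev / K_BOLTZMANN_EV *
                        (1.0 / t_use_k - 1.0 / t_accel_k))

    # e.g. an assumed Ea = 1.0 eV, service at 25 C, oven test at 100 C
    factor = arrhenius_acceleration(1.0, 298.15, 373.15)
    ```

    Combined radiation-temperature environments, as studied in the paper, require testing whether such single-stress extrapolations remain valid when stresses act together.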

  13. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  14. Computational botany methods for automated species identification

    CERN Document Server

    Remagnino, Paolo; Wilkin, Paul; Cope, James; Kirkup, Don

    2017-01-01

    This book discusses innovative methods for mining information from images of plants, especially leaves, and highlights the diagnostic features that can be implemented in fully automatic systems for identifying plant species. Adopting a multidisciplinary approach, it explores the problem of plant species identification, covering both the concepts of taxonomy and morphology. It then provides an overview of morphometrics, including the historical background and the main steps in the morphometric analysis of leaves together with a number of applications. The core of the book focuses on novel diagnostic methods for plant species identification developed from a computer scientist’s perspective. It then concludes with a chapter on the characterization of botanists' visions, which highlights important cognitive aspects that can be implemented in a computer system to more accurately replicate the human expert’s fixation process. The book not only represents an authoritative guide to advanced computational tools fo...

  15. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence with respect to , appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the Fitz...

  16. Multidisciplinary Design Optimization (MDO) Methods: Their Synergy with Computer Technology in Design Process

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw

    1998-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that, when examined in terms of these attributes, the presently available environment can be shown to be inadequate; a radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by the interaction of a large number of very simple models may be an inspiration for such algorithms; cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
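    The closing idea - complex behavior from many interacting simple models, with cellular automata as the example - can be illustrated by an elementary cellular automaton update (rule 110), in which every cell's next state depends only on its immediate neighborhood, so the update is trivially parallel across cells.

    ```python
    # Elementary cellular automaton: the rule number's bits encode the next
    # state for each of the 8 possible (left, self, right) neighborhoods.
    def ca_step(cells, rule=110):
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4
                          + cells[i] * 2
                          + cells[(i + 1) % n])) & 1
                for i in range(n)]

    # a single live cell in the middle of a periodic grid
    row = [0] * 10 + [1] + [0] * 10
    for _ in range(5):
        row = ca_step(row)
    ```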

  17. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2015-01-01

    Full Text Available The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and individual neighboring pixels and the spatial similarity between central pixels and their neighborhood, and effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.

  18. Combined use of computational chemistry and chemoinformatics methods for chemical discovery

    Energy Technology Data Exchange (ETDEWEB)

    Sugimoto, Manabu, E-mail: sugimoto@kumamoto-u.ac.jp [Graduate School of Science and Technology, Kumamoto University, 2-39-1, Kurokami, Chuo-ku, Kumamoto 860-8555 (Japan); Institute for Molecular Science, 38 Nishigo-Naka, Myodaiji, Okazaki 444-8585 (Japan); CREST, Japan Science and Technology Agency, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Ideo, Toshihiro; Iwane, Ryo [Graduate School of Science and Technology, Kumamoto University, 2-39-1, Kurokami, Chuo-ku, Kumamoto 860-8555 (Japan)

    2015-12-31

    Data analysis is carried out on the numerical results of computational chemistry calculations to obtain knowledge information about molecules. A molecular database is developed to systematically store chemical, electronic-structure, and knowledge-based information. The database is used to find molecules related to the keyword “cancer”. Then electronic-structure calculations are performed to quantitatively evaluate the quantum chemical similarity of the molecules. Among the 377 compounds registered in the database, 24 molecules are found to be “cancer”-related. This set of molecules includes both carcinogens and anticancer drugs. The quantum chemical similarity analysis, carried out using numerical results of density-functional theory calculations, shows that, when certain energy spectra are referred to, carcinogens are reasonably well distinguished from the anticancer drugs. These spectral properties are therefore considered important measures for classification.
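    The paper's actual similarity measure is not specified in the abstract; one plausible sketch is cosine similarity between discretised energy spectra. The spectra below are invented.

    ```python
    # Cosine similarity between discretised spectra as a stand-in for a
    # "quantum chemical similarity" score between molecules.
    import math

    def cosine_similarity(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = (math.sqrt(sum(x * x for x in a)) *
               math.sqrt(sum(y * y for y in b)))
        return num / den

    spectrum_a = [0.0, 0.2, 0.9, 0.4, 0.1]   # invented discretised spectra
    spectrum_b = [0.0, 0.1, 0.8, 0.5, 0.1]   # similar shape to spectrum_a
    spectrum_c = [0.9, 0.1, 0.0, 0.0, 0.7]   # dissimilar shape

    sim_ab = cosine_similarity(spectrum_a, spectrum_b)
    sim_ac = cosine_similarity(spectrum_a, spectrum_c)
    ```

    A classifier built on such scores would group molecules whose computed spectra align, which is the kind of separation between carcinogens and anticancer drugs the abstract reports.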

  19. Teamwork: improved eQTL mapping using combinations of machine learning methods.

    Directory of Open Access Journals (Sweden)

    Marit Ackermann

    Full Text Available Expression quantitative trait loci (eQTL) mapping is a widely used technique to uncover regulatory relationships between genes. A range of methodologies have been developed to map links between expression traits and genotypes. The DREAM (Dialogue on Reverse Engineering Assessments and Methods) initiative is a community project to objectively assess the relative performance of different computational approaches for solving specific systems biology problems. The goal of one of the DREAM5 challenges was to reverse-engineer genetic interaction networks from synthetic genetic variation and gene expression data, which simulates the problem of eQTL mapping. In this framework, we proposed an approach whose originality resides in the use of a combination of existing machine learning algorithms (a committee). Although it was not the best performer, this method was by far the most precise on average. After the competition, we continued in this direction by evaluating other committees using the DREAM5 data and developed a method that relies on Random Forests and LASSO. It achieved a much higher average precision than the DREAM best performer at the cost of slightly lower average sensitivity.
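    The committee idea can be sketched as rank-normalising each base method's scores and averaging them; the per-link scores below are invented stand-ins for Random Forest and LASSO outputs.

    ```python
    # Committee of scorers: rank-normalise each method's per-link scores
    # (so different score scales are comparable), then average.
    def rank_normalise(scores):
        order = sorted(range(len(scores)), key=lambda i: scores[i])
        ranks = [0.0] * len(scores)
        for r, i in enumerate(order):
            ranks[i] = r / (len(scores) - 1)
        return ranks

    def committee(score_lists):
        normalised = [rank_normalise(s) for s in score_lists]
        return [sum(col) / len(col) for col in zip(*normalised)]

    rf_scores    = [0.9, 0.2, 0.5]   # invented per-link scores
    lasso_scores = [2.1, 0.3, 1.8]   # different scale: rank-normalisation
    combined = committee([rf_scores, lasso_scores])
    ```

    Link 0 ranks highest under both base methods and therefore tops the committee score; averaging ranks rewards links on which the methods agree, which is why such committees tend to be precise.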

  20. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict the structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches such as RNAComposer and Rosetta have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on the known aptamers, the web server Riboswitch Calculator and other theoretical methods provide new tools to design synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  1. Progresses in application of computational fluid dynamic methods to large scale wind turbine aerodynamics

    Institute of Scientific and Technical Information of China (English)

    Zhenyu ZHANG; Ning ZHAO; Wei ZHONG; Long WANG; Bofeng XU

    2016-01-01

    The computational fluid dynamics (CFD) methods are applied to aerodynamic problems for large scale wind turbines. The progresses including the aerodynamic analyses of wind turbine profiles, numerical flow simulation of wind turbine blades, evaluation of aerodynamic performance, and multi-objective blade optimization are discussed. Based on the CFD methods, significant improvements are obtained to predict two/three-dimensional aerodynamic characteristics of wind turbine airfoils and blades, and the vortical structure in their wake flows is accurately captured. Combining with a multi-objective genetic algorithm, a 1.5 MW NH-1500 optimized blade is designed with high efficiency in wind energy conversion.

  2. Computational methods for protein identification from mass spectrometry data.

    Directory of Open Access Journals (Sweden)

    Leo McHugh

    2008-02-01

    Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology.
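
    The scoring idea common to database-search identification engines can be sketched in a few lines: compare observed peaks against theoretical fragment ions and rank candidate peptides by the number of shared peaks. The residue masses and the shared-peak-count metric below are simplified assumptions for illustration, not any specific engine's scoring function.

    ```python
    # Toy monoisotopic residue masses in Da (truncated; real tables cover all 20 residues).
    MASS = {"G": 57.02, "A": 71.04, "S": 87.03, "P": 97.05, "V": 99.07, "L": 113.08, "K": 128.09}
    PROTON = 1.007

    def b_ions(peptide):
        """Theoretical singly charged b-ion m/z values for a peptide."""
        mz, out = PROTON, []
        for aa in peptide:
            mz += MASS[aa]
            out.append(round(mz, 2))
        return out

    def shared_peak_score(observed, peptide, tol=0.05):
        """Count observed peaks that match a theoretical b-ion within tolerance."""
        theo = b_ions(peptide)
        return sum(1 for p in observed for t in theo if abs(p - t) <= tol)

    def identify(observed, candidates):
        """Return the candidate peptide with the highest shared-peak count."""
        return max(candidates, key=lambda pep: shared_peak_score(observed, pep))
    ```

    Real engines add y-ions, charge states, intensity weighting, and statistical significance estimates on top of this basic matching step.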

  3. Wave resistance calculation method combining Green functions based on Rankine and Kelvin source

    Directory of Open Access Journals (Sweden)

    LI Jingyu

    2017-12-01

    [Objectives] At present, the boundary element method (BEM) for wave-making resistance mostly uses a model in which the velocity distribution near the hull is solved first, and the pressure integral is then calculated using the Bernoulli equation. However, this model of wave-making resistance is complex and has low accuracy. [Methods] To address this problem, the present paper derives a compound method for the quick calculation of ship wave resistance, using the Rankine source Green function to solve the hull surface's source density and combining the Lagally theorem concerning source point force calculation based on the Kelvin source Green function to solve the wave resistance. A case for the Wigley model is given. [Results] The results show that, in contrast to the thin-ship method of linear wave resistance theory, this method has higher precision, and in contrast to the method that uses the Kelvin source Green function throughout, it has better computational efficiency. [Conclusions] In general, the algorithm in this paper provides a compromise between precision and efficiency in wave-making resistance calculation.

  4. Computer science handbook. Vol. 13.3. Environmental computer science. Computer science methods for environmental protection and environmental research

    International Nuclear Information System (INIS)

    Page, B.; Hilty, L.M.

    1994-01-01

    Environmental computer science is a new subdiscipline of applied computer science that makes use of methods and techniques of information processing in environmental protection. Owing to the interdisciplinary nature of environmental problems, computer science acts as a mediator between numerous disciplines and institutions in this sector. The handbook reflects the broad spectrum of state-of-the-art environmental computer science. The following important subjects are dealt with: environmental databases and information systems, environmental monitoring, modelling and simulation, visualization of environmental data, and knowledge-based systems in the environmental sector. (orig.) [de

  5. Shale Fracture Analysis using the Combined Finite-Discrete Element Method

    Science.gov (United States)

    Carey, J. W.; Lei, Z.; Rougier, E.; Knight, E. E.; Viswanathan, H.

    2014-12-01

    Hydraulic fracturing (hydrofrac) is a successful method used to extract oil and gas from low-permeability rocks such as shale. However, challenges exist: industry experts estimate that for a single $10 million lateral wellbore fracking operation, only 10% of the hydrocarbons contained in the rock are extracted. To better understand how to improve hydrofrac recovery efficiencies and to lower its costs, LANL recently funded the Laboratory Directed Research and Development (LDRD) project: "Discovery Science of Hydraulic Fracturing: Innovative Working Fluids and Their Interactions with Rocks, Fractures, and Hydrocarbons". Under the support of this project, the LDRD modeling team is working with the experimental team to understand fracture initiation and propagation in shale rocks. LANL's hybrid hydro-mechanical (HM) tool, the Hybrid Optimization Software Suite (HOSS), is being used to simulate the complex fracture and fragmentation processes under a variety of different boundary conditions. HOSS is based on the combined finite-discrete element method (FDEM) and has been proven to be a superior computational tool for multi-fracturing problems. In this work, the comparison of HOSS simulation results to triaxial core flooding experiments will be presented.

  6. A computational method for sharp interface advection

    DEFF Research Database (Denmark)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volu...

  7. Combining the multilevel fast multipole method with the uniform geometrical theory of diffraction

    Directory of Open Access Journals (Sweden)

    A. Tzoulis

    2005-01-01

    The presence of arbitrarily shaped and electrically large objects in the same environment leads to hybridization of the Method of Moments (MoM) with the Uniform Geometrical Theory of Diffraction (UTD). The computation and memory complexity of the MoM solution is improved with the Multilevel Fast Multipole Method (MLFMM). By expanding the k-space integrals in spherical harmonics, a further considerable amount of memory can be saved without compromising accuracy and numerical speed. However, until now MoM-UTD hybrid methods have been restricted to conventional MoM formulations with the Electric Field Integral Equation (EFIE) only. In this contribution, an MLFMM-UTD hybridization for the Combined Field Integral Equation (CFIE) is proposed and applied within a hybrid Finite Element - Boundary Integral (FEBI) technique. The MLFMM-UTD hybridization is performed at the translation procedure on the various levels of the MLFMM, using a far-field approximation of the corresponding translation operator. The formulation of this new hybrid technique is presented, as well as numerical results.

  8. Methods in Symbolic Computation and p-Adic Valuations of Polynomials

    Science.gov (United States)

    Guan, Xiao

    Symbolic computation appears widely in many mathematical fields such as combinatorics, number theory, and stochastic processes. The techniques created in the area of experimental mathematics provide efficient ways of symbolic computing and of verifying complicated relations. Part I consists of three problems. The first focuses on a unimodal sequence derived from a quartic integral; many of its properties are explored with the help of hypergeometric representations and automatic proofs. The second problem tackles the generating function of the reciprocals of the Catalan numbers, springing from the closed form given by Mathematica; three methods from the theory of special functions are used to justify this result. The third addresses closed-form solutions for the moments of products of generalized elliptic integrals, combining experimental mathematics and classical analysis. Part II concentrates on the p-adic valuations of polynomials from the perspective of trees. For a given polynomial f(n) indexed by the positive integers, the package developed in Mathematica creates a certain tree structure following a couple of rules. The evolution of such trees is studied both rigorously and experimentally from the viewpoints of field extensions, nonparametric statistics, and random matrices.
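
    The p-adic valuation at the heart of Part II is easy to compute directly. The tree-building package itself is in Mathematica, so the Python sketch below is only an illustrative analogue: it computes v_p(f(n)) for a range of n, the sequence whose branching by residue classes generates the tree structure.

    ```python
    def vp(n, p):
        """p-adic valuation of a nonzero integer n: the largest e with p**e dividing n."""
        if n == 0:
            raise ValueError("v_p(0) is infinite")
        e, n = 0, abs(n)
        while n % p == 0:
            n //= p
            e += 1
        return e

    def valuation_sequence(f, p, N):
        """Valuations v_p(f(n)) for n = 1..N; splitting this sequence by residue
        classes mod p**k is what produces the tree structure studied in Part II."""
        return [vp(f(n), p) for n in range(1, N + 1)]
    ```

    For example, f(n) = n^2 + 1 with p = 2 gives the periodic pattern 1, 0, 1, 0, ..., since n^2 + 1 is even exactly when n is odd and is never divisible by 4.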

  9. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...
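
    The encoding and proximity-matching steps described above can be sketched with simple data structures. The type names, the proximity threshold, and the stud/tube compatibility table below are illustrative assumptions, not the patented method's actual types.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Connector:
        position: tuple   # (x, y, z) in model coordinates
        kind: str         # predetermined connection type, e.g. "stud" or "tube"

    @dataclass
    class Element:
        name: str
        connectors: list = field(default_factory=list)

    def find_connections(a, b, proximity=0.5, compatible=(("stud", "tube"),)):
        """Pairs of connectors from elements a and b that lie within the proximity
        threshold and whose connection types are listed as compatible."""
        ok = {frozenset(pair) for pair in compatible}
        out = []
        for ca in a.connectors:
            for cb in b.connectors:
                dist = sum((p - q) ** 2 for p, q in zip(ca.position, cb.position)) ** 0.5
                if dist <= proximity and frozenset((ca.kind, cb.kind)) in ok:
                    out.append((ca, cb))
        return out
    ```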

  10. An online hybrid brain-computer interface combining multiple physiological signals for webpage browse.

    Science.gov (United States)

    Long Chen; Zhongpeng Wang; Feng He; Jiajia Yang; Hongzhi Qi; Peng Zhou; Baikun Wan; Dong Ming

    2015-08-01

    The hybrid brain-computer interface (hBCI) can provide a higher information transfer rate than classical BCIs. It includes more than one brain-computer or human-machine interaction paradigm, such as a combination of the P300 and SSVEP paradigms. We first constructed independent subsystems for three different paradigms and tested each of them in online experiments. We then constructed a serial hybrid BCI system that combined these paradigms to achieve the functions of typing letters, moving and clicking a cursor, and switching among them for the purpose of browsing webpages. Five subjects were involved in this study. They all successfully performed these functions in the online tests. The subjects achieved an accuracy above 90% after training, which met the requirement for operating the system efficiently. The results demonstrated that it is an efficient and robust system, which provides an approach for clinical application.

  11. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
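
    The abstract's closing remark about simple quantum mechanical systems can be illustrated with a lattice toy problem: the ground-state energy of the 1D harmonic oscillator (exact value 1/2 in natural units) computed from a finite-difference Hamiltonian on a finite spatial lattice. This is a minimal analogue of the lattice approximation discussed above, not the author's field-theory code; the grid size and domain length are arbitrary choices.

    ```python
    import numpy as np

    def lattice_ground_energy(n=801, L=8.0):
        """Lowest eigenvalue of H = -0.5 d^2/dx^2 + 0.5 x^2 discretized on a
        finite lattice of n points spanning [-L, L]. Refining the lattice drives
        the estimate toward the exact ground energy 0.5 (hbar = m = omega = 1)."""
        x = np.linspace(-L, L, n)
        h = x[1] - x[0]
        main = 1.0 / h**2 + 0.5 * x**2        # kinetic diagonal + potential
        off = np.full(n - 1, -0.5 / h**2)     # kinetic off-diagonal (second difference)
        H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        return float(np.linalg.eigvalsh(H)[0])
    ```

    The two error sources mirror those in the abstract: the finite lattice spacing (here O(h^2)) and the finite volume (negligible once L is several times the oscillator length).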

  12. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems

  13. Personal computer versus personal computer/mobile device combination users' preclinical laboratory e-learning activity.

    Science.gov (United States)

    Kon, Haruka; Kobayashi, Hiroshi; Sakurai, Naoki; Watanabe, Kiyoshi; Yamaga, Yoshiro; Ono, Takahiro

    2017-11-01

    The aim of the present study was to clarify differences between personal computer (PC)/mobile device combination and PC-only user patterns. We analyzed access frequency and time spent on a complete denture preclinical website in order to maximize website effectiveness. Fourth-year undergraduate students (N=41) in the preclinical complete denture laboratory course were invited to participate in this survey during the final week of the course, and their login data were tracked. Students accessed video demonstrations and quizzes via our e-learning site/course program, and were instructed to view online demonstrations before classes. When the course concluded, participating students filled out a questionnaire about the program, their opinions, and the devices they had used to access the site. Combination user access was significantly more frequent than PC-only access during supplementary learning time, indicating that students with mobile devices studied during lunch breaks and before morning classes. Most students had favorable opinions of the e-learning site, but a few combination users commented that some videos were too long and that descriptive answers were difficult on smartphones. These results imply that mobile devices' increased accessibility encouraged learning by enabling more efficient use of time between classes. They also suggest that e-learning system improvements should cater to mobile device users by reducing video length and including more short-answer questions. © 2016 John Wiley & Sons Australia, Ltd.

  14. Comparison of four computational methods for computing Q factors and resonance wavelengths in photonic crystal membrane cavities

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn; Burger, Sven

    2016-01-01

    We benchmark four state-of-the-art computational methods by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom, and number of modes is investigated. Special attention is paid to the influence of the size of the computational domain. Convergence is not obtained for some of the methods, indicating that some are more suitable than others for analyzing line defect cavities.

  15. The Direct Lighting Computation in Global Illumination Methods

    Science.gov (United States)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
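
    The direct lighting computation over an area source can be sketched as a Monte Carlo estimate of the standard one-bounce integrand: emitted radiance weighted by the two cosine terms over squared distance, integrated over the light's area. The unoccluded square light and its geometry below are illustrative assumptions, not the dissertation's sample generation method.

    ```python
    import random

    def direct_irradiance_mc(h=1.0, s=1.0, Le=1.0, n=200000, seed=1):
        """Monte Carlo estimate of the irradiance at the origin (surface normal +z)
        from an unoccluded square area light of side s centered at (0, 0, h) and
        facing straight down. Estimator: (A/N) * sum Le*cos(th_r)*cos(th_l)/r^2."""
        rng = random.Random(seed)
        area, total = s * s, 0.0
        for _ in range(n):
            x = (rng.random() - 0.5) * s          # uniform sample on the light
            y = (rng.random() - 0.5) * s
            r2 = x * x + y * y + h * h
            cos_r = h / r2 ** 0.5                 # receiver normal (0,0,1) . direction
            cos_l = cos_r                         # light normal is (0,0,-1) here
            total += Le * cos_r * cos_l / r2
        return area * total / n
    ```

    Variance reduction techniques such as importance sampling the light's solid angle refine exactly this estimator.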

  16. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers' route choice preferences, in which the urban road network is effectively decomposed into multiple layers. Using generalized travel time as the road impedance function, the method builds a new multilayer, multitasking road network data storage structure with object-oriented class definitions. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. Through sensitivity experiments, we compare the proposed path search method with current advanced optimal path algorithms. The results demonstrate that the proposed method can increase road network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers. This method therefore represents a significant advance in the field of urban road network guidance.
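
    The generalized-travel-time idea can be sketched as a shortest-path search whose edge cost blends travel time with a preference penalty. The toy graph and the alpha/beta weighting below are illustrative assumptions, not the paper's actual impedance function or its parallel decomposition.

    ```python
    import heapq

    def preferred_route(graph, src, dst, alpha=1.0, beta=0.5):
        """Dijkstra over a road graph whose edges carry (travel_time, discomfort);
        generalized cost = alpha*time + beta*discomfort models route preference."""
        dist, prev = {src: 0.0}, {}
        pq, seen = [(0.0, src)], set()
        while pq:
            d, u = heapq.heappop(pq)
            if u in seen:
                continue
            seen.add(u)
            if u == dst:
                break
            for v, (time, discomfort) in graph.get(u, {}).items():
                nd = d + alpha * time + beta * discomfort
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path, node = [], dst
        while node != src:
            path.append(node)
            node = prev[node]
        path.append(src)
        return path[::-1], dist[dst]
    ```

    Changing beta flips the chosen route between the fastest and the most comfortable alternative, which is the behavior a preference-aware impedance function is meant to capture.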

  17. A general method for generating bathymetric data for hydrodynamic computer models

    Science.gov (United States)

    Burau, J.R.; Cheng, R.T.

    1989-01-01

    To generate water depth data from randomly distributed bathymetric data for numerical hydrodynamic models, raw input data from field surveys, water depth data digitized from nautical charts, or a combination of the two are sorted to give an ordered data set on which a search algorithm is used to isolate data for interpolation. Water depths at locations required by hydrodynamic models are interpolated from the bathymetric database using the linear or cubic shape functions used in the finite-element method. The analysis covers the bathymetric database organization and preprocessing, the search algorithm used to find the bounding points for interpolation, the mathematics of the interpolation formulae, and the features of the automatic generation of water depths at hydrodynamic model grid points. This report includes documentation of two computer programs, which are used to (1) organize the input bathymetric data and (2) interpolate depths for hydrodynamic models. An example of computer program operation is drawn from a realistic application to the San Francisco Bay estuarine system. (Author's abstract)
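
    Interpolation with linear finite-element shape functions reduces to barycentric coordinates inside each triangle of bounding points. A minimal sketch (the report's programs also support cubic shape functions, omitted here):

    ```python
    def depth_in_triangle(p, tri, depths):
        """Interpolate water depth at point p inside a triangle of bathymetric
        points using the linear finite-element shape functions, i.e. the
        barycentric coordinates of p with respect to the triangle vertices."""
        (x1, y1), (x2, y2), (x3, y3) = tri
        x, y = p
        det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
        l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
        l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
        l3 = 1.0 - l1 - l2
        return l1 * depths[0] + l2 * depths[1] + l3 * depths[2]
    ```

    The shape functions reproduce the vertex depths exactly and vary linearly in between, so the interpolated field is continuous across shared triangle edges.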

  18. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
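
    One reduced order modeling building block covered by such monographs, proper orthogonal decomposition (POD), can be sketched in a few lines: build a truncated SVD basis from snapshots of the full-order system. The 99.9% energy criterion below is a common convention, used here as an assumption.

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """Proper orthogonal decomposition: truncated SVD basis of a snapshot
        matrix (columns = full-order solutions), keeping the smallest number of
        modes whose squared singular values capture the given energy fraction."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cum = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(cum, energy)) + 1   # smallest r with cum[r-1] >= energy
        return U[:, :r]
    ```

    Projecting the governing equations onto this low-dimensional basis is what yields the real-time-capable reduced models discussed above.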

  19. A simplified technique for polymethyl methacrylate cranioplasty: combined cotton stacking and finger fracture method.

    Science.gov (United States)

    Kung, Woon-Man; Lin, Muh-Shi

    2012-01-01

    Polymethyl methacrylate (PMMA) is one of the most frequently used cranioplasty materials. However, limitations exist with PMMA cranioplasty including longer operative time, greater blood loss and a higher infection rate. To reduce these disadvantages, it is proposed to introduce a new surgical method for PMMA cranioplasty. Retrospective review of nine patients who received nine PMMA implants using combined cotton stacking and finger fracture method from January 2008 to July 2011. The definitive height of skull defect was quantified by computer-based image analysis of computed tomography (CT) scans. Aesthetic outcomes as measured by post-reduction radiographs and cranial index of symmetry (CIS), cranial nerve V and VII function and complications (wound infection, hardware extrusions, meningitis, osteomyelitis and brain abscess) were evaluated. The mean operation time for implant moulding was 24.56 ± 4.6 minutes and 178.0 ± 53 minutes for skin-to-skin. Average blood loss was 169 mL. All post-operative radiographs revealed excellent reduction. The mean CIS score was 95.86 ± 1.36%, indicating excellent symmetry. These results indicate the safety, practicability, excellent cosmesis, craniofacial symmetry and stability of this new surgical technique.

  20. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
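
    Simulated annealing, one of the two evolutionary computing methods named above, can be sketched generically: perturb the current parameter set and accept worse fitness values with a temperature-dependent probability. The step size, cooling schedule, and the toy quadratic fitness in the test are illustrative assumptions standing in for the observed-vs-synthetic spectral mismatch.

    ```python
    import math
    import random

    def simulated_annealing(fitness, x0, step=0.1, t0=1.0, cooling=0.995,
                            iters=4000, seed=7):
        """Generic simulated annealing minimizer for a fitness function that
        measures the dissimilarity between observed and synthetic data."""
        rng = random.Random(seed)
        x, fx = list(x0), fitness(x0)
        best, fbest, t = list(x), fx, t0
        for _ in range(iters):
            cand = [xi + rng.gauss(0.0, step) for xi in x]   # perturb parameters
            fc = fitness(cand)
            # Accept improvements always; accept worse moves with Boltzmann probability.
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            t *= cooling                                     # geometric cooling
        return best, fbest
    ```

    At high temperature the search explores broadly; as the temperature decays it settles into a minimum, returning the best parameter set seen along the way.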

  1. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
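
    The described estimator can be sketched directly: start many short random walks from each page, continue each walk with probability c (terminating otherwise), and read the PageRank estimate off the endpoint frequencies. The toy three-page graph in the test and the run count are assumptions for illustration.

    ```python
    import random

    def mc_pagerank(links, n_runs=2000, c=0.85, seed=3):
        """Monte Carlo PageRank: from each page launch n_runs short random walks
        that continue with probability c at each step; normalized endpoint
        counts estimate the PageRank vector without storing a hyperlink matrix."""
        rng = random.Random(seed)
        pages = list(links)
        ends = {p: 0 for p in pages}
        for p in pages:
            for _ in range(n_runs):
                node = p
                while rng.random() < c and links[node]:   # stop at dangling pages too
                    node = rng.choice(links[node])
                ends[node] += 1
        total = sum(ends.values())
        return {p: ends[p] / total for p in pages}
    ```

    Because each walk is short (expected length 1/(1-c)) and independent, the runs parallelize trivially and can be updated on-line as the graph changes.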

  2. Computational methods for industrial radiation measurement applications

    International Nuclear Information System (INIS)

    Gardner, R.P.; Guo, P.; Ao, Q.

    1996-01-01

    Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments

  3. Computationally efficient methods for digital control

    NARCIS (Netherlands)

    Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.

    2008-01-01

    The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these

  4. A blended pressure/density based method for the computation of incompressible and compressible flows

    International Nuclear Information System (INIS)

    Rossow, C.-C.

    2003-01-01

    An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement of the velocity field to be divergence-free, an elliptic equation to solve for a pressure correction to enforce the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure based formulation to the compressible, density based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high lift flows close to separation

  5. Dual-Modality Imaging of the Human Finger Joint Systems by Using Combined Multispectral Photoacoustic Computed Tomography and Ultrasound Computed Tomography

    Directory of Open Access Journals (Sweden)

    Yubin Liu

    2016-01-01

    We developed a homemade dual-modality imaging system that combines multispectral photoacoustic computed tomography and ultrasound computed tomography for reconstructing the structural and functional information of human finger joint systems. The fused multispectral photoacoustic-ultrasound computed tomography (MPAUCT) system was examined by the phantom and in vivo experimental tests. The imaging results indicate that the hard tissues such as the bones and the soft tissues including the blood vessels, the tendon, the skins, and the subcutaneous tissues in the finger joints systems can be effectively recovered by using our multimodality MPAUCT system. The developed MPAUCT system is able to provide us with more comprehensive information of the human finger joints, which shows its potential for characterization and diagnosis of bone or joint diseases.

  6. 3rd Workshop on "Combinations of Intelligent Methods and Applications"

    CERN Document Server

    Palade, Vasile

    2013-01-01

    The combination of different intelligent methods is a very active research area in Artificial Intelligence (AI). The aim is to create integrated or hybrid methods that benefit from each of their components. The 3rd Workshop on “Combinations of Intelligent Methods and Applications” (CIMA 2012) was intended to become a forum for exchanging experience and ideas among researchers and practitioners who are dealing with combining intelligent methods either based on first principles or in the context of specific applications. CIMA 2012 was held in conjunction with the 22nd European Conference on Artificial Intelligence (ECAI 2012). This volume includes revised versions of the papers presented at CIMA 2012.

  7. Comparison of accounting methods for business combinations

    Directory of Open Access Journals (Sweden)

    Jaroslav Sedláček

    2012-01-01

    Full Text Available The revised accounting rules applicable to business combinations, in force since July 1st, 2009, are the result of several years of effort toward the convergence of U.S. and international financial accounting standards. Following the harmonization of global accounting procedures, Czech accounting regulations have also been revised and implemented. In our research we wanted to see how the changes can affect the strategy and timing of business combinations. The comparative analysis is mainly focused on the differences between U.S. and international accounting policies and Czech accounting regulations. Key areas of the analysis and synthesis are the identification of a business combination, accounting methods for business combinations, and goodwill recognition. The result is an assessment of the impact of the identified differences on the reported financial position and profit or loss of a company.

  8. Degradation of acephate using combined ultrasonic and ozonation method

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2015-07-01

    Full Text Available The degradation of acephate in aqueous solutions was investigated with the ultrasonic and ozonation methods, as well as a combination of both. An experimental facility was designed, and operation parameters such as the ultrasonic power, temperature, and gas flow rate were strictly controlled at constant levels. The frequency of the ultrasonic wave was 160 kHz. Ultraviolet-visible (UV-Vis) and Raman spectroscopic techniques were used in the experiment. The UV-Vis spectroscopic results show that ultrasonication and ozonation have a synergistic effect in the combined system. The degradation efficiency of acephate increases from 60.6% to 87.6% after the solution is irradiated by a 160 kHz ultrasonic wave for 60 min in the ozonation process, and the efficiency of the combined method is higher than the sum of those of the separate ultrasonic and ozonation methods. Raman spectra show that degradation via the combined ultrasonic/ozonation method is more thorough than photocatalysis. The oxidation of nitrogen atoms is promoted under ultrasonic waves. Changes in the inorganic ions and the degradation pathway during the degradation process were also investigated. Most final products are innocuous to the environment.

  9. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm in random generation from multivariate probability distributions, rather than as a function optimizer. Finally, some relevant applications of genetic algorithms to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, and design of experiments.
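
As a concrete illustration of one method from the survey, a minimal real-coded genetic algorithm can be sketched as follows. This is an illustrative sketch, not any implementation from the paper; the population size, operators, and parameters are arbitrary choices:

```python
import random

def genetic_algorithm(fitness, lb, ub, pop_size=40, gens=60,
                      mut_rate=0.1, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. Maximizes `fitness` over the interval [lb, ub]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lb, ub) for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            alpha = rng.random()
            child = alpha * p1 + (1 - alpha) * p2   # blend crossover
            if rng.random() < mut_rate:             # Gaussian mutation
                child += rng.gauss(0.0, 0.1 * (ub - lb))
            children.append(min(max(child, lb), ub))
        pop = children
    return max(pop, key=fitness)

# Maximize a simple unimodal function on [-10, 10]; the optimum is x = 3.
best = genetic_algorithm(lambda x: -(x - 3.0) ** 2, -10.0, 10.0)
```

The same skeleton extends to statistical uses such as variable selection by switching the encoding from a real number to a bit vector of included variables.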

  10. Computer methods for transient fluid-structure analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Belytschko, T.; Liu, W.K.

    1985-01-01

    Fluid-structure interaction problems in nuclear engineering are categorized according to the dominant physical phenomena and the appropriate computational methods. Linear fluid models that are considered include acoustic fluids, incompressible fluids undergoing small disturbances, and small amplitude sloshing. Methods available in general-purpose codes for these linear fluid problems are described. For nonlinear fluid problems, the major features of alternative computational treatments are reviewed; some special-purpose and multipurpose computer codes applicable to these problems are then described. For illustration, some examples of nuclear reactor problems that entail coupled fluid-structure analysis are described along with computational results.

  11. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both an explosion in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and provide easy reproducibility by making the datasets and computational methods easily available.

  12. Data analysis through interactive computer animation method (DATICAM)

    International Nuclear Information System (INIS)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG&G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process.

  13. An Augmented Fast Marching Method for Computing Skeletons and Centerlines

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2002-01-01

    We present a simple and robust method for computing skeletons for arbitrary planar objects and centerlines for 3D objects. We augment the Fast Marching Method (FMM) widely used in level set applications by computing the parameterized boundary location every pixel came from during the boundary

  14. Numerical computer methods part E

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.

  15. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development in a single-personal-computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system hosted on a single personal computer with virtual machine technology. Firstly, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Secondly, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. We thus obtain a prototype Manufacturing Grid application system running on a single personal computer, on which experiments can be carried out. Compared with the known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages, being inexpensive and simple to operate, and yields trustworthy experimental results easily. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability, and can be migrated to the real application environment rapidly.

  16. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    Science.gov (United States)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase, and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods that are based on the acoustic approximation ignore the elastic effects in the real seismic field and make inversion harder. As a result, the accuracy of the final inversion results relies highly on the quality of the initial model. In order to improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequency, is applied. But, the absence of very low frequencies (time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation improves performance.

  17. Multidisciplinary Design Optimisation (MDO) Methods: Their Synergy with Computer Technology in the Design Process

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw

    1999-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
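
The cellular-automata idea mentioned above, complex aggregate behaviour arising from many trivially simple, intrinsically parallel local updates, can be sketched in a few lines. This is a generic elementary-CA example, not tied to any particular MDO algorithm:

```python
def step_rule110(cells):
    """One synchronous update of Wolfram's elementary Rule 110 on a
    cyclic 1-D lattice. Each cell depends only on its two neighbours,
    so all cells can in principle be updated in parallel."""
    n = len(cells)
    rule = 110  # output bits for the 8 possible 3-cell neighbourhoods
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

cells = [0] * 31
cells[15] = 1                 # single seed cell
history = [cells]
for _ in range(10):
    cells = step_rule110(cells)
    history.append(cells)
```

Because each update touches only local state, the work scales linearly with the number of processors when the lattice is partitioned into contiguous chunks, which is exactly the scaling property the paper argues new algorithms must have.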

  18. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  19. A method of paralleling computer calculation for two-dimensional kinetic plasma model

    International Nuclear Information System (INIS)

    Brazhnik, V.A.; Demchenko, V.V.; Dem'yanov, V.G.; D'yakov, V.E.; Ol'shanskij, V.V.; Panchenko, V.I.

    1987-01-01

    A method for parallel computer calculation, and the OSIRIS program complex realizing it, designed for numerical plasma simulation by the macroparticle method, are described. The calculation can be carried out either with one computer or simultaneously with two BESM-6 computers; the latter is provided by a package of interacting programs running on each computer. Program interaction on each computer is based on the event techniques realized in OS DISPAK. Parallel calculation with two BESM-6 computers accelerates the computation by a factor of 1.5.

  20. Computers, pattern, chaos and beauty

    CERN Document Server

    Pickover, Clifford A

    1980-01-01

    Combining fractal theory with computer art, this book introduces a creative use of computers. It describes graphic methods for detecting patterns in complicated data and illustrates simple techniques for visualizing chaotic behavior. ""Beautiful."" - Martin Gardner, Scientific American. Over 275 illustrations, 29 in color.
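
In the spirit of the "simple techniques for visualizing chaotic behavior" the book describes, iterating the logistic map and plotting the orbit is the standard first example. This is a generic sketch; the book's own programs are not reproduced here:

```python
def logistic_orbit(r, x0, n):
    """Iterate the logistic map x -> r*x*(1-x). Small r gives a fixed
    point; r near 4 gives chaotic wandering over the unit interval."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# r = 2.5: the orbit settles on the fixed point x* = 1 - 1/r = 0.6.
stable = logistic_orbit(2.5, 0.2, 200)
# r = 3.9: the orbit never settles; plotting xs against the step index
# (or x[n+1] against x[n]) produces the classic chaotic pictures.
chaotic = logistic_orbit(3.9, 0.2, 200)
```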

  1. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
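
The LMS recursion underlying the approach can be written in a few lines; here it is shown on the classic system-identification task rather than the discrete LCT. This is a generic LMS sketch, not the paper's formulation, and the filter length and step size are arbitrary:

```python
import random

def lms_filter(x, d, num_taps, mu):
    """Least-mean-square adaptive filter: the weights w are nudged at
    every sample so that the filter output tracks the desired signal d."""
    w = [0.0] * num_taps
    errors = []
    for n in range(len(x)):
        # most recent num_taps input samples, zero-padded at the start
        u = [x[n - k] if n - k >= 0 else 0.0 for k in range(num_taps)]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]  # LMS update
        errors.append(e)
    return w, errors

# Identify an unknown 2-tap FIR system h = [0.5, -0.3] from input/output data.
rng = random.Random(1)
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
d = [0.5 * x[n] - 0.3 * (x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
w, errors = lms_filter(x, d, num_taps=2, mu=0.1)
```

The per-sample update is independent across taps, which is the inherently parallel structure the abstract refers to when arguing for efficient VLSI implementation.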

  2. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, computation time and required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended.

  3. Three-dimensional SPECT [single photon emission computed tomography] reconstruction of combined cone beam and parallel beam data

    International Nuclear Information System (INIS)

    Jaszczak, R.J.; Jianying Li; Huili Wang; Coleman, R.E.

    1992-01-01

    Single photon emission computed tomography (SPECT) using cone beam (CB) collimation exhibits increased sensitivity compared with acquisition geometries using parallel (P) hole collimation. However, CB collimation has a smaller field-of-view which may result in truncated projections and image artifacts. A primary objective of this work is to investigate maximum likelihood-expectation maximization (ML-EM) methods to reconstruct simultaneously acquired parallel and cone beam (P and CB) SPECT data. Simultaneous P and CB acquisition can be performed with commercially available triple camera systems by using two cone-beam collimators and a single parallel-hole collimator. The loss in overall sensitivity (relative to the use of three CB collimators) is about 15 to 20%. The authors have developed three methods to combine P and CB data using modified ML-EM algorithms. (author)

  4. A systematic study of genome context methods: calibration, normalization and combination

    Directory of Open Access Journals (Sweden)

    Dale Joseph M

    2010-10-01

    Full Text Available Background: Genome context methods have been introduced in the last decade as automatic methods to predict functional relatedness between genes in a target genome using the patterns of existence and relative locations of the homologs of those genes in a set of reference genomes. Much work has been done in the application of these methods to different bioinformatics tasks, but few papers present a systematic study of the methods and of the combination necessary for their optimal use. Results: We present a thorough study of the four main families of genome context methods found in the literature: phylogenetic profile, gene fusion, gene cluster, and gene neighbor. We find that for most organisms the gene neighbor method outperforms the phylogenetic profile method by as much as 40% in sensitivity, being competitive with the gene cluster method at low sensitivities. Gene fusion is generally the worst performing of the four methods. A thorough exploration of the parameter space for each method is performed and results across different target organisms are presented. We propose the use of normalization procedures, as those used on microarray data, for the genome context scores. We show that substantial gains can be achieved from the use of a simple normalization technique. In particular, the sensitivity of the phylogenetic profile method is improved by around 25% after normalization, resulting, to our knowledge, in the best-performing phylogenetic profile system in the literature. Finally, we show results from combining the various genome context methods into a single score. When using a cross-validation procedure to train the combiners, with both original and normalized scores as input, a decision tree combiner results in gains of up to 20% with respect to the gene neighbor method.
Overall, this represents a gain of around 15% over what can be considered the state of the art in this area: the four original genome context methods combined using a
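
The normalization step borrowed from microarray practice can be as simple as a z-score over each method's raw scores, which puts methods with very different scales on a common footing before combination. This is an illustrative sketch with made-up scores; the paper's actual procedure and trained combiner are more elaborate:

```python
import statistics

def zscore_normalize(scores):
    """Rescale one method's raw scores to zero mean and unit variance so
    that scores from different genome context methods are comparable."""
    mu = statistics.fmean(scores)
    sigma = statistics.pstdev(scores)
    if sigma == 0.0:
        return [0.0] * len(scores)
    return [(s - mu) / sigma for s in scores]

# Two hypothetical methods scoring the same four gene pairs on very
# different raw scales; after normalization they can be combined directly.
phylo_profile = zscore_normalize([0.1, 0.4, 0.9, 0.2])
gene_neighbor = zscore_normalize([10.0, 40.0, 90.0, 20.0])
combined = [max(a, b) for a, b in zip(phylo_profile, gene_neighbor)]
```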

  5. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes depending on the current load of each machine in the heterogeneous computing system.
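
The decomposition idea can be sketched directly: chunk sizes proportional to each machine's currently available speed. The helper below is hypothetical, not the authors' code, and the speeds and work units are illustrative:

```python
def decompose_workload(total_units, machine_speeds):
    """Split a homogeneous workload into per-machine chunks sized in
    proportion to each machine's currently available speed."""
    total_speed = sum(machine_speeds)
    shares = [total_units * s // total_speed for s in machine_speeds]
    # hand the rounding remainder to the fastest machines first
    remainder = total_units - sum(shares)
    order = sorted(range(len(machine_speeds)),
                   key=lambda i: machine_speeds[i], reverse=True)
    for i in order[:remainder]:
        shares[i] += 1
    return shares

# 100 work units over four machines whose current effective speeds
# are 4 : 2 : 1 : 1.
chunks = decompose_workload(100, [4, 2, 1, 1])
```

Re-running the decomposition whenever the measured loads change is what makes the scheduling dynamic.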

  6. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  7. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  8. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

    This book delivers definite and compact knowledge on how hybridization can help improve the quality of computer classification systems. In order to make readers clearly grasp hybridization, the book primarily focuses on introducing the different levels of hybridization and illuminating what problems arise when dealing with such projects. First, the data and knowledge incorporated in hybridization are the action points; then a still-growing area of classifier systems known as combined classifiers is considered. This book comprises the aforementioned state-of-the-art topics and the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature-space splitting, one-class classification, imbalanced data, and data stream classification.

  9. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  10. The combined use of computer-guided, minimally invasive, flapless corticotomy and clear aligners as a novel approach to moderate crowding: A case report.

    Science.gov (United States)

    Cassetta, Michele; Altieri, Federica; Pandolfi, Stefano; Giansanti, Matteo

    2017-03-01

    The aim of this case report was to describe an innovative orthodontic treatment method that combined surgical and orthodontic techniques. The novel method was used to achieve a positive result in a case of moderate crowding by employing a computer-guided piezocision procedure followed by the use of clear aligners. A 23-year-old woman had a malocclusion with moderate crowding. Her periodontal indices, oral health-related quality of life (OHRQoL), and treatment time were evaluated. The treatment included interproximal corticotomy cuts extending through the entire thickness of the cortical layer, without a full-thickness flap reflection. This was achieved with a three-dimensionally printed surgical guide using computer-aided design and computer-aided manufacturing. Orthodontic force was applied to the teeth immediately after surgery by using clear appliances for better control of tooth movement. The total treatment time was 8 months. The periodontal indices improved after crowding correction, but the oral health impact profile showed a slight deterioration of OHRQoL during the 3 days following surgery. At the 2-year retention follow-up, the stability of treatment was excellent. The reduction in surgical time and patient discomfort, increased periodontal safety and patient acceptability, and accurate control of orthodontic movement without the risk of losing anchorage may encourage the use of this combined technique in appropriate cases.

  11. Permeability computation on a REV with an immersed finite element method

    International Nuclear Information System (INIS)

    Laure, P.; Puaux, G.; Silva, L.; Vincent, M.

    2011-01-01

    An efficient method to compute the permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale, and the flow is computed with a stabilized mixed finite element method. The Stokes equations are therefore solved on the whole domain (including the solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface, defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the literature.
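
The homogenisation step recovers the REV permeability from the volume-averaged Stokes solution via Darcy's law; the sketch below is a generic statement of that technique, not the paper's exact penalized formulation:

```latex
% Volume-average the computed Stokes velocity over the REV, then read
% the permeability tensor K off Darcy's law:
\langle \mathbf{u} \rangle
  = \frac{1}{|V|} \int_{V} \mathbf{u}\,\mathrm{d}V ,
\qquad
\langle \mathbf{u} \rangle = -\,\frac{\mathbf{K}}{\mu}\,\nabla p .
```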

  12. Combination of radiological and gray level co-occurrence matrix textural features used to distinguish solitary pulmonary nodules by computed tomography.

    Science.gov (United States)

    Wu, Haifeng; Sun, Tao; Wang, Jingjing; Li, Xia; Wang, Wei; Huo, Da; Lv, Pingxin; He, Wen; Wang, Keyang; Guo, Xiuhua

    2013-08-01

    The objective of this study was to investigate a method combining radiological and textural features for the differentiation of malignant from benign solitary pulmonary nodules on computed tomography. Features including 13 gray level co-occurrence matrix (GLCM) textural features and 12 radiological features were extracted from 2,117 CT slices from 202 (116 malignant and 86 benign) patients. Lasso-type regularization applied to a nonlinear regression model was used to select predictive features, and a BP artificial neural network was used to build the diagnostic model. Eight radiological and two textural features were obtained after the Lasso-type regularization procedure. The 12 radiological features alone reached an area under the ROC curve (AUC) of 0.84 in differentiating between malignant and benign lesions; the 10 selected features improved the AUC to 0.91. The evaluation results showed that combining radiological and textural features appears to be more effective in distinguishing malignant from benign solitary pulmonary nodules on computed tomography.
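
The gray level co-occurrence matrix at the heart of the textural features counts how often pairs of gray levels co-occur at a fixed pixel offset; Haralick-style features are then read off the normalized matrix. A minimal version with two classic features (contrast and energy) might look like this; it is an illustrative sketch, not the study's feature pipeline:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray level co-occurrence matrix for one pixel offset (dx, dy),
    normalized to probabilities, plus two classic Haralick features."""
    rows, cols = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[image[r][c]][image[r2][c2]] += 1
                total += 1
    P = [[v / total for v in row] for row in counts]
    contrast = sum(P[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(p * p for row in P for p in row)
    return P, contrast, energy

# A 4-level toy image: smooth blocks give low contrast at offset (1, 0).
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
P, contrast, energy = glcm(img)
```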

  13. Combined Forecasting Method of Landslide Deformation Based on MEEMD, Approximate Entropy, and WLS-SVM

    Directory of Open Access Journals (Sweden)

    Shaofeng Xie

    2017-01-01

    Full Text Available Given the chaotic characteristics of landslide deformation time series, a new method based on modified ensemble empirical mode decomposition (MEEMD), approximate entropy, and the weighted least squares support vector machine (WLS-SVM) was proposed. The method starts from time-frequency analysis of the chaotic sequence and improves model performance as follows: first, a deformation time series is decomposed into a series of subsequences with significantly different complexity using MEEMD. Then the approximate entropy method is used to generate new subsequences by combining subsequences of similar complexity, which effectively concentrates the component feature information and reduces the computational scale. Finally, a WLS-SVM prediction model is established for each new subsequence. At the same time, phase space reconstruction theory and the grid search method are used to select the input dimension and the optimal parameters of the model, and the superposition of the predicted values gives the final forecasting result. Taking the landslide deformation data of Danba as an example, experiments were carried out and compared with a wavelet neural network, a support vector machine, a least squares support vector machine and various combination schemes. The experimental results show that the algorithm has high prediction accuracy. It ensures a good prediction even in periods of rapid fluctuation of the landslide deformation, and it can also better control the residual value and effectively reduce the error interval.
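
The role approximate entropy plays here, grading subsequences by complexity so that similar ones can be grouped, can be illustrated with a plain implementation of ApEn(m, r). This is a textbook-style sketch; the paper's parameter choices and grouping rule may differ:

```python
import math
import random

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r): lower values indicate a more
    regular (predictable) sequence, higher values a more complex one."""
    def phi(k):
        n = len(series) - k + 1
        templates = [series[i:i + k] for i in range(n)]
        total = 0.0
        for t1 in templates:
            # count templates within Chebyshev distance r (self included)
            matches = sum(1 for t2 in templates
                          if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
            total += math.log(matches / n)
        return total / n
    return phi(m) - phi(m + 1)

rng = random.Random(0)
regular = [math.sin(0.5 * i) for i in range(100)]       # low complexity
noisy = [rng.uniform(-1.0, 1.0) for _ in range(100)]    # high complexity
```

A smooth quasi-periodic subsequence scores far lower than a noise-like one, which is the property used to decide which MEEMD components belong together.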

  14. A hybrid method for the computation of quasi-3D seismograms.

    Science.gov (United States)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global-scale tomography, which requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal mode calculations (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution, providing images of the crust and the upper part of the mantle. In an attempt to teleport such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method where SEM computations are limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time-reversal mirror to inject the wavefield induced by a distant seismic source into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D; outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these

  15. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    International Nuclear Information System (INIS)

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    A quantitative local computed tomography method combined with data-constrained modelling has been developed. The method can distinctly improve the spatial resolution and the composition resolution in a sample larger than the field of view, for quantitative characterization of three-dimensional distributions of material compositions and voids. Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the desired resolution and level of detail. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach dramatically improves the spatial resolution and enables finer details to be revealed within a region of interest of a sample larger than the field of view than is possible with conventional techniques. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach, and the optimal experimental parameters are pre-analyzed. The quantitative results demonstrate that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and for understanding the transformation of the minerals during coal processing. The method is generic and can be applied to three-dimensional compositional characterization of other materials.

  16. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

This thesis presents new algorithms for low- and intermediate-level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple-scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion than with the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium-grain distributed-memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  17. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
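The two-stage procedure described above (harmonic grouping into a pitch energy spectrum, then peak picking) can be sketched in outline. This is a minimal illustration rather than the RTFI-based implementation; the frequency grid, candidate range, and relative threshold are assumptions of the example:

```python
import numpy as np

def pitch_energy_spectrum(spectrum, freqs, candidates, n_harmonics=5):
    """Harmonic grouping: credit each candidate pitch with the spectral
    energy found at the bins nearest its first few harmonics."""
    energies = np.zeros(len(candidates))
    for i, f0 in enumerate(candidates):
        for h in range(1, n_harmonics + 1):
            energies[i] += spectrum[np.argmin(np.abs(freqs - h * f0))]
    return energies

def pick_pitches(energies, candidates, rel_threshold=0.5):
    """Preliminary estimation: local maxima above a relative threshold."""
    limit = rel_threshold * energies.max()
    return [candidates[i] for i in range(1, len(energies) - 1)
            if energies[i] > limit
            and energies[i] >= energies[i - 1] and energies[i] > energies[i + 1]]
```

On a synthetic spectrum containing only the harmonics of a single note, the candidate at the true fundamental accumulates the most harmonic energy and survives the peak picking.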

  18. Geometric optical transfer function and its computation method

    International Nuclear Information System (INIS)

    Wang Qi

    1992-01-01

    The formula for the geometric optical transfer function is derived, after expounding some content that is easily overlooked, and a computation method is given based on the zero-order Bessel function, numerical integration, and spline interpolation. The method has the advantage of ensuring accuracy while saving calculation time
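For a rotationally symmetric point spread function, the role of the zero-order Bessel function and numerical integration can be sketched as a zero-order Hankel transform; the Gaussian test spread and integration grid below are assumptions of the example, and the spline interpolation of ray data is omitted:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j0

def geometric_otf(psf, r, nus):
    """Geometric OTF of a rotationally symmetric point spread function,
    computed as OTF(nu) = integral of psf(r) * J0(2*pi*nu*r) * 2*pi*r dr
    by numerical integration, normalized so that OTF(0) = 1."""
    norm = trapezoid(psf * 2 * np.pi * r, r)
    return np.array([trapezoid(psf * j0(2 * np.pi * nu * r) * 2 * np.pi * r, r)
                     for nu in np.atleast_1d(nus)]) / norm
```

For a Gaussian spread of width sigma, the Hankel transform is known analytically, OTF(nu) = exp(-2 pi^2 sigma^2 nu^2), which provides a check on the quadrature.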

  19. The qualitative and quantitative accuracy of DFT methods in computing 1J(C–F), 1J(C–N) and nJ(F–F) spin–spin coupling of fluorobenzene and fluoropyridine molecules

    International Nuclear Information System (INIS)

    Adeniyi, Adebayo A.; Ajibade, Peter A.

    2015-01-01

    The qualitative and quantitative accuracy of DFT methods combined with different basis sets in computing J-couplings of the types 1J(C–F) and nJ(F–F) is investigated for the fluorobenzene and fluoropyridine derivatives. Interestingly, all of the computational methods perfectly reproduced the experimental order for nJ(F–F), but many failed to reproduce the experimental order for the 1J(C–F) coupling. The functional PBEPBE gives the best quantitative values, closest to the experimental spin–spin couplings, when combined with the basis sets aug-cc-pVDZ and DGDZVP, but it is also among the methods that fail to reproduce the experimental order for the 1J(C–F) coupling. The basis set DGDZVP combined with all the methods except PBEPBE perfectly reproduces the 1J(C–F) experimental order. All the methods reproduce the positive or negative sign of the experimental spin–spin couplings, except for the basis set 6-31+G(d,p), which fails to reproduce the experimental positive value of 3J(F–F) regardless of which type of DFT method was used. The value of the FC term is far higher than all other Ramsey terms in the one-bond 1J(C–F) coupling, but in the two-, three- and four-bond nJ(F–F) couplings the values of PSO and SD are higher. - Graphical abstract: DFT methods were used to compute the J-couplings of the molecules benf, benf2, benf2c, benf2c2, pyrf, pyrfc and pyrfc2, and are presented. The right combination of DFT functional and basis set can reproduce high-level EOM-CCSD and experimental J-coupling results. All the methods can reproduce the qualitative order of the experimental J-couplings, but not all reproduce the quantitative values. The best quantitative results were obtained from PBEPBE combined with the basis set aug-cc-pVDZ; PBEPBE combined with the lower basis set DGDZVP also gives highly similar values. - Highlights: • DFT methods were used to compute the J-coupling of the molecules. • Right combination of DFT functional with basis

  20. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  1. Heterotic computing: exploiting hybrid computational devices.

    Science.gov (United States)

    Kendon, Viv; Sebald, Angelika; Stepney, Susan

    2015-07-28

    Current computational theory deals almost exclusively with single models: classical, neural, analogue, quantum, etc. In practice, researchers use ad hoc combinations, realizing only recently that they can be fundamentally more powerful than the individual parts. A Theo Murphy meeting brought together theorists and practitioners of various types of computing, to engage in combining the individual strengths to produce powerful new heterotic devices. 'Heterotic computing' is defined as a combination of two or more computational systems such that they provide an advantage over either substrate used separately. This post-meeting collection of articles provides a wide-ranging survey of the state of the art in diverse computational paradigms, together with reflections on their future combination into powerful and practical applications. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  2. Computational Fluid Dynamic Modeling of Rocket Based Combined Cycle Engine Flowfields

    Science.gov (United States)

    Daines, Russell L.; Merkle, Charles L.

    1994-01-01

    Computational Fluid Dynamic techniques are used to study the flowfield of a fixed geometry Rocket Based Combined Cycle engine operating in rocket ejector mode. Heat addition resulting from the combustion of injected fuel causes the subsonic engine flow to choke and go supersonic in the slightly divergent combustor-mixer section. Reacting flow computations are undertaken to predict the characteristics of solutions where the heat addition is determined by the flowfield. Here, adaptive gridding is used to improve resolution in the shear layers. Results show that the sonic speed is reached in the unheated portions of the flow first, while the heated portions become supersonic later. Comparison with results from another code shows reasonable agreement. The coupled solutions show that the character of the combustion-based thermal choking phenomenon can be controlled reasonably well such that there is opportunity to optimize the length and expansion ratio of the combustor-mixer.

  3. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  4. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  5. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  6. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  7. Concealed nuclear material identification via combined fast-neutron/γ-ray computed tomography (FNGCT): a Monte Carlo study

    Science.gov (United States)

    Licata, M.; Joyce, M. J.

    2018-02-01

    The potential of a combined and simultaneous fast-neutron/γ-ray computed tomography technique using Monte Carlo simulations is described. The technique is applied on the basis of a hypothetical tomography system comprising an isotopic radiation source (americium-beryllium) for the production of both fast neutrons and γ rays, and a number (13) of organic scintillation detectors for their detection. Via a combination of γ-ray and fast-neutron tomography, the potential is demonstrated to discern nuclear materials, such as compounds comprising plutonium and uranium, from substances that are used widely for neutron moderation and shielding. This discrimination is achieved on the basis of the difference in the attenuation characteristics of these substances. Discrimination of a variety of nuclear material compounds from shielding/moderating substances (the latter comprising lead or polyethylene, for example) is shown to be challenging when using either γ-ray or neutron tomography in isolation of one another. Much-improved contrast is obtained for a combination of these tomographic modalities. This method has potential applications for in-situ, non-destructive assessments in nuclear security, safeguards, waste management and related requirements in the nuclear industry.
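A minimal Beer-Lambert sketch illustrates why combining the two modalities discriminates where a single modality cannot; the attenuation coefficients below are illustrative assumptions, not reference data:

```python
import numpy as np

# Illustrative (assumed) linear attenuation coefficients, 1/cm:
# gamma rays are attenuated strongly by high-Z lead, fast neutrons
# by hydrogen-rich polyethylene.
MU = {                      # (mu_gamma, mu_neutron)
    "lead":         (0.70, 0.30),
    "polyethylene": (0.20, 0.60),
}

def transmission(material, thickness_cm):
    """Beer-Lambert transmission exp(-mu * x) for both modalities."""
    mu_g, mu_n = MU[material]
    return np.exp(-mu_g * thickness_cm), np.exp(-mu_n * thickness_cm)
```

A single projection value can be ambiguous between the two materials, but the ordering of the (gamma, neutron) transmission pair reverses between them, which is the contrast the combined modality exploits.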

  8. Positron emission tomography combined with computed tomography for diagnosis of synchronous and metachronous tumors

    International Nuclear Information System (INIS)

    Zlatareva, D.; Garcheva, M.; Hadjiiska, V.

    2013-01-01

    Full text: Introduction: Positron emission tomography combined with computed tomography (PET/CT) has proved to be the method of choice in oncology for diagnosis and staging, treatment planning, and determining the effect of treatment. The aim of the study was to determine the diagnostic capabilities of PET/CT for the detection of synchronous and metachronous tumors. Materials and Methods: The study was conducted with 18F-FDG on a Discovery scanner (GE Healthcare) under a standard protocol. 18F-FDG was dosed per kg body weight and administered before a meal, with blood sugar within reference values. Imaging was performed 60 min after administration, and quantitative indicators were used in addition to visual assessment. Over a period of one year (2012), 1408 patients were studied. In 11 of them (2 men, 9 women), unsuspected synchronous and metachronous tumors were found. Results: The most common second tumors were processes in the head and neck, followed by lung cancer and colorectal cancer. In four of the cases, surgical or histological verification was performed. In the other cases, verification was not performed, owing to refusal or to advanced disease indicating systemic therapy. Diagnosis of the second tumor changed the management of the patients; the therapeutic effect was assessed in 3 patients over a period of nine months by repeated PET/CT studies. Conclusion: Hybrid PET/CT, combining information about structural changes (CT) and metabolic changes (PET), plays an important role in the diagnosis of synchronous and metachronous tumors. It can significantly change the therapeutic management and prognosis of patients

  9. A Krylov Subspace Method for Unstructured Mesh SN Transport Computation

    International Nuclear Information System (INIS)

    Yoo, Han Jong; Cho, Nam Zin; Kim, Jong Woon; Hong, Ser Gi; Lee, Young Ouk

    2010-01-01

    Hong et al. have developed a computer code MUST (Multi-group Unstructured geometry SN Transport) for neutral-particle transport calculations in three-dimensional unstructured geometry. In this code, the discrete ordinates transport equation is solved by using the discontinuous finite element method (DFEM) or the subcell balance methods with linear discontinuous expansion. In this paper, the conventional source iteration in the MUST code is replaced by a Krylov subspace method to reduce computing time, and the numerical test results are given
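The replacement of source iteration by a Krylov solver can be illustrated on a toy fixed-point problem phi = K phi + b, recast as (I - K) phi = b; the random contraction K below stands in for the scattering-plus-sweep operator and is an assumption of this sketch, not the MUST operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 50
K = 0.9 * rng.random((n, n)) / n   # contraction standing in for scatter + sweep
b = rng.random(n)

# Conventional source iteration: phi <- K phi + b until the residual is small
phi_si = np.zeros(n)
while np.linalg.norm(K @ phi_si + b - phi_si) > 1e-10:
    phi_si = K @ phi_si + b

# Krylov alternative: solve (I - K) phi = b with GMRES, matrix-free
A = LinearOperator((n, n), matvec=lambda v: v - K @ v, dtype=float)
phi_kr, info = gmres(A, b, atol=1e-10)
```

The matrix-free `LinearOperator` mirrors how a transport code exposes only the action of a sweep, not an assembled matrix.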

  10. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
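As a flavour of the method the book covers, a minimal one-dimensional FDTD loop (normalized units, Courant number 1, soft Gaussian source) can be written in a few lines; this sketch is far from an engineering-grade solver and its grid sizes and source parameters are assumptions:

```python
import numpy as np

def fdtd_1d(steps, n, src):
    """Minimal 1D vacuum FDTD (Yee leapfrog, normalized units, Courant
    number 1): update H from the curl of E, then E from the curl of H,
    driving a soft Gaussian source at cell `src`."""
    ez = np.zeros(n)        # E-field on integer grid points
    hy = np.zeros(n - 1)    # H-field on half grid points
    for t in range(steps):
        hy += np.diff(ez)                             # dHy/dt = dEz/dx
        ez[1:-1] += np.diff(hy)                       # dEz/dt = dHy/dx
        ez[src] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft source
    return ez
```

At this "magic" Courant number the injected pulse splits into two halves that travel one cell per step without numerical dispersion.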

  11. Fully consistent CFD methods for incompressible flow computations

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.

    2014-01-01

    Nowadays collocated-grid-based CFD methods are among the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods, they require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure

  12. CMS results in the Combined Computing Readiness Challenge CCRC'08

    International Nuclear Information System (INIS)

    Bonacorsi, D.; Bauerdick, L.

    2009-01-01

    During February and May 2008, CMS participated in the Combined Computing Readiness Challenge (CCRC'08) together with all other LHC experiments. The purpose of this worldwide exercise was to check the readiness of the computing infrastructure for LHC data taking. Another set of major CMS tests, called the Computing, Software and Analysis challenge (CSA'08), as well as CMS cosmic runs, were also running at the same time: CCRC augmented the load on computing with additional tests to validate and stress-test all CMS computing workflows at full data-taking scale, also extending this to the global WLCG community. CMS exercised most aspects of the CMS computing model, with very comprehensive tests. During May 2008, CMS moved more than 3.6 Petabytes among more than 300 links in the complex Grid topology. CMS demonstrated that it is able to safely move data out of CERN to the Tier-1 sites, sustaining more than 600 MB/s as a daily average for more than seven days in a row, with enough headroom and with hourly peaks of up to 1.7 GB/s. CMS ran hundreds of simultaneous jobs at each Tier-1 site, re-reconstructing and skimming hundreds of millions of events. After re-reconstruction the fresh AOD (Analysis Object Data) has to be synchronized between Tier-1 centers: CMS demonstrated that the required inter-Tier-1 transfers are achievable within a few days. CMS also showed that skimmed analysis data sets can be transferred to Tier-2 sites for analysis at a sufficient rate, regionally as well as inter-regionally, achieving all goals in about 90% of >200 links. Simultaneously, CMS also ran a large Tier-2 analysis exercise, where realistic analysis jobs were submitted to a large set of Tier-2 sites by a large number of people to produce a chaotic workload across the systems, with more than 400 analysis users in May. Taken all together, CMS routinely achieved submissions of 100k jobs/day, with peaks up to 200k jobs/day. The achieved results in CCRC'08 - focussing on the distributed

  13. High performance computing and quantum trajectory method in CPU and GPU systems

    International Nuclear Information System (INIS)

    Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław

    2015-01-01

    Nowadays, dynamic progress in computational techniques allows for the development of various methods which offer significant speed-up of computations, especially those related to the problems of quantum optics and quantum computing. In this work, we propose computational solutions which re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments in which multi-core CPUs and modern many-core GPUs can be used. In consequence, new computational routines are developed in a more effective way than those applied in other commonly used packages, such as the Quantum Optics Toolbox (QOT) for Matlab or QuTiP for Python
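The quantum trajectory method itself can be sketched for the simplest case, a decaying two-level atom, where averaging Monte Carlo wave-function trajectories reproduces the master-equation population; the parameters are assumptions of this illustration, and no claim is made about the routines of the paper or of QOT/QuTiP:

```python
import numpy as np

def qtm_decay(n_traj=4000, steps=100, dt=0.02, gamma=1.0, seed=1):
    """Monte Carlo wave-function (quantum trajectory) sketch for a
    two-level atom starting in (|g> + |e>)/sqrt(2): each step either a
    quantum jump occurs (probability gamma*|ce|^2*dt) or the state
    evolves under the non-Hermitian effective Hamiltonian and is
    renormalized.  The ensemble-averaged excited-state population
    approximates 0.5*exp(-gamma*t)."""
    rng = np.random.default_rng(seed)
    pop = np.zeros(steps + 1)
    for _ in range(n_traj):
        ce = cg = 1.0 / np.sqrt(2.0)
        for k in range(steps + 1):
            pop[k] += ce * ce
            if rng.random() < gamma * ce * ce * dt:
                ce, cg = 0.0, 1.0                 # quantum jump: photon emitted
            else:
                ce *= np.exp(-0.5 * gamma * dt)   # no-jump decay of amplitude
                norm = np.sqrt(ce * ce + cg * cg)
                ce, cg = ce / norm, cg / norm     # renormalize
    return pop / n_traj
```

Each trajectory is cheap and independent, which is why the method parallelizes so naturally across CPU cores and GPU threads.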

  14. Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET

    Directory of Open Access Journals (Sweden)

    B. Ghahraman

    2016-02-01

    .7. Therefore, nine different classes were formed by combining three crop types and three soil types. The results of the numerical methods were then compared with the analytical solution of the soil-moisture differential equation as a datum. Three factors (time step, initial soil water content, and maximum evaporation, ETc) were considered as influencing variables. Results and Discussion: It was clearly shown that as the crop becomes more sensitive, the dependency of ETa on ETc increases. The same is true as the soil becomes finer textured. The results showed that as water stress progresses during the time step, the relative errors of ET computed by the different numerical methods did not depend on initial soil moisture. Overall, and irrespective of soil type, crop type, and numerical method, the relative error increased with increasing time step and/or increasing ETc. Overall, the absolute errors were negative for implicit Euler and third-order Heun, while for the other methods they were positive. There was a systematic trend in the relative error, which increased with sandier soil and/or crop sensitivity. Absolute errors of the ET computations decreased over consecutive time steps, which ensures the stability of the water-balance predictions. It was not possible to prescribe a unique numerical method that accounts for all variables. For comparing the numerical methods, however, we took the largest relative error, corresponding to a 10-day time step and ETc equal to 12 mm d-1, while treating soil and crop types as variable. Explicit Euler was unstable and varied between 40% and 150%. Implicit Euler was robust, and its relative error was around 20% for all combinations of soil and crop types. An unstable pattern governed the modified Euler method: the relative error was as low as 10% in only two cases, while overall it ranged between 20% and 100%. Although the relative errors of third-order Heun were the smallest among all the methods, its robustness was not as good as that of the implicit Euler method. Excluding one large
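The qualitative behaviour reported for the numerical methods can be reproduced on a linear stand-in for the soil-moisture depletion equation, d(theta)/dt = -k*theta, whose analytical solution theta0*exp(-k*t) is known; the rate constant and time steps are assumptions of this sketch, and the predictor-corrector here is the Heun (modified Euler) scheme:

```python
import numpy as np

def depletion_errors(k=0.5, theta0=0.3, t_end=10.0, dt=1.0):
    """Relative errors of one-step schemes on d(theta)/dt = -k*theta
    against the analytical solution theta0*exp(-k*t)."""
    n = int(round(t_end / dt))
    exact = theta0 * np.exp(-k * t_end)
    th_ex = th_im = th_he = theta0
    for _ in range(n):
        th_ex = th_ex + dt * (-k * th_ex)                   # explicit Euler
        th_im = th_im / (1.0 + k * dt)                      # implicit Euler
        pred = th_he + dt * (-k * th_he)                    # Heun predictor
        th_he = th_he + 0.5 * dt * (-k * th_he - k * pred)  # Heun corrector
    return {"explicit": abs(th_ex - exact) / exact,
            "implicit": abs(th_im - exact) / exact,
            "heun": abs(th_he - exact) / exact}
```

At a moderate time step the higher-order Heun scheme wins, while at a very large time step explicit Euler blows up and only implicit Euler remains bounded, mirroring the robustness reported above.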

  15. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    Science.gov (United States)

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

    Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data prior to or after filtered-backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm allows relevant sample features to be successfully revealed, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at a considerably lower dose.
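A minimal version of the Perona-Malik filter used in the study can be sketched as follows; the edge-stopping function, kappa, and step size are assumptions of this illustration, not the authors' parameters:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik anisotropic diffusion: the edge-stopping function
    g(d) = 1/(1 + (d/kappa)^2) lets nearly flat regions diffuse (noise
    removal) while suppressing diffusion across strong edges."""
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_iter):
        # one-sided differences to the four neighbours (zero-flux border)
        dn = np.roll(u, -1, axis=0) - u; dn[-1, :] = 0.0
        ds = np.roll(u, 1, axis=0) - u;  ds[0, :] = 0.0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0.0
        dw = np.roll(u, 1, axis=1) - u;  dw[:, 0] = 0.0
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a noisy step image, the filter reduces the noise in the flat regions while leaving the step edge almost intact, which is the noise-reducing but resolution-preserving property the abstract refers to.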

  16. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Li; He, Ya-Ling [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China); Kang, Qinjun [Computational Earth Science Group (EES-16), Los Alamos National Laboratory, Los Alamos, NM (United States); Tao, Wen-Quan, E-mail: wqtao@mail.xjtu.edu.cn [Key Laboratory of Thermo-Fluid Science and Engineering of MOE, School of Energy and Power Engineering, Xi' an Jiaotong University, Xi' an, Shaanxi 710049 (China)

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: •A coupled simulation strategy for simulating multi-scale phenomena is developed. •Finite volume method and lattice Boltzmann method are coupled. •A reconstruction operator is derived to transfer information at the sub-domains interface. •Coupled multi-scale multiple physicochemical processes in micro reactor are simulated. •Techniques to save computational resources and improve the efficiency are discussed.
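The LBM side of such a coupled scheme can be illustrated with a minimal D1Q2 lattice Boltzmann solver for the pure-diffusion limit of the convection-diffusion equation (lattice units, periodic domain); the CFVLBM coupling and the reconstruction operator are beyond this sketch, and the standard result D = tau - 0.5 for this lattice is assumed:

```python
import numpy as np

def lbm_diffusion_1d(phi0, tau=1.0, steps=200):
    """Minimal D1Q2 lattice Boltzmann solver for 1D diffusion.  BGK
    collision relaxes both populations toward the equilibrium phi/2;
    streaming shifts them one cell right/left.  The resulting
    diffusivity is D = tau - 0.5 in lattice units."""
    fp = 0.5 * np.asarray(phi0, dtype=float).copy()   # right-moving population
    fm = fp.copy()                                    # left-moving population
    for _ in range(steps):
        phi = fp + fm                    # macroscopic scalar
        fp += (0.5 * phi - fp) / tau     # BGK collision
        fm += (0.5 * phi - fm) / tau
        fp = np.roll(fp, 1)              # stream right
        fm = np.roll(fm, -1)             # stream left
    return fp + fm
```

An initial point pulse spreads with variance 2*D*t, which provides a quantitative check against the diffusion equation the scheme is meant to solve.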

  17. Combined experimental and computational modelling studies of the solubility of nickel in strontium titanate

    NARCIS (Netherlands)

    Beale, A.M.; Paul, M.; Sankar, G.; Oldman, R.J.; Catlow, R.A.; French, S.; Fowles, M.

    2009-01-01

    A combination of X-ray techniques and atomistic computational modelling has been used to study the solubility of Ni in SrTiO3 in relation to the application of this material for the catalytic partial oxidation of methane. The experiments have demonstrated that low temperature, hydrothermal synthesis

  18. Fast method to compute scattering by a buried object under a randomly rough surface: PILE combined with FB-SA.

    Science.gov (United States)

    Bourlier, Christophe; Kubické, Gildas; Déchamps, Nicolas

    2008-04-01

    A fast, exact numerical method based on the method of moments (MM) is developed to calculate the scattering from an object below a randomly rough surface. Déchamps et al. [J. Opt. Soc. Am. A23, 359 (2006)] have recently developed the PILE (propagation-inside-layer expansion) method for a stack of two one-dimensional rough interfaces separating homogeneous media. From the inversion of the impedance matrix by block (in which two impedance matrices of each interface and two coupling matrices are involved), this method allows one to calculate separately and exactly the multiple-scattering contributions inside the layer in which the inverses of the impedance matrices of each interface are involved. Our purpose here is to apply this method for an object below a rough surface. In addition, to invert a matrix of large size, the forward-backward spectral acceleration (FB-SA) approach of complexity O(N) (N is the number of unknowns on the interface) proposed by Chou and Johnson [Radio Sci.33, 1277 (1998)] is applied. The new method, PILE combined with FB-SA, is tested on perfectly conducting circular and elliptic cylinders located below a dielectric rough interface obeying a Gaussian process with Gaussian and exponential height autocorrelation functions.

  19. Classification Method to Define Synchronization Capability Limits of Line-Start Permanent-Magnet Motor Using Mesh-Based Magnetic Equivalent Circuit Computation Results

    Directory of Open Access Journals (Sweden)

    Bart Wymeersch

    2018-04-01

    Full Text Available Line-start permanent-magnet synchronous motors (LS-PMSM) are energy-efficient synchronous motors that can start asynchronously due to a squirrel cage in the rotor. The drawback with this motor type, however, is the chance of failure to synchronize after start-up. To identify the problem, and the stable operation limits, the synchronization at various parameter combinations is investigated. For accurate knowledge of the operation limits to assure synchronization with the utility grid, an accurate classification of parameter combinations is needed. Since many simulations have to be executed for this, a rapid evaluation method is indispensable. To simulate the dynamic behavior in the time domain, several modeling methods exist. In this paper, a discussion is held with respect to different modeling methods. In order to include spatial factors and magnetic nonlinearities on the one hand, and to restrict the computation time on the other hand, a magnetic equivalent circuit (MEC) modeling method is developed. In order to accelerate numerical convergence, a mesh-based analysis method is applied. The novelty in this paper is the implementation of a support vector machine (SVM) to classify the results of simulations at various parameter combinations into successful or unsuccessful synchronization, in order to define the synchronization capability limits. It is explained how these techniques can benefit the simulation time and the evaluation process. The results of the MEC modeling correspond to those obtained with finite element analysis (FEA), despite the reduced computation time. In addition, simulation results obtained with MEC modeling are experimentally validated.
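The SVM classification step can be sketched independently of the MEC simulations; here a minimal linear SVM trained with the Pegasos subgradient method labels hypothetical (inertia, load-torque) combinations against an assumed linear capability limit, standing in for the simulation outcomes:

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with the Pegasos subgradient method.
    y must be in {-1, +1}; returns weights with the bias folded in."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias feature
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ Xb[i]) < 1:          # margin violated: hinge gradient
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:                               # only the regularizer acts
                w = (1 - eta * lam) * w
    return w

# Hypothetical data: (inertia, load torque) combinations labelled by an
# assumed linear synchronization limit, torque < 0.8 - 0.5 * inertia.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (300, 2))
y = np.where(X[:, 1] < 0.8 - 0.5 * X[:, 0], 1, -1)
w = pegasos_svm(X, y)
pred = np.sign(np.hstack([X, np.ones((300, 1))]) @ w)
accuracy = (pred == y).mean()
```

Once trained on a batch of simulated parameter combinations, evaluating the classifier for a new combination is a single dot product, which is the rapid-evaluation benefit the abstract describes.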

  20. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  1. Computational methods for 2D materials: discovery, property characterization, and application design.

    Science.gov (United States)

    Paul, J T; Singh, A K; Dong, Z; Zhuang, H; Revard, B C; Rijal, B; Ashton, M; Linscheid, A; Blonsky, M; Gluhovic, D; Guo, J; Hennig, R G

    2017-11-29

    The discovery of two-dimensional (2D) materials comes at a time when computational methods are mature and can predict novel 2D materials, characterize their properties, and guide the design of 2D materials for applications. This article reviews the recent progress in computational approaches for 2D materials research. We discuss the computational techniques and provide an overview of the ongoing research in the field. We begin with an overview of known 2D materials, common computational methods, and available cyber infrastructures. We then move on to the discovery of novel 2D materials, discussing the stability criteria for 2D materials, computational methods for structure prediction, and interactions of monolayers with electrochemical and gaseous environments. Next, we describe the computational characterization of the 2D materials' electronic, optical, magnetic, and superconducting properties and the response of the properties under applied mechanical strain and electrical fields. From there, we move on to discuss the structure and properties of defects in 2D materials, and describe methods for 2D materials device simulations. We conclude by providing an outlook on the needs and challenges for future developments in the field of computational research for 2D materials.

  2. Accurate prediction of stability changes in protein mutants by combining machine learning with structure based computational mutagenesis.

    Science.gov (United States)

    Masso, Majid; Vaisman, Iosif I

    2008-09-15

    Accurate predictive models for the impact of single amino acid substitutions on protein stability provide insight into protein structure and function. Such models are also valuable for the design and engineering of new proteins. Previously described methods have utilized properties of protein sequence or structure to predict the free energy change of mutants due to thermal (ΔΔG) and denaturant (ΔΔG(H2O)) denaturations, as well as mutant thermal stability (ΔT(m)), through the application of either computational energy-based approaches or machine learning techniques. However, the accuracy of these methods applied separately is frequently far from optimal. We detail a computational mutagenesis technique based on a four-body, knowledge-based, statistical contact potential. For any mutation due to a single amino acid replacement in a protein, the method provides an empirical normalized measure of the ensuing environmental perturbation occurring at every residue position. A feature vector is generated for the mutant by considering perturbations at the mutated position and its six ordered nearest neighbors in the 3-dimensional (3D) protein structure. These predictors of stability change are evaluated by applying machine learning tools to large training sets of mutants derived from diverse proteins that have been experimentally studied and described. Predictive models based on our combined approach are either comparable to, or in many cases significantly outperform, previously published results. A web server with supporting documentation is available at http://proteins.gmu.edu/automute.
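
The neighbor-based feature vector construction can be sketched as follows. This is a hedged illustration, not the authors' AUTO-MUTE implementation: the coordinates and perturbation scores are synthetic, and only the geometric step (ordering the six nearest residues by distance) is shown.

```python
import numpy as np

def mutant_feature_vector(coords, perturbations, mutated_idx, k=6):
    """Feature vector from the perturbation score at the mutated residue
    plus those of its k nearest residues in 3D, ordered by distance."""
    dist = np.linalg.norm(coords - coords[mutated_idx], axis=1)
    order = np.argsort(dist)          # order[0] is the mutated residue itself
    neighbors = order[1:k + 1]
    return np.concatenate(([perturbations[mutated_idx]],
                           perturbations[neighbors]))

# synthetic 20-residue "protein": random C-alpha coordinates and
# random environmental perturbation scores (illustration only)
rng = np.random.default_rng(7)
coords = rng.uniform(0.0, 30.0, size=(20, 3))
scores = rng.normal(0.0, 1.0, size=20)
fv = mutant_feature_vector(coords, scores, mutated_idx=4)
```

The resulting 7-component vector (mutated position plus six ordered neighbors) would then be passed to a standard supervised learner.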

  3. A combined emitter threat assessment method based on ICW-RCM

    Science.gov (United States)

    Zhang, Ying; Wang, Hongwei; Guo, Xiaotao; Wang, Yubing

    2017-08-01

    Considering that traditional emitter threat assessment methods struggle to intuitively reflect the degree of target threat and suffer from deficiencies in real-time performance and complexity, an algorithm for combined emitter threat assessment based on an improved combination weighting method (ICW) and the radar chart method (RCM) is proposed. Coarse sorting is integrated with fine sorting in the combined assessment: emitter threat levels are first sorted roughly according to radar operation mode, reducing the task priority of low-threat emitters; then, on the basis of ICW-RCM, emitters with the same radar operation mode are sorted finely. The final emitter threat assessment is obtained through the combined coarse and fine sorting. Simulation analyses show the correctness and effectiveness of this algorithm. Compared with the classical emitter threat assessment method based on CW-RCM, the algorithm is visually intuitive and works quickly with lower complexity.

  4. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
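
The core idea of DELSA, evaluating normalized first-order local sensitivities at many points distributed across the parameter space, can be sketched as follows. This is a simplified illustration with a toy two-parameter model, not the published implementation.

```python
import numpy as np

def local_sensitivities(f, theta, prior_var, h=1e-6):
    """Normalized first-order local sensitivity indices at one point:
    S_j = (df/dtheta_j)^2 * var_j / sum_k (df/dtheta_k)^2 * var_k."""
    grad = np.array([(f(theta + h * e) - f(theta - h * e)) / (2.0 * h)
                     for e in np.eye(len(theta))])
    contrib = grad**2 * prior_var
    return contrib / contrib.sum()

# toy model standing in for a hydrologic simulator: nonlinear in theta[1],
# so that parameter's importance changes across the parameter space
model = lambda t: t[0] + t[1]**3

rng = np.random.default_rng(1)
points = rng.uniform(0.5, 1.5, size=(100, 2))      # samples across the space
S = np.array([local_sensitivities(model, t, prior_var=np.ones(2))
              for t in points])
```

Each model evaluation costs only a handful of runs per sample point, which is what makes this hybrid local/global approach so much cheaper than a full variance-based Sobol' analysis.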

  5. A hybrid method for the parallel computation of Green's functions

    DEFF Research Database (Denmark)

    Petersen, Dan Erik; Li, Song; Stokbro, Kurt

    2009-01-01

    Because of the large number of times this calculation needs to be performed, it is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices, which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
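
For the scalar-tridiagonal special case, Takahashi-style recurrences for selected entries of the inverse can be sketched as below; the block-tridiagonal case discussed in the abstract is analogous. This is an illustrative serial sketch, not the authors' parallel algorithm: only the diagonal of the inverse is computed, using recurrences on leading and trailing principal minors.

```python
import numpy as np

def tridiag_inverse_diagonal(a, b, c):
    """Diagonal of inv(A) for tridiagonal A, without forming inv(A).

    a: main diagonal (n), b: subdiagonal A[i+1, i] (n-1),
    c: superdiagonal A[i, i+1] (n-1).  Uses classical recurrences on
    leading (theta) and trailing (phi) principal minors."""
    n = len(a)
    theta = np.empty(n + 1)
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]
    phi = np.empty(n + 1)
    phi[n], phi[n - 1] = 1.0, a[n - 1]
    for i in range(n - 2, -1, -1):
        phi[i] = a[i] * phi[i + 1] - b[i] * c[i] * phi[i + 2]
    # (A^-1)_ii = theta_i * phi_{i+1} / det(A), with det(A) = theta_n
    return np.array([theta[i] * phi[i + 1] / theta[n] for i in range(n)])

# small diagonally dominant test matrix
rng = np.random.default_rng(0)
n = 8
a = 4.0 + rng.uniform(0.0, 1.0, n)
b = rng.uniform(0.0, 1.0, n - 1)
c = rng.uniform(0.0, 1.0, n - 1)
A = np.diag(a) + np.diag(b, -1) + np.diag(c, 1)
d_fast = tridiag_inverse_diagonal(a, b, c)
d_ref = np.diag(np.linalg.inv(A))
```

The point mirrors the abstract: only O(n) work is needed for the n wanted entries, instead of inverting the whole matrix.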

  6. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for calculating chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
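
The free-energy-minimization idea can be illustrated for the simplest case, an ideal A ⇌ B isomerization with a one-mole material balance: the composition found by minimizing G matches the analytic equilibrium ratio K = exp(-ΔG°/RT). A minimal sketch, not the program described in the article:

```python
import math

def gibbs(x, g_a, g_b):
    """Dimensionless free energy G/RT of an ideal A <-> B mixture with
    mole fraction x of B (material balance: n_A + n_B = 1 mol)."""
    return (1 - x) * g_a + x * g_b + x * math.log(x) + (1 - x) * math.log(1 - x)

def minimize_scalar(f, lo, hi, tol=1e-12):
    """Golden-section search for the minimum of a unimodal function."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

g_a, g_b = 0.0, -1.0                      # Delta_G/RT = -1 for A -> B
x_eq = minimize_scalar(lambda x: gibbs(x, g_a, g_b), 1e-9, 1 - 1e-9)
K = x_eq / (1 - x_eq)                     # should approach exp(1)
```

Setting dG/dx = 0 gives ln(x/(1-x)) = -(g_b - g_a), so the numerical minimum reproduces K = e for this choice of standard free energies.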

  7. Computer-aided method for recognition of proton track in nuclear emulsion

    International Nuclear Information System (INIS)

    Ruan Jinlu; Li Hongyun; Song Jiwen; Zhang Jianfu; Chen Liang; Zhang Zhongbing; Liu Jinliang

    2014-01-01

    In order to overcome the shortcomings of the manual method for proton-recoil track recognition in nuclear emulsions, a computer-aided track recognition method was studied. In this method, image sequences captured by a microscope system were processed through image convolution with composite filters, binarization with multiple thresholds, clustering of track grains, and removal of redundant grains to recognize the track grains in the image sequences. The proton-recoil tracks were then reconstructed from the recognized track grains. The proton-recoil tracks in a nuclear emulsion irradiated by a neutron beam at an energy of 14.9 MeV were recognized by the computer-aided method. The results show that the proton-recoil tracks reconstructed by this method are in good agreement with those reconstructed by the manual method. This computer-aided track recognition method lays an important technical foundation for the development of an automatic proton-recoil track recognition system and for applications of nuclear emulsions in pulsed neutron spectrum measurement. (authors)

  8. Applications of meshless methods for damage computations with finite strains

    International Nuclear Information System (INIS)

    Pan Xiaofei; Yuan Huang

    2009-01-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied to the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general-purpose finite element code ABAQUS via the user-defined element interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified against experimental data. Computational results reveal that damage initiates in the interior of the specimens, extends to the exterior, and causes fracture; the damage evolution is fast relative to the whole tension process. The EFG method provides a more stable and robust numerical solution compared with the FEM analysis

  9. Efficient computation method of Jacobian matrix

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1995-05-01

    As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows that the difficulties of this subject can be overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow us to derive the analytical formulation with no human intervention. In particular, it is to be noted that, compared to previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, it is shown that the present approach is computationally most efficient. Due to such advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
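
The geometric construction underlying such Jacobians, where each revolute-joint column has linear part z_i × (p_e − p_i), can be sketched for a planar two-link arm and checked against the familiar closed form. This illustrates the cross-product form only, not the recursive symbolic algorithm of the article.

```python
import numpy as np

def planar_2r_jacobian(q, l1=1.0, l2=0.8):
    """Geometric Jacobian of a planar 2R arm, built column by column:
    for revolute joint i, the linear part is z_i x (p_e - p_i)."""
    q1, q2 = q
    p0 = np.zeros(3)                                         # joint 1 position
    p1 = np.array([l1 * np.cos(q1), l1 * np.sin(q1), 0.0])   # joint 2 position
    pe = p1 + np.array([l2 * np.cos(q1 + q2), l2 * np.sin(q1 + q2), 0.0])
    z = np.array([0.0, 0.0, 1.0])                            # both axes along z
    J = np.column_stack([np.cross(z, pe - p0), np.cross(z, pe - p1)])
    return J[:2]                                             # planar: x, y rows

def planar_2r_jacobian_analytic(q, l1=1.0, l2=0.8):
    """Textbook closed form, written out with trigonometric elements."""
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.7])
J_cross = planar_2r_jacobian(q)
J_exact = planar_2r_jacobian_analytic(q)
```

The cross-product form avoids ever writing the trigonometric elements by hand, which is the complexity the abstract refers to.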

  10. Computational methods of electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.

    1983-01-01

    A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated

  11. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    Full Text Available This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method using either symbolic or on-line numerical computation. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and that it can also be applied to biped systems as well as some simple closed-chain robot systems.

  12. [Computational fluid dynamics simulation of different impeller combinations in high viscosity fermentation and its application].

    Science.gov (United States)

    Dong, Shuhao; Zhu, Ping; Xu, Xiaoying; Li, Sha; Jiang, Yongxiang; Xu, Hong

    2015-07-01

    The agitator is one of the essential factors in realizing highly efficient fermentation of highly aerobic, viscous microbial cultures, and the influence of different impeller combinations on the fermentation process is very important. Welan gum is a microbial exopolysaccharide produced by Alcaligenes sp. under highly aerobic and highly viscous conditions. Computational fluid dynamics (CFD) numerical simulation was used to analyze the distribution of velocity, shear rate and gas holdup in the welan fermentation reactor under six different impeller combinations. The three best impeller combinations were applied to the fermentation of welan. Analysis of the fermentation performance showed that the MB-4-6 combination had a better effect on dissolved oxygen and velocity. The welan content was increased by 13%, and the viscosity of the product was also increased.

  13. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated, and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images more similar to the experimental images, which proved that the adaptive method is highly effective for simulations requiring a large number of iterations; moreover, assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  14. A City Parking Integration System Combined with Cloud Computing Technologies and Smart Mobile Devices

    Science.gov (United States)

    Yeh, Her-Tyan; Chen, Bing-Chang; Wang, Bo-Xun

    2016-01-01

    The current study applied cloud computing technology and smart mobile devices combined with a streaming server for parking lots to plan a city parking integration system. It is also equipped with a parking search system, parking navigation system, parking reservation service, and car retrieval service. With this system, users can quickly find…

  15. Computational Identification of Potential Multi-drug Combinations for Reduction of Microglial Inflammation in Alzheimer Disease

    Directory of Open Access Journals (Sweden)

    Thomas J. Anastasio

    2015-06-01

    Full Text Available Like other neurodegenerative diseases, Alzheimer Disease (AD has a prominent inflammatory component mediated by brain microglia. Reducing microglial inflammation could potentially halt or at least slow the neurodegenerative process. A major challenge in the development of treatments targeting brain inflammation is the sheer complexity of the molecular mechanisms that determine whether microglia become inflammatory or take on a more neuroprotective phenotype. The process is highly multifactorial, raising the possibility that a multi-target/multi-drug strategy could be more effective than conventional monotherapy. This study takes a computational approach in finding combinations of approved drugs that are potentially more effective than single drugs in reducing microglial inflammation in AD. This novel approach exploits the distinct advantages of two different computer programming languages, one imperative and the other declarative. Existing programs written in both languages implement the same model of microglial behavior, and the input/output relationships of both programs agree with each other and with data on microglia over an extensive test battery. Here the imperative program is used efficiently to screen the model for the most efficacious combinations of 10 drugs, while the declarative program is used to analyze in detail the mechanisms of action of the most efficacious combinations. Of the 1024 possible drug combinations, the simulated screen identifies only 7 that are able to move simulated microglia at least 50% of the way from a neurotoxic to a neuroprotective phenotype. Subsequent analysis shows that of the 7 most efficacious combinations, 2 stand out as superior both in strength and reliability. The model offers many experimentally testable and therapeutically relevant predictions concerning effective drug combinations and their mechanisms of action.

  16. Computational identification of potential multi-drug combinations for reduction of microglial inflammation in Alzheimer disease.

    Science.gov (United States)

    Anastasio, Thomas J

    2015-01-01

    Like other neurodegenerative diseases, Alzheimer Disease (AD) has a prominent inflammatory component mediated by brain microglia. Reducing microglial inflammation could potentially halt or at least slow the neurodegenerative process. A major challenge in the development of treatments targeting brain inflammation is the sheer complexity of the molecular mechanisms that determine whether microglia become inflammatory or take on a more neuroprotective phenotype. The process is highly multifactorial, raising the possibility that a multi-target/multi-drug strategy could be more effective than conventional monotherapy. This study takes a computational approach in finding combinations of approved drugs that are potentially more effective than single drugs in reducing microglial inflammation in AD. This novel approach exploits the distinct advantages of two different computer programming languages, one imperative and the other declarative. Existing programs written in both languages implement the same model of microglial behavior, and the input/output relationships of both programs agree with each other and with data on microglia over an extensive test battery. Here the imperative program is used efficiently to screen the model for the most efficacious combinations of 10 drugs, while the declarative program is used to analyze in detail the mechanisms of action of the most efficacious combinations. Of the 1024 possible drug combinations, the simulated screen identifies only 7 that are able to move simulated microglia at least 50% of the way from a neurotoxic to a neuroprotective phenotype. Subsequent analysis shows that of the 7 most efficacious combinations, 2 stand out as superior both in strength and reliability. The model offers many experimentally testable and therapeutically relevant predictions concerning effective drug combinations and their mechanisms of action.
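
The exhaustive screen over the 2^10 drug combinations can be sketched with a deliberately crude stand-in scoring model. The drug names, weights, additive scoring and 50%-shift threshold below are invented for illustration and bear no relation to the validated microglia model the abstracts describe.

```python
from itertools import product

# Hypothetical per-drug shifts on a -1 (neurotoxic) .. +1 (neuroprotective)
# phenotype axis; names and numbers are illustrative only.
effects = {"drugA": 0.35, "drugB": 0.30, "drugC": 0.20, "drugD": 0.10,
           "drugE": 0.05, "drugF": -0.05, "drugG": 0.02, "drugH": 0.04,
           "drugI": 0.01, "drugJ": -0.10}
drugs = sorted(effects)

def phenotype_shift(combo):
    """Toy additive score with a crude polypharmacy penalty and a cap."""
    raw = sum(effects[d] for d in combo)
    penalty = 0.02 * max(0, len(combo) - 3)
    return min(1.0, raw - penalty)

# exhaustive screen over all 2**10 = 1024 on/off combinations
hits = [tuple(d for d, on in zip(drugs, mask) if on)
        for mask in product([0, 1], repeat=len(drugs))
        if phenotype_shift([d for d, on in zip(drugs, mask) if on]) >= 0.5]
```

Even this brute-force toy shows why a fast imperative screening pass is needed before a slower declarative analysis of the surviving combinations.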

  17. An automated tuberculosis screening strategy combining X-ray-based computer-aided detection and clinical information

    Science.gov (United States)

    Melendez, Jaime; Sánchez, Clara I.; Philipsen, Rick H. H. M.; Maduskar, Pragnya; Dawson, Rodney; Theron, Grant; Dheda, Keertan; van Ginneken, Bram

    2016-04-01

    Lack of human resources and radiological interpretation expertise impair tuberculosis (TB) screening programmes in TB-endemic countries. Computer-aided detection (CAD) constitutes a viable alternative for chest radiograph (CXR) reading. However, no automated techniques that exploit the additional clinical information typically available during screening exist. To address this issue and optimally exploit this information, a machine learning-based combination framework is introduced. We have evaluated this framework on a database containing 392 patient records from suspected TB subjects prospectively recruited in Cape Town, South Africa. Each record comprised a CAD score, automatically computed from a CXR, and 12 clinical features. Comparisons with strategies relying on either CAD scores or clinical information alone were performed. Our results indicate that the combination framework outperforms the individual strategies in terms of the area under the receiver operating characteristic curve (0.84 versus 0.78 and 0.72), specificity at 95% sensitivity (49% versus 24% and 31%) and negative predictive value (98% versus 95% and 96%). Thus, it is believed that combining CAD and clinical information to estimate the risk of active disease is a promising tool for TB screening.

  18. A mixed-methods exploration of an environment for learning computer programming

    Directory of Open Access Journals (Sweden)

    Richard Mather

    2015-08-01

    Full Text Available A mixed-methods approach is evaluated for exploring collaborative behaviour, acceptance and progress surrounding an interactive technology for learning computer programming. A review of literature reveals a compelling case for using mixed-methods approaches when evaluating technology-enhanced-learning environments. Here, ethnographic approaches used for the requirements engineering of computing systems are combined with questionnaire-based feedback and skill tests. These are applied to the ‘Ceebot’ animated 3D learning environment. Video analysis with workplace observation allowed detailed inspection of problem solving and tacit behaviours. Questionnaires and knowledge tests provided broad sample coverage with insights into subject understanding and overall response to the learning environment. Although relatively low scores in programming tests seemingly contradicted the perception that Ceebot had enhanced understanding of programming, this perception was nevertheless found to be correlated with greater test performance. Video analysis corroborated findings that the learning environment and Ceebot animations were engaging and encouraged constructive collaborative behaviours. Ethnographic observations clearly captured Ceebot's value in providing visual cues for problem-solving discussions and for progress through sharing discoveries. Notably, performance in tests was most highly correlated with greater programming practice (p ≤ 0.01). It was apparent that although students had appropriated technology for collaborative working and benefitted from visual and tacit cues provided by Ceebot, they had not necessarily deeply learned the lessons intended. The key value of the ‘mixed-methods’ approach was that ethnographic observations captured the authenticity of learning behaviours, and thereby strengthened confidence in the interpretation of questionnaire and test findings.

  19. Prediction of intestinal absorption and blood-brain barrier penetration by computational methods.

    Science.gov (United States)

    Clark, D E

    2001-09-01

    This review surveys the computational methods that have been developed with the aim of identifying drug candidates likely to fail later on the road to market. The specifications for such computational methods are outlined, including factors such as speed, interpretability, robustness and accuracy. Then, computational filters aimed at predicting "drug-likeness" in a general sense are discussed before methods for the prediction of more specific properties--intestinal absorption and blood-brain barrier penetration--are reviewed. Directions for future research are discussed and, in concluding, the impact of these methods on the drug discovery process, both now and in the future, is briefly considered.

  20. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand computation's role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term, unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  1. An improved EMD method for modal identification and a combined static-dynamic method for damage detection

    Science.gov (United States)

    Yang, Jinping; Li, Peizhen; Yang, Youfa; Xu, Dian

    2018-04-01

    Empirical mode decomposition (EMD) is a highly adaptable signal processing method. However, the EMD approach has certain drawbacks, including distortions from end effects and mode mixing. In the present study, these two problems are addressed using an end extension method based on the support vector regression machine (SVRM) and a modal decomposition method based on the characteristics of the Hilbert transform. The algorithm includes two steps: using the SVRM, the time series data are extended at both endpoints to reduce the end effects, and then, a modified EMD method using the characteristics of the Hilbert transform is performed on the resulting signal to reduce mode mixing. A new combined static-dynamic method for identifying structural damage is presented. This method combines the static and dynamic information in an equilibrium equation that can be solved using the Moore-Penrose generalized matrix inverse. The combination method uses the differences in displacements of the structure with and without damage and variations in the modal force vector. Tests on a four-story, steel-frame structure were conducted to obtain static and dynamic responses of the structure. The modal parameters are identified using data from the dynamic tests and improved EMD method. The new method is shown to be more accurate and effective than the traditional EMD method. Through tests with a shear-type test frame, the higher performance of the proposed static-dynamic damage detection approach, which can detect both single and multiple damage locations and the degree of the damage, is demonstrated. For structures with multiple damage, the combined approach is more effective than either the static or dynamic method. The proposed EMD method and static-dynamic damage detection method offer improved modal identification and damage detection, respectively, in structures.

  2. Variants of the Borda count method for combining ranked classifier hypotheses

    NARCIS (Netherlands)

    van Erp, Merijn; Schomaker, Lambert; Schomaker, Lambert; Vuurpijl, Louis

    2000-01-01

    The Borda count is a simple yet effective method of combining rankings. In pattern recognition, classifiers are often able to return a ranked set of results. Several experiments have been conducted to test the ability of the Borda count and two variant methods to combine these ranked classifier hypotheses.
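
A minimal Borda count combiner for ranked classifier outputs might look like this; the rankings are invented for illustration, and the variant methods studied in the record would modify only the scoring rule.

```python
from collections import defaultdict

def borda_combine(rankings):
    """Combine ranked hypothesis lists: a label ranked r-th (0-based) in a
    list of n candidates receives n - 1 - r points; highest total wins."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for rank, label in enumerate(ranking):
            scores[label] += n - 1 - rank
    combined = sorted(scores, key=lambda label: -scores[label])
    return combined, dict(scores)

# three classifiers each return a ranked list of character hypotheses
rankings = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
combined, scores = borda_combine(rankings)
```

Here "a" wins with 5 points (2 + 1 + 2) even though one classifier ranked "b" first, which is exactly the consensus behaviour that makes the Borda count attractive for classifier combination.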

  3. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small-probability event. Such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to become more and more time-demanding as its complexity is increased to improve accuracy and to account for particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered as a way to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for Active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel, with its advantageous stochastic property, with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a very few mechanical model computations. The efficiency of the method is first demonstrated on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
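
The Importance Sampling ingredient of AK-IS can be illustrated on a toy rare-event problem with a known answer: estimating P(X > β) for a standard normal X by sampling from a proposal centred at the failure boundary (the role AK-IS assigns to the FORM design point). The Kriging and active-learning parts are omitted; this sketch only demonstrates why IS handles small failure probabilities on which crude Monte Carlo wastes nearly all of its samples.

```python
import math
import numpy as np

# Rare event: failure when a standard normal variable exceeds beta = 4.
beta = 4.0
exact = 0.5 * math.erfc(beta / math.sqrt(2))     # about 3.17e-5

rng = np.random.default_rng(42)
n = 200_000

# Crude Monte Carlo: only a handful of samples land in the failure region.
crude = float(np.mean(rng.standard_normal(n) > beta))

# Importance sampling: draw from N(beta, 1), centred on the boundary,
# and reweight each sample by the density ratio phi(x) / phi(x - beta).
x = rng.standard_normal(n) + beta
w = np.exp(0.5 * beta**2 - beta * x)             # phi(x) / phi(x - beta)
p_is = float(np.mean((x > beta) * w))
```

With the same budget, roughly half of the IS samples hit the failure region, so the estimator's coefficient of variation is orders of magnitude smaller than that of the crude estimate.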

  4. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the super-large scale, discrete, and un- or semi-structured character of big data has gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data, effectively solving the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs a mining algorithm for association rules based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
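
The map/reduce decomposition behind such an association-rule miner can be sketched in a few lines for the support-counting step. This toy runs the map and reduce phases serially in one process, whereas a real MapReduce job would distribute the transaction shards across cluster workers; the transactions are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# transaction shards, as they would be distributed across cloud workers
shards = [
    [{"milk", "bread"}, {"milk", "eggs"}, {"bread", "eggs", "milk"}],
    [{"bread", "eggs"}, {"milk", "bread"}, {"milk", "bread", "eggs"}],
]

def map_phase(shard):
    """Mapper: emit (itemset, 1) for every 2-item combination in a basket."""
    for basket in shard:
        for pair in combinations(sorted(basket), 2):
            yield pair, 1

def reduce_phase(emitted):
    """Reducer: sum the counts per itemset key."""
    counts = Counter()
    for key, value in emitted:
        counts[key] += value
    return counts

counts = reduce_phase(kv for shard in shards for kv in map_phase(shard))
support = {pair: c / 6 for pair, c in counts.items()}   # 6 transactions total
frequent = {pair for pair, s in support.items() if s >= 0.5}
```

Because the mappers are independent per shard and the reducer only sums counts per key, the same code parallelizes naturally, which is the speed-up the abstract attributes to the cloud platform.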

  5. Computational methods for constructing protein structure models from 3D electron microscopy maps.

    Science.gov (United States)

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-10-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3 Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided.

  6. Combination of acoustical radiosity and the image source method

    DEFF Research Database (Denmark)

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho

    2013-01-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part...

  7. Turning the Page on Pen-and-Paper Questionnaires: Combining Ecological Momentary Assessment and Computer Adaptive Testing to Transform Psychological Assessment in the 21st Century.

    Science.gov (United States)

    Gibbons, Chris J

    2016-01-01

    The current paper describes new opportunities for patient-centred assessment methods which have come about through the increased adoption of affordable smart technologies in biopsychosocial research and medical care. In this commentary, we review modern assessment methods including item response theory (IRT), computer adaptive testing (CAT), and ecological momentary assessment (EMA) and explain how these methods may be combined to improve psychological assessment. We demonstrate both how a 'naïve' selection of a small group of items in an EMA can lead to unacceptably unreliable assessments and how IRT can provide detailed information on the measurement information contributed by each individual item, thus allowing short-form assessments to be selected with acceptable reliability. The combination of CAT and IRT can ensure assessments are precise, efficient, and well targeted to the individual, allowing EMAs to be both brief and accurate.
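The item-selection step at the heart of CAT can be illustrated with the standard two-parameter logistic (2PL) IRT model, whose Fisher information is a² P(θ)(1−P(θ)). The item bank below is made up for illustration; a real CAT would re-estimate θ after every response.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of endorsing the item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b)
bank = [(1.8, -1.0), (1.2, 0.0), (2.0, 1.5)]
theta = 0.0  # current ability estimate
# CAT step: administer the item that is most informative at the current theta
best = max(bank, key=lambda ab: item_information(theta, *ab))
print(best)
```

An item far from the respondent's ability (here the b = 1.5 item) contributes little information at θ = 0, which is exactly why a naïvely chosen short form can be unreliable while an information-driven one need not be.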

  8. Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods

    Directory of Open Access Journals (Sweden)

    Pontil Massimiliano

    2009-10-01

    Abstract Background Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino acids are systematically mutated to alanine and changes in the free energy of binding (ΔΔG) are measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding and, if mutated, they can disrupt the interaction. As mutagenesis studies require significant experimental effort, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. Results We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and that it compares favourably to other available methods. In particular we find the best performance using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which ΔΔG ≥ 2 kcal/mol, our method achieves a precision and a recall of 56% and 65%, respectively. Conclusion We have developed a hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although so far these two types of approaches have mainly been

  9. Computational Fluid Dynamics (CFD) Simulation of Hypersonic Turbine-Based Combined-Cycle (TBCC) Inlet Mode Transition

    Science.gov (United States)

    Slater, John W.; Saunders, John D.

    2010-01-01

    Methods of computational fluid dynamics were applied to simulate the aerodynamics within the turbine flowpath of a turbine-based combined-cycle propulsion system during inlet mode transition at Mach 4. Inlet mode transition involved the rotation of a splitter cowl to close the turbine flowpath to allow the full operation of a parallel dual-mode ramjet/scramjet flowpath. Steady-state simulations were performed at splitter cowl positions of 0deg, -2deg, -4deg, and -5.7deg, at which the turbine flowpath was closed half way. The simulations satisfied one objective of providing a greater understanding of the flow during inlet mode transition. Comparisons of the simulation results with wind-tunnel test data addressed another objective of assessing the applicability of the simulation methods for simulating inlet mode transition. The simulations showed that inlet mode transition could occur in a stable manner and that accurate modeling of the interactions among the shock waves, boundary layers, and porous bleed regions was critical for evaluating the inlet static and total pressures, bleed flow rates, and bleed plenum pressures. The simulations compared well with some of the wind-tunnel data, but uncertainties in both the wind-tunnel data and simulations prevented a formal evaluation of the accuracy of the simulation methods.

  10. An interconnecting bus power optimization method combining interconnect wire spacing with wire ordering

    International Nuclear Information System (INIS)

    Zhu Zhang-Ming; Hao Bao-Tian; En Yun-Fei; Yang Yin-Tang; Li Yue-Jin

    2011-01-01

    On-chip interconnect buses consume tens of percent of the dynamic power in a nanometer-scale integrated circuit, and they will consume even more power with the rapid scaling down of technology size and continuously rising clock frequencies; it is therefore meaningful to lower interconnect bus power in design. In this paper, a simple yet accurate interconnect parasitic capacitance model is presented first and then, based on this model, a novel interconnect bus optimization method is proposed. Wire spacing is a process that spaces wires for minimum dynamic power, while wire ordering is a process that searches for wire orders that further reduce it. The method, i.e., combining wire spacing with wire ordering, focuses on bus dynamic power optimization with a consideration of bus performance requirements. The optimization method is verified with various nanometer technology parameters, showing that with 50% slack of routing space, 25.71% and 32.65% of power can be saved on average by the proposed optimization method for a global bus and an intermediate bus, respectively, under a 65-nm technology node, compared with 21.78% and 27.68% of power saved on average by uniform spacing technology. The proposed method is especially suitable for computer-aided design of nanometer-scale on-chip buses. (interdisciplinary physics and related areas of science and technology)
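The wire-ordering idea can be sketched with a deliberately crude cost model: coupling (and hence dynamic) power grows with the switching activity of adjacent wires, so one searches for an order that keeps high-activity wires apart. The activity values and the product cost model below are hypothetical, and exhaustive search is only feasible for tiny buses; the paper's method would use the parasitic capacitance model and a smarter search.

```python
from itertools import permutations

# Hypothetical per-wire switching activities; the coupling power of a bus order
# is modelled (very crudely) as the sum of products of adjacent activities.
activity = {"w0": 0.9, "w1": 0.1, "w2": 0.8, "w3": 0.2}

def coupling_cost(order):
    return sum(activity[a] * activity[b] for a, b in zip(order, order[1:]))

# Exhaustive wire ordering: pick the permutation with minimum modelled power.
best = min(permutations(activity), key=coupling_cost)
print(best, round(coupling_cost(best), 3))
```

Even this toy search places the two high-activity wires (w0, w2) at opposite ends with the quiet wires between them, which is the qualitative behaviour wire ordering exploits before wire spacing distributes the remaining slack.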

  11. The asymptotic expansion method via symbolic computation

    OpenAIRE

    Navarro, Juan F.

    2012-01-01

    This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.

  12. Combining computer modelling and cardiac imaging to understand right ventricular pump function.

    Science.gov (United States)

    Walmsley, John; van Everdingen, Wouter; Cramer, Maarten J; Prinzen, Frits W; Delhaas, Tammo; Lumens, Joost

    2017-10-01

    Right ventricular (RV) dysfunction is a strong predictor of outcome in heart failure and is a key determinant of exercise capacity. Despite these crucial findings, the RV remains understudied in the clinical, experimental, and computer modelling literature. This review outlines how recent advances in using computer modelling and cardiac imaging synergistically help to understand RV function in health and disease. We begin by highlighting the complexity of interactions that make modelling the RV both challenging and necessary, and then summarize the multiscale modelling approaches used to date to simulate RV pump function in the context of these interactions. We go on to demonstrate how these modelling approaches in combination with cardiac imaging have improved understanding of RV pump function in pulmonary arterial hypertension, arrhythmogenic right ventricular cardiomyopathy, dyssynchronous heart failure and cardiac resynchronization therapy, hypoplastic left heart syndrome, and repaired tetralogy of Fallot. We conclude with a perspective on key issues to be addressed by computational models of the RV in the near future.

  13. Cone Beam X-ray Luminescence Computed Tomography Based on Bayesian Method.

    Science.gov (United States)

    Zhang, Guanglei; Liu, Fei; Liu, Jie; Luo, Jianwen; Xie, Yaoqin; Bai, Jing; Xing, Lei

    2017-01-01

    X-ray luminescence computed tomography (XLCT), which aims to achieve molecular and functional imaging by X-rays, has recently been proposed as a new imaging modality. Combining the principles of X-ray excitation of luminescence-based probes and optical signal detection, XLCT naturally fuses functional and anatomical images and provides complementary information for a wide range of applications in biomedical research. In order to improve the data acquisition efficiency of the previously developed narrow-beam XLCT, a cone beam XLCT (CB-XLCT) mode is adopted here to take advantage of the useful geometric features of cone beam excitation. Practically, a major hurdle in using cone beam X-rays for XLCT is that the inverse problem is seriously ill-conditioned, hindering good image quality. In this paper, we propose a novel Bayesian method to tackle this bottleneck in CB-XLCT reconstruction. The method utilizes a local regularization strategy based on a Gaussian Markov random field to mitigate the ill-conditioning of CB-XLCT. An alternating optimization scheme is then used to automatically calculate all the unknown hyperparameters, while an iterative coordinate descent algorithm is adopted to reconstruct the image with a voxel-based closed-form solution. Results of numerical simulations and mouse experiments show that the self-adaptive Bayesian method significantly improves the CB-XLCT image quality as compared with conventional methods.
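The "iterative coordinate descent with a per-voxel closed-form update" ingredient can be illustrated on a much simpler relative of the paper's problem: Tikhonov-regularized least squares, min_x ||Ax − y||² + λ||x||², where each coordinate has the closed-form minimizer x_j = A_jᵀr_j / (A_jᵀA_j + λ). The matrix and data below are made up; the paper's prior is a Gaussian MRF, not this plain ridge penalty.

```python
# Toy system: A (3x2), observations y, ridge weight lam (all values hypothetical).
A = [[1.0, 0.2],
     [0.0, 1.0],
     [0.3, 0.0]]
y = [1.2, 0.5, 0.3]
lam = 0.1
x = [0.0, 0.0]

for sweep in range(200):          # repeated sweeps over all coordinates
    for j in range(len(x)):
        col = [row[j] for row in A]
        # residual with coordinate j's contribution removed
        r = [yi - sum(row[k] * x[k] for k in range(len(x)) if k != j)
             for yi, row in zip(y, A)]
        # closed-form 1-D minimizer for coordinate j
        x[j] = sum(c * ri for c, ri in zip(col, r)) / (sum(c * c for c in col) + lam)

print([round(v, 4) for v in x])   # converges to the regularized normal-equation solution
```

Each inner update is exact and cheap, which is what makes a voxel-wise closed-form coordinate descent attractive for very large ill-conditioned reconstructions.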

  14. Platform-independent method for computer aided schematic drawings

    Science.gov (United States)

    Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  15. A-VCI: A flexible method to efficiently compute vibrational spectra

    Science.gov (United States)

    Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2017-06-01

    The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate computation to date; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation that exists today on such systems.

  16. Experience of computed tomographic myelography and discography in cervical problem

    Energy Technology Data Exchange (ETDEWEB)

    Nakatani, Shigeru; Yamamoto, Masayuki; Uratsuji, Masaaki; Suzuki, Kunio; Matsui, Eigo [Hyogo Prefectural Awaji Hospital, Sumoto, Hyogo (Japan); Kurihara, Akira

    1983-06-01

    CTM (computed tomographic myelography) was performed on 15 cases of cervical lesions, and on 5 of them, CTD (computed tomographic discography) was also performed. CTM revealed the intervertebral state and, in combination with CTD, provided more accurate information. The combined method of CTM and CTD was useful for soft disc herniation.

  17. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations, but its low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized. The original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency, saving 20%–40% of computation time compared to serial computing. The parallel acceleration rate and acceleration efficiency are approximately 1.45 and 72%, respectively. The parallel computing method makes a contribution to the popularization of non-hydrostatic models.
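The two-subdomain, virtual-boundary idea can be sketched with a 1-D explicit diffusion step split across two threads that synchronize at every time step. This is a minimal stand-in for the paper's scheme (a barrier replaces its producer-consumer/lock machinery, and the physics is a toy heat pulse, not the non-hydrostatic equations); the grid size and step count are arbitrary.

```python
import threading

N, steps, r = 10, 50, 0.25           # grid points, time steps, diffusion number
u = [0.0] * N
u[N // 2] = 1.0                      # initial heat pulse
new = [0.0] * N
barrier = threading.Barrier(2)       # synchronizes the two subdomain threads

def worker(lo, hi):
    for _ in range(steps):
        # compute phase: each thread updates its subdomain, reading across
        # the virtual boundary from the (frozen) shared array u
        for i in range(max(lo, 1), min(hi, N - 1)):
            new[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        barrier.wait()               # all updates written before anyone copies back
        for i in range(max(lo, 1), min(hi, N - 1)):
            u[i] = new[i]
        barrier.wait()               # copy-back done before the next step reads u

t1 = threading.Thread(target=worker, args=(0, N // 2))
t2 = threading.Thread(target=worker, args=(N // 2, N))
t1.start(); t2.start(); t1.join(); t2.join()

print([round(v, 4) for v in u])
```

Because both barriers separate the read and write phases, the threaded result is bit-identical to a serial sweep; the decomposition changes who computes each point, not what is computed.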

  18. Computational Methods in Stochastic Dynamics Volume 2

    CERN Document Server

    Stefanou, George; Papadopoulos, Vissarion

    2013-01-01

    The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology.   This book is a follow up of a previous book with the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...

  19. Combination of artificial intelligence and procedural language programs in a computer application system supporting nuclear reactor operations

    International Nuclear Information System (INIS)

    Town, G.G.; Stratton, R.C.

    1985-01-01

    A computer application system is described which provides nuclear reactor power plant operators with an improved decision support system. This system combines traditional computer applications such as graphics display with artificial intelligence methodologies such as reasoning and diagnosis so as to improve plant operability. This paper discusses the issues, and a solution, involved with the system integration of applications developed using traditional and artificial intelligence languages

  20. Combination of artificial intelligence and procedural language programs in a computer application system supporting nuclear reactor operations

    International Nuclear Information System (INIS)

    Stratton, R.C.; Town, G.G.

    1985-01-01

    A computer application system is described which provides nuclear reactor power plant operators with an improved decision support system. This system combines traditional computer applications such as graphics display with artificial intelligence methodologies such as reasoning and diagnosis so as to improve plant operability. This paper discusses the issues, and a solution, involved with the system integration of applications developed using traditional and artificial intelligence languages

  1. The Asymptotic Expansion Method via Symbolic Computation

    Directory of Open Access Journals (Sweden)

    Juan F. Navarro

    2012-01-01

    This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.
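The basic mechanics of an asymptotic expansion can be shown on a hand-worked example (an algebraic equation rather than the paper's differential equation, and without its quasipolynomial machinery): the positive root of x² + εx − 1 = 0 is expanded as x(ε) ≈ x0 + x1 ε + x2 ε², with coefficients fixed by matching powers of ε.

```python
import math

# Coefficients obtained by substituting the series into x^2 + eps*x - 1 = 0
# and collecting powers of eps:
x0 = 1.0      # O(1):     x0^2 - 1 = 0
x1 = -0.5     # O(eps):   2*x0*x1 + x0 = 0
x2 = 0.125    # O(eps^2): x1^2 + 2*x0*x2 + x1 = 0

def series(eps):
    """Second-order asymptotic approximation of the positive root."""
    return x0 + x1 * eps + x2 * eps ** 2

def exact(eps):
    """Exact positive root from the quadratic formula, for comparison."""
    return (-eps + math.sqrt(eps ** 2 + 4)) / 2

eps = 0.1
print(series(eps), exact(eps))
```

At ε = 0.1 the truncated series already agrees with the exact root to about 10⁻⁶, and the error shrinks rapidly as ε → 0, which is the defining property of such expansions; a symbolic system like the paper's automates exactly this coefficient-matching at each order.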

  2. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  3. Benzoic acid derivatives: Evaluation of thermochemical properties with complementary experimental and computational methods

    International Nuclear Information System (INIS)

    Verevkin, Sergey P.; Zaitsau, Dzmitry H.; Emeĺyanenko, Vladimir N.; Stepurko, Elena N.; Zherikova, Kseniya V.

    2015-01-01

    Highlights:
    • Vapor pressures of benzoic acid derivatives were measured.
    • Sublimation enthalpies were derived and compared with the literature.
    • Thermochemical data were tested for consistency using additivity rules and computations.
    • The contradiction between available enthalpies of sublimation was resolved.
    • Pairwise interactions of substituents on the benzene ring were derived.

    Abstract: Molar sublimation enthalpies of the methyl- and methoxybenzoic acids were derived from the transpiration method, the static method, and TGA. Thermochemical data available in the literature were collected, evaluated, and combined with our own experimental results. This collection, together with the new experimental results reported here, has helped to resolve contradictions in the available enthalpy data and to recommend sets of sublimation and formation enthalpies for the benzoic acid derivatives. Gas-phase enthalpies of formation calculated with the G4 quantum-chemical method were in agreement with the experiment. Pairwise interactions of the methyl, methoxy, and carboxyl substituents on the benzene ring were derived and used for the development of simple group-additivity procedures for the estimation of vaporization enthalpies, gas-phase, and liquid-phase enthalpies of formation of substituted benzenes.

  4. Benzoic acid derivatives: Evaluation of thermochemical properties with complementary experimental and computational methods

    Energy Technology Data Exchange (ETDEWEB)

    Verevkin, Sergey P., E-mail: sergey.verevkin@uni-rostock.de [Department of Physical Chemistry and Department “Science and Technology of Life, Light and Matter”, University of Rostock, D-18059 Rostock (Germany); Department of Physical Chemistry, Kazan Federal University, 420008 Kazan (Russian Federation); Zaitsau, Dzmitry H. [Department of Physical Chemistry, Kazan Federal University, 420008 Kazan (Russian Federation); Emeĺyanenko, Vladimir N. [Department of Physical Chemistry and Department “Science and Technology of Life, Light and Matter”, University of Rostock, D-18059 Rostock (Germany); Stepurko, Elena N. [Chemistry Faculty and Research Institute for Physical Chemical Problems, Belarusian State University, 220030 Minsk (Belarus); Zherikova, Kseniya V. [Nikolaev Institute of Inorganic Chemistry of Siberian Branch of Russian Academy of Sciences, 630090 Novosibirsk (Russian Federation)

    2015-12-20

    Highlights:
    • Vapor pressures of benzoic acid derivatives were measured.
    • Sublimation enthalpies were derived and compared with the literature.
    • Thermochemical data were tested for consistency using additivity rules and computations.
    • The contradiction between available enthalpies of sublimation was resolved.
    • Pairwise interactions of substituents on the benzene ring were derived.

    Abstract: Molar sublimation enthalpies of the methyl- and methoxybenzoic acids were derived from the transpiration method, the static method, and TGA. Thermochemical data available in the literature were collected, evaluated, and combined with our own experimental results. This collection, together with the new experimental results reported here, has helped to resolve contradictions in the available enthalpy data and to recommend sets of sublimation and formation enthalpies for the benzoic acid derivatives. Gas-phase enthalpies of formation calculated with the G4 quantum-chemical method were in agreement with the experiment. Pairwise interactions of the methyl, methoxy, and carboxyl substituents on the benzene ring were derived and used for the development of simple group-additivity procedures for the estimation of vaporization enthalpies, gas-phase, and liquid-phase enthalpies of formation of substituted benzenes.
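A group-additivity estimate of the kind the abstract describes is just a baseline plus substituent increments plus pairwise (ortho/meta/para) interaction corrections. Every number below is invented for illustration; the paper derives the real increments from its evaluated experimental data.

```python
# Hypothetical group-additivity scheme for a property such as a gas-phase
# formation enthalpy (kJ/mol). All values are made up for illustration.
BASE_BENZENE = 82.9
INCREMENT = {"CH3": -33.0, "COOH": -385.0, "OCH3": -150.0}
PAIR_CORRECTION = {("CH3", "COOH", "ortho"): 4.0,
                   ("CH3", "COOH", "para"): -1.0}

def estimate(substituents, pairs):
    """Baseline + substituent increments + pairwise interaction corrections."""
    val = BASE_BENZENE + sum(INCREMENT[s] for s in substituents)
    val += sum(PAIR_CORRECTION.get(p, 0.0) for p in pairs)
    return val

# e.g. a 2-methylbenzoic-acid-like molecule: CH3 and COOH ortho to each other
print(estimate(["CH3", "COOH"], [("CH3", "COOH", "ortho")]))
```

The pairwise terms are exactly the "interactions of substituents on the benzene ring" the authors derive: without them, ortho and para isomers would be predicted to have identical properties.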

  5. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computations are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by the multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed

  6. Stable numerical method in computation of stellar evolution

    International Nuclear Information System (INIS)

    Sugimoto, Daiichiro; Eriguchi, Yoshiharu; Nomoto, Ken-ichi.

    1982-01-01

    To compute the stellar structure and evolution in different stages, such as (1) red-giant stars in which the density and density gradient change over quite wide ranges, (2) rapid evolution with neutrino loss or unstable nuclear flashes, (3) hydrodynamical stages of star formation or supernova explosion, (4) transition phases from quasi-static to dynamical evolution, (5) mass-accreting or mass-losing stars in binary-star systems, and (6) evolution of a stellar core whose mass is increasing by shell burning or decreasing by penetration of the convective envelope into the core, we face ''multi-timescale problems'' which can be treated neither by a simple-minded explicit scheme nor by an implicit one. This problem has been resolved by three prescriptions: one by introducing a hybrid scheme suitable for the multi-timescale problems of quasi-static evolution with heat transport, another by introducing a hybrid scheme suitable for the multi-timescale problems of hydrodynamic evolution, and the other by introducing the Eulerian or, in other words, the mass fraction coordinate for evolution with changing mass. When all of them are combined in a single computer code, we can compute numerically stably any phase of stellar evolution including transition phases, as far as the star is spherically symmetric. (author)

  7. A numerical method to compute interior transmission eigenvalues

    International Nuclear Information System (INIS)

    Kleefeld, Andreas

    2013-01-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view, existing methods are very expensive and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated close together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than that of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)
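The core idea behind such contour-integral eigenvalue solvers can be illustrated with the argument principle: the integral (1/2πi) ∮ f′(z)/f(z) dz counts the zeros of an analytic f inside the contour, even when they are complex or clustered. The scalar f below is a stand-in for the paper's boundary-integral operator determinant, and the plain trapezoidal rule on a circle is used because it converges very fast for periodic integrands.

```python
import cmath

def count_zeros(f, df, center=0.0, radius=2.0, n=512):
    """Count zeros of analytic f inside a circle via the argument principle."""
    total = 0.0 + 0.0j
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / n)  # z'(t) dt
        total += df(z) / f(z) * dz
    return round((total / (2j * cmath.pi)).real)

# f(z) = z^2 + 1 has the purely imaginary zeros +/- i
print(count_zeros(lambda z: z * z + 1, lambda z: 2 * z))
```

Shrinking the contour (e.g. `radius=0.5`) excludes both zeros and the count drops to zero; in the full method, weighted moments of the same integrand recover the eigenvalue locations themselves, not just their number.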

  8. Mathematical optics classical, quantum, and computational methods

    CERN Document Server

    Lakshminarayanan, Vasudevan

    2012-01-01

    Going beyond standard introductory texts, Mathematical Optics: Classical, Quantum, and Computational Methods brings together many new mathematical techniques from optical science and engineering research. Profusely illustrated, the book makes the material accessible to students and newcomers to the field. Divided into six parts, the text presents state-of-the-art mathematical methods and applications in classical optics, quantum optics, and image processing. Part I describes the use of phase space concepts to characterize optical beams and the application of dynamic programming in optical wave

  9. Combinations of SNP genotypes from the Wellcome Trust Case Control Study of bipolar patients

    DEFF Research Database (Denmark)

    Mellerup, Erling; Jørgensen, Martin Balslev; Dam, Henrik

    2018-01-01

    Objectives: Combinations of genetic variants are the basis for polygenic disorders. We examined combinations of SNP genotypes taken from the 446 729 SNPs in The Wellcome Trust Case Control Study of bipolar patients. Methods: Parallel computing by graphics processing units, cloud computing, and data...

  10. Propagating Class and Method Combination

    DEFF Research Database (Denmark)

    Ernst, Erik

    1999-01-01

    number of implicit combinations. For example, it is possible to specify separate aspects of a family of classes, and then combine several aspects into a full-fledged class family. The combination expressions would explicitly combine whole-family aspects, and by propagation implicitly combine the aspects...

  11. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  12. A method of non-contact reading code based on computer vision

    Science.gov (United States)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee the security of information exchange between an internal and an external network (a trusted and an un-trusted network), a non-contact code-reading method based on machine vision is proposed, which differs from existing physical network isolation methods. Using a computer monitor, a camera, and other equipment, the information to be exchanged is processed in several steps: image coding, generation of a standard image, display and capture of the actual image, computation of the homography matrix, image distortion correction, and decoding after calibration. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data transfer speed of 24 kb/s is achieved. The experiments show that the algorithm offers high security, high speed, and little loss of information, which can meet the daily needs of confidentiality departments to update data effectively and reliably. It solves the difficulty of exchanging computer information between a classified network and a non-classified network, and has distinctive originality, practicability, and practical research value.

  13. Computational mathematics models, methods, and analysis with Matlab and MPI

    CERN Document Server

    White, Robert E

    2004-01-01

    Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box," you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu/~white. This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...

  14. Medical image computing for computer-supported diagnostics and therapy. Advances and perspectives.

    Science.gov (United States)

    Handels, H; Ehrhardt, J

    2009-01-01

    Medical image computing has become one of the most challenging fields in medical informatics. In image-based diagnostics of the future, software assistance will become more and more important, and image analysis systems integrating advanced image computing methods are needed to extract quantitative image parameters to characterize the state and changes of image structures of interest (e.g. tumors, organs, vessels, bones etc.) in a reproducible and objective way. Furthermore, in the field of software-assisted and navigated surgery, medical image computing methods play a key role and have opened up new perspectives for patient treatment. However, further developments are needed to increase the degree of automation, accuracy, reproducibility and robustness. Moreover, the systems developed have to be integrated into the clinical workflow. For the development of advanced image computing systems, methods of different scientific fields have to be adapted and used in combination. The principal methodologies in medical image computing are the following: image segmentation, image registration, image analysis for quantification and computer-assisted image interpretation, modeling and simulation, as well as visualization and virtual reality. In particular, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients, and will gain importance in diagnostics and therapy of the future. From a methodical point of view, the authors identify the following future trends and perspectives in medical image computing: development of optimized application-specific systems and integration into the clinical workflow, enhanced computational models for image analysis and virtual reality training systems, integration of different image computing methods, further integration of multimodal image data and biosignals, and advanced methods for 4D medical image computing. The development of image analysis systems for diagnostic support or

  15. Combining Self-Explaining with Computer Architecture Diagrams to Enhance the Learning of Assembly Language Programming

    Science.gov (United States)

    Hung, Y.-C.

    2012-01-01

    This paper investigates the impact of combining self-explaining (SE) with computer architecture diagrams to help novice students learn assembly language programming. Pre- and post-test scores for the experimental and control groups were compared and subjected to analysis of covariance (ANCOVA). Results indicate that the SE-plus-diagram…

  16. Minimally invasive computer-navigated total hip arthroplasty, following the concept of femur first and combined anteversion: design of a blinded randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Woerner Michael

    2011-08-01

    Background: Impingement can be a serious complication after total hip arthroplasty (THA), and is one of the major causes of postoperative pain, dislocation, aseptic loosening, and implant breakage. Minimally invasive THA and computer-navigated surgery were introduced several years ago. We have developed a novel, computer-assisted operation method for THA following the concept of "femur first"/"combined anteversion", which incorporates various aspects of performing a functional optimization of the cup position, and comprehensively addresses range of motion (ROM) as well as cup containment and alignment parameters. Hence, the purpose of this study is to assess whether the artificial joint's ROM can be improved by this computer-assisted operation method. Second, the clinical and radiological outcome will be evaluated. Methods/Design: A registered patient- and observer-blinded randomized controlled trial will be conducted. Patients between the ages of 50 and 75 admitted for primary unilateral THA will be included. Patients will be randomly allocated to either receive minimally invasive computer-navigated "femur first" THA or the conventional minimally invasive THA procedure. Self-reported functional status and health-related quality of life (questionnaires) will be assessed both preoperatively and postoperatively. Perioperative complications will be registered. Radiographic evaluation will take place up to 6 weeks postoperatively with a computed tomography (CT) scan. Component position will be evaluated by an independent external institute on a 3D reconstruction of the femur/pelvis using image-processing software. Postoperative ROM will be calculated by an algorithm which automatically determines bony and prosthetic impingements. Discussion: In the past, computer navigation has improved the accuracy of component positioning. So far, there are only few objective data quantifying the risks and benefits of computer-navigated THA. Therefore, this

  17. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, Musa

    1998-01-01

    We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (SN) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such SN problems. In the first method, separation of variables is directly applied to the SN equations. In the second method, common characteristics of the SN and PN-1 equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S4 test problems are given to compare the new method with the existing methods
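
    For the special case of isotropic scattering (L = 0), the separation-of-variables route mentioned first reduces to a textbook calculation (sketched here as an illustration; the paper treats general anisotropy):

```latex
% Substituting \psi_m(x) = a_m e^{-x/\nu} into the isotropic S_N equations
% \mu_m \psi_m'(x) + \psi_m(x) = \frac{c}{2}\sum_{n=1}^{N} w_n \psi_n(x) gives
\Bigl(1 - \frac{\mu_m}{\nu}\Bigr) a_m = \frac{c}{2} \sum_{n=1}^{N} w_n a_n ,
\qquad\text{hence}\qquad
1 = \frac{c}{2} \sum_{m=1}^{N} \frac{w_m}{1 - \mu_m/\nu} .
```

    The eigenvalues ν are the roots of the right-hand characteristic equation, with eigenvector components a_m ∝ (1 − μ_m/ν)⁻¹.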

  18. Methods for computing SN eigenvalues and eigenvectors of slab geometry transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.

    1997-01-01

    We discuss computational methods for computing the eigenvalues and eigenvectors of single energy-group neutral particle transport (SN) problems in homogeneous slab geometry, with an arbitrary scattering anisotropy of order L. These eigensolutions are important when exact (or very accurate) solutions are desired for coarse spatial cell problems demanding rapid execution times. Three methods, one of which is 'new', are presented for determining the eigenvalues and eigenvectors of such SN problems. In the first method, separation of variables is directly applied to the SN equations. In the second method, common characteristics of the SN and PN-1 equations are used. In the new method, the eigenvalues and eigenvectors can be computed provided that the cell-interface Green's functions (transmission and reflection factors) are known. Numerical results for S4 test problems are given to compare the new method with the existing methods. (author)

  19. Delamination detection using methods of computational intelligence

    Science.gov (United States)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates that relies on methods of computational intelligence is presented in this paper. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying analysis tool turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets; K-means clustering is effectively employed to reduce the size of the datasets. The ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
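
    The K-means dataset-reduction step can be sketched in plain Python (a toy implementation with deterministic farthest-point seeding, not the authors' code; the sample data are invented stand-ins for frequency-shift feature vectors):

```python
def _d2(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans_reduce(points, k, iters=20):
    """Cluster `points` with k-means and keep one representative per
    cluster, shrinking a large surrogate-training set to k samples."""
    # Deterministic farthest-point seeding.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(_d2(p, c) for c in centroids)))
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: _d2(p, centroids[i]))].append(p)
        # Update step: move each non-empty centroid to its cluster mean.
        centroids = [[sum(xs) / len(cl) for xs in zip(*cl)] if cl else c
                     for cl, c in zip(clusters, centroids)]
    # One representative (the member closest to each centroid) per cluster.
    return [min(cl, key=lambda p: _d2(p, c)) for cl, c in zip(clusters, centroids) if cl]

# Toy feature vectors: two well-separated groups.
data = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]]
reps = kmeans_reduce(data, 2)
```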

  20. Subtraction method of computing QCD jet cross sections at NNLO accuracy

    Science.gov (United States)

    Trócsányi, Zoltán; Somogyi, Gábor

    2008-10-01

    We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report of the implementation of the method for computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.

  1. Subtraction method of computing QCD jet cross sections at NNLO accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Trocsanyi, Zoltan [University of Debrecen and Institute of Nuclear Research of the Hungarian Academy of Sciences, H-4001 Debrecen P.O.Box 51 (Hungary)], E-mail: Zoltan.Trocsanyi@cern.ch; Somogyi, Gabor [University of Zuerich, Winterthurerstrasse 190, CH-8057 Zuerich (Switzerland)], E-mail: sgabi@physik.unizh.ch

    2008-10-15

    We present a general subtraction method for computing radiative corrections to QCD jet cross sections at next-to-next-to-leading order accuracy. The steps needed to set up this subtraction scheme are the same as those used in next-to-leading order computations. However, all steps need non-trivial modifications, which we implement such that they can be defined at any order in perturbation theory. We give a status report of the implementation of the method for computing jet cross sections in electron-positron annihilation at next-to-next-to-leading order accuracy.

  2. Vectorization on the star computer of several numerical methods for a fluid flow problem

    Science.gov (United States)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    Some numerical methods are reexamined in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady-state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  3. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. In this report are described the basic model (02 version), theoretical definitions and computation methods [fr

  4. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol.

    Science.gov (United States)

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-06-16

    To realize low-dose imaging in X-ray computed tomography (CT) examinations, lowering the milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress the significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of the sinogram data; then, to acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation.
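
    The two ingredients of the TV-POCS stage, total-variation descent alternated with projection onto a convex data-consistency set, can be illustrated on a toy 1-D signal (an illustrative sketch only, not the authors' ASR-TV-POCS; the box-constraint set, step size, and data are invented):

```python
def tv(x):
    """Total variation of a 1-D signal."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_pocs(y, eps=0.3, step=0.05, iters=200):
    """Toy 1-D sketch: alternate a TV subgradient-descent step with
    projection onto the data-consistency box |x_i - y_i| <= eps (POCS step)."""
    x = list(y)
    for _ in range(iters):
        # Subgradient of TV at x.
        g = [0.0] * len(x)
        for i in range(len(x) - 1):
            s = (x[i + 1] > x[i]) - (x[i + 1] < x[i])  # sign of forward difference
            g[i] -= s
            g[i + 1] += s
        x = [xi - step * gi for xi, gi in zip(x, g)]
        # Projection onto the convex data-consistency set (a box here).
        x = [min(max(xi, yi - eps), yi + eps) for xi, yi in zip(x, y)]
    return x

noisy = [0.0, 0.4, -0.3, 0.2, 1.0, 1.3, 0.8, 1.1]
smoothed = tv_pocs(noisy)
```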

  5. Development and Application of Computational/In Vitro Toxicological Methods for Chemical Hazard Risk Reduction of New Materials for Advanced Weapon Systems

    Science.gov (United States)

    Frazier, John M.; Mattie, D. R.; Hussain, Saber; Pachter, Ruth; Boatz, Jerry; Hawkins, T. W.

    2000-01-01

    The development of quantitative structure-activity relationships (QSARs) is essential for reducing the chemical hazards of new weapon systems. The current collaboration between HEST (toxicology research and testing), MLPJ (computational chemistry) and PRS (computational chemistry, new propellant synthesis) is focusing R&D efforts on basic research goals that will rapidly transition to useful products for propellant development. Computational methods are being investigated that will assist in forecasting cellular toxicological endpoints. Models developed from these chemical structure-toxicity relationships are useful for the prediction of the toxicological endpoints of new related compounds. Research is focusing on the evaluation tools to be used for the discovery of such relationships and the development of models of the mechanisms of action. Combinations of computational chemistry techniques, in vitro toxicity methods, and statistical correlations will be employed to develop and explore potential predictive relationships; results for series of molecular systems that demonstrate the viability of this approach are reported. A number of hydrazine salts have been synthesized for evaluation. Computational chemistry methods are being used to elucidate the mechanism of action of these salts. Toxicity endpoints such as viability (LDH) and changes in enzyme activity (glutathione peroxidase and catalase) are being experimentally measured as indicators of cellular damage. Extrapolation from computational/in vitro studies to human toxicity is the ultimate goal. The product of this program will be a predictive tool to assist in the development of new, less toxic propellants.

  6. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie

  7. Nuclear computational science a century in review

    CERN Document Server

    Azmy, Yousry

    2010-01-01

    Nuclear engineering has undergone extensive progress over the years. In the past century, colossal developments have been made and with specific reference to the mathematical theory and computational science underlying this discipline, advances in areas such as high-order discretization methods, Krylov Methods and Iteration Acceleration have steadily grown. Nuclear Computational Science: A Century in Review addresses these topics and many more; topics which hold special ties to the first half of the century, and topics focused around the unique combination of nuclear engineering, computational

  8. A systematic and efficient method to compute multi-loop master integrals

    Science.gov (United States)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than sector decomposition, the only existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
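
    Schematically (a generic sketch of the differential-equations idea, not the authors' specific construction), the vector of master integrals I(s, ε) satisfies a closed first-order system in a kinematic variable s:

```latex
\frac{\mathrm{d}}{\mathrm{d}s}\,\vec{I}(s,\epsilon) \;=\; A(s,\epsilon)\,\vec{I}(s,\epsilon),
```

    where the matrix A follows from integration-by-parts reduction. The system is then integrated numerically from a boundary point where the integrals are simple, which is what makes the boundary conditions "almost trivial".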

  9. Computational methods in metabolic engineering for strain design.

    Science.gov (United States)

    Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L

    2015-08-01

    Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native enzymes, heterologous enzymes, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as to suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve runtime performance have also been developed, which allow more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty which has long puzzled engineers and designers. At present, several computational methods of design by analysis, such as Stress Equivalent Linearization, the Two-Step Approach, the Primary Structure method, the Elastic Compensation method and the GLOSS R-Node method, have been developed and applied for calculating and categorizing the stress field of pressure vessel components. Moreover, the ASME code also gives an inelastic method of design by analysis for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes large differences between the results calculated with the different methods mentioned above. As a consequence, this is the main obstacle to wide application of the design-by-analysis approach. Recently, a new approach, presented in the new proposal of a European Standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by directly analyzing the various failure mechanisms of pressure vessel structures based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly, and the computational methods cited in the European pressure vessel standard, such as the Deviatoric Map and nonlinear analysis methods (plastic analysis and limit analysis), are depicted compendiously. Furthermore, the characteristics of the computational methods of design by analysis are summarized to aid in selecting the proper computational method when designing a pressure vessel component by analysis. (authors)

  11. Some questions of using coding theory and analytical calculation methods on computers

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1987-01-01

    Main results of investigations devoted to the application of the theory and practice of error-correcting codes are presented. These results are used to create very fast units for the selection of events registered in multichannel detectors of nuclear particles. Using this theory and analytical calculations on computers, practically new combinational devices, for example parallel encoders, have been developed. Questions concerning the creation of a new algorithm for the calculation of digital functions by computers and problems of devising universal, dynamically reprogrammable logic modules are discussed.

  12. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-01

    research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing

  13. Many Body Methods from Chemistry to Physics: Novel Computational Techniques for Materials-Specific Modelling: A Computational Materials Science and Chemistry Network

    Energy Technology Data Exchange (ETDEWEB)

    Millis, Andrew [Columbia Univ., New York, NY (United States). Dept. of Physics

    2016-11-17

    Understanding the behavior of interacting electrons in molecules and solids, so that one can predict new superconductors, catalysts, light harvesters, and energy and battery materials and optimize existing ones, is the "quantum many-body problem". This is one of the scientific grand challenges of the 21st century. A complete solution to the problem has been proven to be exponentially hard, meaning that straightforward numerical approaches fail. New insights and new methods are needed to provide accurate yet feasible approximate solutions. This CMSCN project brought together chemists and physicists to combine insights from the two disciplines to develop innovative new approaches. Outcomes included the Density Matrix Embedding method, a new, computationally inexpensive and extremely accurate approach that may enable first-principles treatment of the superconducting and magnetic properties of strongly correlated materials; new techniques for existing methods, including an Adaptively Truncated Hilbert Space approach that will vastly expand the capabilities of the dynamical mean-field method; a self-energy embedding theory; and a new memory-function-based approach to calculations of the behavior of driven systems. The methods developed under this project are now being applied to improve our understanding of superconductivity, to calculate novel topological properties of materials, and to characterize and improve the properties of nanoscale devices.

  14. Multiscale methods in computational fluid and solid mechanics

    NARCIS (Netherlands)

    Borst, de R.; Hulshoff, S.J.; Lenz, S.; Munts, E.A.; Brummelen, van E.H.; Wall, W.; Wesseling, P.; Onate, E.; Periaux, J.

    2006-01-01

    First, an attempt is made towards gaining a more systematic understanding of recent progress in multiscale modelling in computational solid and fluid mechanics. Subsequently, the discussion is focused on variational multiscale methods for the compressible and incompressible Navier-Stokes

  15. Computational studies of a cut-wire pair and combined metamaterials

    International Nuclear Information System (INIS)

    Nguyen, Thanh Tung; Lievens, Peter; Lee, Young Pak; Vu, Dinh Lam

    2011-01-01

    The transfer-matrix method and finite-integration simulations show how the transmission properties of combined metamaterials, which consist of metallic cut-wire pairs and continuous wires, are affected by geometric parameters. The corresponding effective permittivity and permeability are retrieved from the complex scattering parameters using the standard retrieval procedure. The electromagnetic properties of the cut-wire pair, as well as the left-handed behavior of the combined structure, are understood by the effective medium theory. In addition, the dependence of the transmission properties on the dimensions, the shapes of the cut-wire pairs and continuous wires, and the dielectric spacer is examined. Finally, by expanding the results of previous research (Koschny et al 2003 Phys. Rev. Lett. 93 016608), we generalize the transmission picture of combined structures in terms of the correlation between electric and magnetic responses. (review)
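
    The "standard retrieval procedure" referred to above is commonly the S-parameter inversion for a homogeneous effective slab of thickness d (sketched here from the textbook relations; the sign of the square root and the branch of the inverse cosine must be fixed by causality and continuity):

```latex
z = \pm\sqrt{\frac{(1+S_{11})^{2}-S_{21}^{2}}{(1-S_{11})^{2}-S_{21}^{2}}},
\qquad
\cos(nkd) = \frac{1 - S_{11}^{2} + S_{21}^{2}}{2\,S_{21}},
```

    from which the effective parameters follow as ε_eff = n/z and μ_eff = n z.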

  16. Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.

    Science.gov (United States)

    Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter

    2015-08-24

    We propose a novel fast method for full-parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light-wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity of calculating the fields. A novel technique for occlusion culling with little additional computational cost is also introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH, a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.
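
    The core accumulation step of the wavefront-recording-plane idea can be sketched as follows (plain Python; the wavelength, pixel pitch, and point data are made-up values, and the look-up-table, occlusion-culling, and plane-to-plane propagation stages of the actual method are omitted):

```python
import cmath
import math

WAVELENGTH = 633e-9           # assumed red-laser wavelength
K = 2 * math.pi / WAVELENGTH  # wavenumber

def wrp_field(points, n=32, pitch=8e-6):
    """Accumulate the complex field of point sources on an n x n wavefront
    recording plane (WRP) placed close to the point cloud.  The full method
    then propagates this small plane to the hologram plane and replaces the
    per-sample exp() evaluations with look-up tables."""
    field = [[0j] * n for _ in range(n)]
    for px, py, pz, amp in points:
        for iy in range(n):
            y = (iy - n // 2) * pitch
            for ix in range(n):
                x = (ix - n // 2) * pitch
                r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
                field[iy][ix] += amp / r * cmath.exp(1j * K * r)  # spherical wavelet
    return field

# Two invented scene points: (x, y, depth, amplitude).
pts = [(0.0, 0.0, 2e-3, 1.0), (5e-5, -5e-5, 2.2e-3, 0.8)]
wrp = wrp_field(pts)
```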

  17. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.
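
    As a flavor of what such codes look like, here is a minimal D1Q3 BGK collide-and-stream step for 1-D advection-diffusion (a generic textbook sketch, not taken from the book; the advection velocity u and relaxation time tau are arbitrary illustrative values):

```python
# Minimal D1Q3 lattice Boltzmann sketch for 1-D advection-diffusion.
W = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]  # lattice weights
C = [0, 1, -1]                         # lattice velocities
CS2 = 1.0 / 3.0                        # lattice speed of sound squared

def lbm_step(f, u=0.1, tau=0.8):
    m = len(f[0])
    rho = [f[0][j] + f[1][j] + f[2][j] for j in range(m)]
    # BGK collision toward the advection-diffusion equilibrium.
    post = [[fi[j] - (fi[j] - W[i] * rho[j] * (1.0 + C[i] * u / CS2)) / tau
             for j in range(m)] for i, fi in enumerate(f)]
    # Streaming with periodic boundaries.
    return [[post[i][(j - C[i]) % m] for j in range(m)] for i in range(3)]

n = 16
rho0 = [1.0 + (0.5 if j == n // 2 else 0.0) for j in range(n)]  # initial pulse
f = [[W[i] * rho0[j] for j in range(n)] for i in range(3)]      # start at rest equilibrium
for _ in range(50):
    f = lbm_step(f)
dens = [f[0][j] + f[1][j] + f[2][j] for j in range(n)]
```

    Mass is conserved exactly by both collision and streaming, while the pulse advects and diffuses with D = CS2·(tau − 1/2).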

  18. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.
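
    The scatter half of the grid/particle interaction described above can be sketched as a 1-D cloud-in-cell charge deposition (a toy kernel in plain Python, not GTC code; the particle data are invented):

```python
def deposit_charge(positions, charges, n_cells, dx=1.0):
    """Linear (cloud-in-cell) charge deposition of particles onto a periodic
    1-D grid -- the scatter kernel at the heart of PIC field solves."""
    rho = [0.0] * n_cells
    for x, q in zip(positions, charges):
        s = x / dx
        j = int(s) % n_cells   # left grid point
        w = s - int(s)         # fractional distance to it
        rho[j] += q * (1.0 - w) / dx             # weight to left node
        rho[(j + 1) % n_cells] += q * w / dx     # weight to right node
    return rho

rho = deposit_charge([0.5, 3.25, 7.9], [1.0, 1.0, -2.0], 8)
```

    The linear weights sum to one per particle, so total charge on the grid equals the total particle charge; the cache behavior of exactly this scatter is one source of the memory-traffic issues the paper discusses.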

  19. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy, because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  20. Short-term electric load forecasting using computational intelligence methods

    OpenAIRE

    Jurado, Sergio; Peralta, J.; Nebot, Àngela; Mugica, Francisco; Cortez, Paulo

    2013-01-01

    Accurate time series forecasting is a key issue to support individual and organizational decision making. In this paper, we introduce several methods for short-term electric load forecasting. All the presented methods stem from computational intelligence techniques: Random Forest, Nonlinear Autoregressive Neural Networks, Evolutionary Support Vector Machines and Fuzzy Inductive Reasoning. The performance of the suggested methods is experimentally justified with several experiments carried out...

  1. Computational method for free surface hydrodynamics

    International Nuclear Information System (INIS)

    Hirt, C.W.; Nichols, B.D.

    1980-01-01

    There are numerous flow phenomena in pressure vessel and piping systems that involve the dynamics of free fluid surfaces. For example, fluid interfaces must be considered during the draining or filling of tanks, in the formation and collapse of vapor bubbles, and in seismically shaken vessels that are partially filled. To aid in the analysis of these types of flow phenomena, a new technique has been developed for the computation of complicated free-surface motions. This technique is based on the concept of a local average volume of fluid (VOF) and is embodied in a computer program for two-dimensional, transient fluid flow called SOLA-VOF. The basic approach used in the VOF technique is briefly described, and compared to other free-surface methods. Specific capabilities of the SOLA-VOF program are illustrated by generic examples of bubble growth and collapse, flows of immiscible fluid mixtures, and the confinement of spilled liquids

  2. A systematic and efficient method to compute multi-loop master integrals

    Directory of Open Access Journals (Sweden)

    Xiao Liu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. It can thus be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but is also much faster than sector decomposition, the only other existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
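The numerical core of such an approach, integrating a system of ODEs from simple boundary values, can be sketched with a classical fourth-order Runge-Kutta stepper. This is a generic illustration of numerically solving an ODE system, not the paper's actual differential equations for master integrals.

```python
def rk4_solve(f, y0, t0, t1, n_steps):
    """Integrate dy/dt = f(t, y) for a vector y with the classical RK4 scheme."""
    h = (t1 - t0) / n_steps
    t, y = t0, list(y0)
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y
```

For a single equation dy/dt = -y with y(0) = 1, the solver reproduces e^(-1) at t = 1 to high accuracy.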

  3. Advanced soft computing diagnosis method for tumour grading.

    Science.gov (United States)

    Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N

    2006-01-01

    To develop an advanced diagnostic method for urinary bladder tumour grading. A novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts. Eight concepts represented the main histopathological features for tumour grading. The ninth concept represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved a classification accuracy of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support tumour grade diagnosis decisions was proposed and developed. The novelty of the method lies in employing the soft computing method of FCMs to represent specialized knowledge on histopathology and in augmenting the FCMs' ability using an unsupervised learning algorithm, the AHL. The proposed method performs with reasonably high accuracy compared to other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
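The FCM inference rule underlying such models can be sketched as repeated sigmoid-squashed weighted sums over the concept activations. The AHL weight-adaptation step is omitted, and the weight matrix below is purely illustrative, not the model from the paper.

```python
import math

def fcm_step(activations, weights):
    """One FCM inference step: A_i' = sigmoid(A_i + sum_{j != i} w[j][i] * A_j)."""
    n = len(activations)
    new = []
    for i in range(n):
        s = activations[i] + sum(weights[j][i] * activations[j]
                                 for j in range(n) if j != i)
        new.append(1.0 / (1.0 + math.exp(-s)))   # squash into (0, 1)
    return new

def fcm_run(activations, weights, iters=50):
    """Iterate the map until (in practice) the activations settle."""
    for _ in range(iters):
        activations = fcm_step(activations, weights)
    return activations
```

In the grading application the converged activation of the ninth concept would be read off as the predicted tumour grade.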

  4. Improved computation method in residual life estimation of structural components

    Directory of Open Access Journals (Sweden)

    Maksimović Stevan M.

    2013-01-01

    This work considers the numerical computation methods and procedures for predicting fatigue crack growth in cracked, notched structural components. The computation method is based on fatigue life prediction using the strain energy density approach. Based on strain energy density (SED) theory, a fatigue crack growth model is developed to predict the lifetime of fatigue crack growth for single or mixed-mode cracks. The model is based on an equation expressed in terms of low-cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the front of the crack. To obtain an efficient computation model, the plasticity-induced crack closure phenomenon is considered during fatigue crack growth. The use of the strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components. The strain energy density method is easy to use in engineering applications since it does not require any additional determination of fatigue parameters (those would need to be separately determined for the fatigue crack propagation phase), and low-cycle fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years. The influence of this phenomenon can be considered by means of experimental and numerical methods. Both of these models are considered. Finite element analysis (FEA) has been shown to be a powerful and useful tool to analyze crack growth and crack closure effects. Computation results are compared with available experimental results. [Projekat Ministarstva nauke Republike Srbije, br. OI 174001]

  5. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    International Nuclear Information System (INIS)

    Norris, Edward T.; Liu, Xin; Hsieh, Jiang

    2015-01-01

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansions. A Monte Carlo simulation was also performed to benchmark the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of the deterministic simulation with quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer.

  6. Efficient method for computing the electronic transport properties of a multiterminal system

    Science.gov (United States)

    Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio

    2018-04-01

    We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.

  7. Particular application of methods of AdaBoost and LBP to the problems of computer vision

    OpenAIRE

    Волошин, Микола Володимирович

    2012-01-01

    The application of the AdaBoost method and the local binary pattern (LBP) method to different spheres of computer vision, such as personality identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for computer vision applications, and for computer iridology in particular. This article considers the problem of colour spaces, which are used as a filter and as a pre-processing step for images. Method of AdaB...

  8. QUANTUM INSPIRED PARTICLE SWARM COMBINED WITH LIN-KERNIGHAN-HELSGAUN METHOD TO THE TRAVELING SALESMAN PROBLEM

    Directory of Open Access Journals (Sweden)

    Bruno Avila Leal de Meirelles Herrera

    2015-12-01

    The Traveling Salesman Problem (TSP) is one of the most well-known and studied problems in the Operations Research field, more specifically, in the Combinatorial Optimization field. As the TSP is an NP-hard (non-deterministic polynomial-time) problem, several heuristic methods have been proposed over the past decades in the attempt to solve it as well as possible. The aim of this work is to introduce and evaluate the performance of some approaches for achieving optimal solutions on symmetric and asymmetric TSP instances taken from the Traveling Salesman Problem Library (TSPLIB). The analyzed approaches were divided into three methods: (i) the Lin-Kernighan-Helsgaun (LKH) algorithm; (ii) LKH with an initial tour based on a uniform distribution; and (iii) a hybrid proposal combining Particle Swarm Optimization (PSO) with quantum-inspired behavior and LKH for the local search procedure. The tested algorithms presented promising results in terms of computational cost and solution quality.
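LKH's sequential k-opt moves are far more sophisticated than anything shown here; as a minimal illustration of TSP tour construction plus local search, the following sketch pairs a nearest-neighbour heuristic with 2-opt improvement.

```python
import math

def tour_length(tour, pts):
    """Total length of the closed tour visiting pts in the order given by tour."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(pts):
    """Greedy construction: always visit the closest unvisited city next."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    """Repeatedly reverse tour segments while doing so strictly shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour
```

On the four corners of a square this recovers the optimal perimeter tour; real instances need the stronger moves and candidate lists that LKH provides.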

  9. Methods for planning and operating decentralized combined heat and power plants

    Energy Technology Data Exchange (ETDEWEB)

    Palsson, H.

    2000-02-01

    In recent years, the number of decentralized combined heat and power (DCHP) plants, which are typically located in small communities, has grown rapidly. These relatively small plants are based on Danish energy resources, mainly natural gas, and constitute an increasing part of the total energy production in Denmark. The topic of this thesis is the analysis of DCHP plants, with the purpose of optimizing the operation of such plants. This involves the modelling of district heating systems, which are frequently connected to DCHP plants, as well as the use of heat storage for balancing heat and power production. Furthermore, the accumulated effect of an increasing number of DCHP plants on the total power production is considered. Methods for calculating the dynamic temperature response in district heating (DH) pipes have been reviewed and analyzed numerically. Furthermore, it has been shown that a tree-structured DH network consisting of about one thousand pipes can be reduced to a simple chain structure of ten equivalent pipes without losing much accuracy when temperature dynamics are calculated. A computationally efficient optimization method based on stochastic dynamic programming has been designed to find an optimum start-stop strategy for a DCHP plant with heat storage. The method focuses on how to utilize heat storage in connection with CHP production. A model for the total power production in Eastern Denmark has been applied to the accumulated DCHP production. Probability production simulations have been extended from the traditional power-only analysis to include one or several heat supply areas. (au)
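The start-stop optimization with heat storage can be miniaturized as a dynamic program over (storage level, on/off) states. The following toy model uses hypothetical costs and integer heat units, and is a sketch of the DP idea only, not the thesis's stochastic formulation.

```python
def optimal_schedule(demand, q_on=2, fuel_cost=1.0, start_cost=0.5, s_max=4, s0=2):
    """Minimize fuel plus start-up cost over hourly on/off decisions,
    subject to meeting heat demand via a bounded heat store.
    State: (storage level, was the plant on last hour)."""
    INF = float('inf')
    best = {(s0, 0): 0.0}          # initial storage, plant initially off
    for d in demand:
        nxt = {}
        for (s, was_on), cost in best.items():
            for on in (0, 1):
                s2 = s + on * q_on - d          # storage balance this hour
                if 0 <= s2 <= s_max:            # store must stay within bounds
                    c = cost + on * fuel_cost \
                        + (1 if on and not was_on else 0) * start_cost
                    if c < nxt.get((s2, on), INF):
                        nxt[(s2, on)] = c
        best = nxt
    return min(best.values()) if best else INF
```

With a flat demand of one unit per hour and two units of initial storage, the cheapest plan runs the plant for a single hour (one start-up plus one hour of fuel); an infeasible demand profile returns infinity.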

  10. A typology of health marketing research methods--combining public relations methods with organizational concern.

    Science.gov (United States)

    Rotarius, Timothy; Wan, Thomas T H; Liberman, Aaron

    2007-01-01

    Research plays a critical role throughout virtually every conduit of the health services industry. The key terms of research, public relations, and organizational interests are discussed. Combining public relations as a strategic methodology with the organizational concern as a factor, a typology of four different research methods emerges. These four health marketing research methods are: investigative, strategic, informative, and verification. The implications of these distinct and contrasting research methods are examined.

  11. Fluid history computation methods for reactor safeguards problems using MNODE computer program

    International Nuclear Information System (INIS)

    Huang, Y.S.; Savery, C.W.

    1976-10-01

    A method for predicting the pressure-temperature histories of air, water liquid, and vapor flowing in a zoned containment as a result of a high-energy pipe rupture is described. The computer code, MNODE, has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamics problem, semiscale blowdown experiments, full-scale MARVIKEN test results, and Battelle-Frankfurt model PWR containment test data. The MNODE solutions to NRC/AEC subcompartment benchmark problems are also compared with results predicted by other computer codes such as RELAP-3, FLASH-2, and CONTEMPT-PS. The analytical consideration is consistent with Section 6.2.1.2 of the Standard Format (Rev. 2) issued by the U.S. Nuclear Regulatory Commission in September 1975.

  12. Effect of combined teaching method (role playing and storytelling ...

    African Journals Online (AJOL)

    Effect of combined teaching method (role playing and storytelling) on creative ... Background and Purpose: Storytelling promotes imagination and satisfies curiosity in children and creates learning opportunities in them.

  13. New Combined Electron-Beam Methods of Wastewater Purification

    International Nuclear Information System (INIS)

    Pikaev, A.K.; Makarov, I.E.; Ponomarev, A.V.; Kartasheva, L.I.; Podzorova, E.A.; Chulkov, V.N.; Han, B.; Kim, D.K.

    1999-01-01

    The paper briefly reviews results, obtained with the authors' participation, from studies of combined electron-beam methods for the purification of some wastewaters. Data on the purification of wastewaters containing dyes or hydrogen peroxide, and of municipal wastewater in an aerosol flow, are considered.

  14. The e/h method of energy reconstruction for combined calorimeter

    International Nuclear Information System (INIS)

    Kul'chitskij, Yu.A.; Kuz'min, M.V.; Vinogradov, V.B.

    1999-01-01

    The new simple method of the energy reconstruction for a combined calorimeter, which we called the e/h method, is suggested. It uses only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. The method has been tested on the basis of the 1996 test beam data of the ATLAS barrel combined calorimeter and demonstrated the correctness of the reconstruction of the mean values of energies. The obtained fractional energy resolution is [(58 ± 3)%/√E + (2.5 ± 0.3)%] ⊕ (1.7 ± 0.2) GeV/E. This algorithm can be used for the fast energy reconstruction in the first level trigger.
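The quoted resolution combines a sampling term and a constant term added linearly, with a noise term added in quadrature. Evaluating it at the central values (treating the uncertainties as fixed) can be sketched as:

```python
import math

def fractional_resolution(E, a=0.58, b=0.025, c=1.7):
    """sigma/E = (a/sqrt(E) + b) (+) c/E, where (+) is a quadrature sum;
    E in GeV, with a, b, c the central values of the quoted fit."""
    return math.sqrt((a / math.sqrt(E) + b) ** 2 + (c / E) ** 2)
```

At E = 100 GeV this gives a fractional resolution of about 8.5%, and the resolution improves (decreases) with energy as expected.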

  15. Cardiac single-photon emission-computed tomography using combined cone-beam/fan-beam collimation

    International Nuclear Information System (INIS)

    Gullberg, Grant T.; Zeng, Gengsheng L.

    2004-01-01

    The objective of this work is to increase system sensitivity in cardiac single-photon emission-computed tomography (SPECT) studies without increasing patient imaging time. For imaging the heart, convergent collimation offers the potential of increased sensitivity over that of parallel-hole collimation. However, if a cone-beam collimated gamma camera is rotated in a planar orbit, the projection data obtained are not complete. Two cone-beam collimators and one fan-beam collimator are used with a three-detector SPECT system. The combined cone-beam/fan-beam collimation provides a complete set of data for image reconstruction. The imaging geometry is evaluated using data acquired from phantom and patient studies. For the Jaszczak cardiac torso phantom experiment, the combined cone-beam/fan-beam collimation provided 1.7 times greater sensitivity than standard parallel-hole collimation (low-energy high-resolution collimators). Also, phantom and patient comparison studies showed improved image quality. The combined cone-beam/fan-beam imaging geometry with appropriate weighting of the two data sets provides improved system sensitivity while measuring sufficient data for artifact-free cardiac images.

  16. Computing homography with RANSAC algorithm: a novel method of registration

    Science.gov (United States)

    Li, Xiaowei; Liu, Yue; Wang, Yongtian; Yan, Dayuan

    2005-02-01

    An AR (Augmented Reality) system can integrate computer-generated objects with the image sequences of real world scenes in either an off-line or a real-time way. Registration, or camera pose estimation, is one of the key techniques that determine its performance. Registration methods can be classified as model-based and move-matching. The former approach can accomplish relatively accurate registration results, but it requires a precise model of the scene, which is hard to obtain. The latter approach carries out registration by computing the ego-motion of the camera. Because it does not require prior knowledge of the scene, its registration results sometimes turn out to be less accurate. When the model defined is as simple as a plane, a mixed method is introduced to take advantage of the virtues of the two methods mentioned above. Although unexpected objects often occlude this plane in an AR system, one can still try to detect corresponding points with a contract-expand method, although this will introduce erroneous correspondences. Computing the homography with the RANSAC algorithm is used to overcome such shortcomings. Using the robustly estimated homography resulting from RANSAC, the camera projective matrix can be recovered and thus registration is accomplished even when the markers are lost in the scene.
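A homography fit needs a four-point DLT solve, so the RANSAC principle is shown here on the simpler model of a 2D line: sample a minimal set, fit, count inliers, and keep the best model. This is a hedged sketch of the general algorithm, not the paper's registration code.

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """RANSAC for y = m*x + b: repeatedly fit a minimal 2-point sample and
    keep the model with the most inliers (final refit over inliers omitted)."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                       # degenerate sample, skip
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [p for p in points if abs(p[1] - (m * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers
```

Even with gross outliers mixed in, the consensus set singles out the true model, which is exactly why RANSAC suppresses erroneous point correspondences in homography estimation.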

  17. Entropy method combined with extreme learning machine method for the short-term photovoltaic power generation forecasting

    International Nuclear Information System (INIS)

    Tang, Pingzhou; Chen, Di; Hou, Yushuo

    2016-01-01

    As the world’s energy problem becomes more severe day by day, photovoltaic power generation has undoubtedly opened a new door for us. It will provide an effective solution to this severe energy problem and meet our needs for energy if we can apply photovoltaic power generation in real life. Like wind power generation, photovoltaic power generation is uncertain. Therefore, forecasting photovoltaic power generation is crucial. In this paper, the entropy method and the extreme learning machine (ELM) method were combined to forecast short-term photovoltaic power generation. First, the entropy method is used to process the initial data; the network is trained on the unified data and then used to forecast electricity generation. Finally, the results obtained through the entropy method with ELM were compared with those generated through the generalized regression neural network (GRNN) and radial basis function (RBF) neural network methods. We found that the entropy method combined with the ELM method achieves higher accuracy and faster calculation.
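An ELM in its basic form is a random sigmoid hidden layer plus a linear least-squares fit of the output weights. The stdlib-only sketch below omits the paper's entropy preprocessing and adds a small ridge term for numerical stability; all parameters are illustrative.

```python
import math
import random

def train_elm(X, y, n_hidden=10, ridge=1e-6, seed=1):
    """Train an extreme learning machine: random hidden layer, output
    weights fitted by (ridge-stabilized) linear least squares."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]

    def hidden(x):
        return [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(ws, x)) + bi)))
                for ws, bi in zip(W, b)]

    H = [hidden(x) for x in X]
    n = n_hidden
    # Normal equations (H^T H + ridge*I) beta = H^T y, via Gauss-Jordan elimination.
    A = [[sum(h[i] * h[j] for h in H) + (ridge if i == j else 0.0) for j in range(n)]
         + [sum(H[k][i] * y[k] for k in range(len(H)))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    beta = [A[i][n] / A[i][i] for i in range(n)]
    return lambda x: sum(bi * hi for bi, hi in zip(beta, hidden(x)))
```

Only the output weights are learned, which is why ELM training reduces to a single linear solve and is fast compared with iteratively trained networks.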

  18. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
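The claimed formula is simple enough to state directly. A sketch with a hypothetical function name, computing the sum per time period as the patent describes:

```python
def future_facility_conditions(maintenance_costs, modernization_factors, backlog_factors):
    """Per the described formula: for each time period, future facility conditions
    equal the maintenance cost plus the modernization factor plus the backlog factor."""
    return [m + z + b for m, z, b in
            zip(maintenance_costs, modernization_factors, backlog_factors)]
```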

  19. Regularization and computational methods for precise solution of perturbed orbit transfer problems

    Science.gov (United States)

    Woollands, Robyn Michele

    The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these
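The plain Picard iteration that modified Chebyshev-Picard methods build on can be sketched on a uniform grid; the Chebyshev polynomial basis and the modifications are omitted, and the function names are illustrative.

```python
def picard_iterate(f, y0, t_grid, n_iter=20):
    """Fixed-point (Picard) iteration y_{k+1}(t) = y0 + int_0^t f(s, y_k(s)) ds,
    with the integral evaluated by the trapezoidal rule on t_grid."""
    y = [y0 for _ in t_grid]                  # initial guess: constant y0
    for _ in range(n_iter):
        fy = [f(t, yt) for t, yt in zip(t_grid, y)]
        new = [y0]
        acc = 0.0
        for i in range(1, len(t_grid)):
            acc += 0.5 * (fy[i - 1] + fy[i]) * (t_grid[i] - t_grid[i - 1])
            new.append(y0 + acc)
        y = new
    return y
```

Note the key property the dissertation exploits: the whole trajectory is updated at once from the previous iterate, with no state transition matrix and no shooting, at the cost of a limited domain of convergence per segment.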

  20. Recent advances in computational methods and clinical applications for spine imaging

    CERN Document Server

    Glocker, Ben; Klinder, Tobias; Li, Shuo

    2015-01-01

    This book contains the full papers presented at the MICCAI 2014 workshop on Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together scientists and clinicians in the field of computational spine imaging. The chapters included in this book present and discuss the new advances and challenges in these fields, using several methods and techniques in order to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modeling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis. The book also includes papers and reports from the first challenge on vertebra segmentation held at the workshop.

  1. The use of combined single photon emission computed tomography and X-ray computed tomography to assess the fate of inhaled aerosol.

    Science.gov (United States)

    Fleming, John; Conway, Joy; Majoral, Caroline; Tossici-Bolt, Livia; Katz, Ira; Caillibotte, Georges; Perchet, Diane; Pichelin, Marine; Muellinger, Bernhard; Martonen, Ted; Kroneberg, Philipp; Apiou-Sbirlea, Gabriela

    2011-02-01

    Gamma camera imaging is widely used to assess pulmonary aerosol deposition. Conventional planar imaging provides limited information on its regional distribution. In this study, single photon emission computed tomography (SPECT) was used to describe deposition in three dimensions (3D) and combined with X-ray computed tomography (CT) to relate this to lung anatomy. Its performance was compared to planar imaging. Ten SPECT/CT studies were performed on five healthy subjects following carefully controlled inhalation of radioaerosol from a nebulizer, using a variety of inhalation regimes. The 3D spatial distribution was assessed using a central-to-peripheral ratio (C/P) normalized to lung volume and for the right lung was compared to planar C/P analysis. The deposition by airway generation was calculated for each lung and the conducting airways deposition fraction compared to 24-h clearance. The 3D normalized C/P ratio correlated more closely with 24-h clearance than the 2D ratio for the right lung [coefficient of variation (COV) 9%, compared to 15%]. Combined SPECT/CT imaging with computer analysis is a useful approach for applications requiring regional information on deposition.

  2. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  3. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine, and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method driven by result needs to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements necessary for the result matrix. In return, the subsequent calculation becomes simple and the I/O transmission cost is reduced. We run experiments on several matrices of different sizes and sparsity degrees. The results show that the proposed method has better computational efficiency than traditional blocking methods.
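For reference, the underlying PageRank iteration that such blocking schemes parallelize can be sketched as follows (dense, single-threaded; the paper's blocking strategy is not shown, and the damping factor is the conventional 0.85).

```python
def pagerank(links, d=0.85, iters=100):
    """Power iteration on the PageRank equation.
    links[i] is the list of pages that page i links to."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n           # teleportation mass
        for i, outs in enumerate(links):
            if outs:
                share = d * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:                            # dangling node: spread evenly
                for j in range(n):
                    new[j] += d * rank[i] / n
        rank = new
    return rank
```

The inner scatter over `outs` is the sparse matrix-vector product that blocking methods partition across processors.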

  4. Semi top-down method combined with earth-bank, an effective method for basement construction.

    Science.gov (United States)

    Tuan, B. Q.; Tam, Ng M.

    2018-04-01

    Choosing an appropriate method of deep excavation plays a decisive role not only in the technical success but also in the economics of a construction project. Presently, we mainly rely on two key methods: the “Bottom-up” and “Top-down” construction methods. This paper presents another method of construction, “Semi top-down combined with earth-bank”, in order to take the advantages and limit the weaknesses of the above methods. The Bottom-up method was improved by using an earth-bank to stabilize the retaining walls instead of bracing steel struts. The Top-down method was improved by using the open-cut method for half of the earthwork quantities.

  5. In silico toxicology: computational methods for the prediction of chemical toxicity

    KAUST Repository

    Raies, Arwa B.; Bajic, Vladimir B.

    2016-01-01

    Determining the toxicity of chemicals is necessary to identify their harmful effects on humans, animals, plants, or the environment. It is also one of the main steps in drug design. Animal models have been used for a long time for toxicity testing. However, in vivo animal tests are constrained by time, ethical considerations, and financial burden. Therefore, computational methods for estimating the toxicity of chemicals are considered useful. In silico toxicology is one type of toxicity assessment that uses computational methods to analyze, simulate, visualize, or predict the toxicity of chemicals. In silico toxicology aims to complement existing toxicity tests to predict toxicity, prioritize chemicals, guide toxicity tests, and minimize late-stage failures in drug design. There are various methods for generating models to predict toxicity endpoints. We provide a comprehensive overview, explain, and compare the strengths and weaknesses of the existing modeling methods and algorithms for toxicity prediction, with a particular (but not exclusive) emphasis on computational tools that can implement these methods, and refer to expert systems that deploy the prediction models. Finally, we briefly review a number of new research directions in in silico toxicology and provide recommendations for designing in silico models.

  7. Multigrid methods for the computation of propagators in gauge fields

    International Nuclear Information System (INIS)

    Kalkreuter, T.

    1992-11-01

    In the present work, generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions, and present an efficient algorithm for computing the averaging kernels C numerically. These kernels can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)

  8. DDR: Efficient computational method to predict drug–target interactions using graph mining and machine learning approaches

    KAUST Repository

    Olayan, Rawan S.

    2017-11-23

    Motivation: Computationally predicting drug-target interactions (DTIs) is a convenient strategy for identifying new DTIs at low cost with reasonable accuracy. However, current DTI prediction methods suffer from a high false-positive rate. Results: We developed DDR, a novel method that improves DTI prediction accuracy. DDR is based on a heterogeneous graph that contains known DTIs together with multiple similarities between drugs and multiple similarities between target proteins. DDR applies a non-linear similarity fusion method to combine the different similarities. Before fusion, DDR performs a pre-processing step in which a subset of similarities is selected heuristically to obtain an optimized combination of similarities. Then, DDR applies a random forest model using different graph-based features extracted from the DTI heterogeneous graph. Using five repeats of 10-fold cross-validation, three testing setups, and the weighted average of area under the precision-recall curve (AUPR) scores, we show that DDR significantly reduces the AUPR error relative to the next best state-of-the-art method for predicting DTIs: by 34% when the drugs are new, by 23% when the targets are new, and by 34% when the drugs and the targets are known but not all DTIs between them are known. Using independent sources of evidence, we verify as correct 22 of the top 25 novel DDR predictions. This suggests that DDR can be used as an efficient method to identify correct DTIs.

  9. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the different realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This third issue introduces continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  10. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models mean that calibrating all model parameters, or estimating all of their uncertainties, is often computationally infeasible. Hence, techniques that determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indices. Bootstrapping, however, may itself become computationally expensive for large model outputs and a high number of bootstrap samples. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indices without performing any additional model run. The technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indices. To demonstrate that the convergence testing is independent of the SA method, we applied it to two widely used global SA methods: the screening method known as the Morris or Elementary Effects method (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) for which the true indices of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA.
The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an
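The Morris screening mentioned in the abstract ranks parameters by their elementary effects. As a hedged illustration (the toy model, sampling scheme, and parameter ranges below are hypothetical, and this sketches only the mu* statistic of the Morris method, not the MVA convergence test itself):

```python
import random

def elementary_effects(f, dim, n_traj=50, delta=0.1, seed=0):
    """One-at-a-time elementary effects EE_i = (f(x + delta*e_i) - f(x)) / delta,
    averaged in absolute value over random base points (the mu* statistic)."""
    rng = random.Random(seed)
    mu_star = [0.0] * dim
    for _ in range(n_traj):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(dim)]
        fx = f(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            mu_star[i] += abs((f(xp) - fx) / delta)
    return [m / n_traj for m in mu_star]

# Toy model: y depends strongly on x0, weakly on x1, and not at all on x2.
model = lambda x: 10.0 * x[0] + 0.5 * x[1]
scores = elementary_effects(model, dim=3)
```

For this linear toy model the elementary effects are exact, so the scores recover the coefficients 10, 0.5, and 0, which is the ranking a subsequent calibration would act on.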

  11. Inversion of potential field data using the finite element method on parallel computers

    Science.gov (United States)

    Gross, L.; Altinay, C.; Shaw, S.

    2015-11-01

    In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, with the Hessian operator of the regularization and cross-gradient components of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs, namely for the potential fields, for the adjoint defects, and for the application of the preconditioner. As an extension of the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of the regularization and cross-gradient terms but is independent of the resolution of the PDE discretization, and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.

  12. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base, giving the code name and title, a short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or a directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC formats.

  13. Practical methods to improve the development of computational software

    International Nuclear Information System (INIS)

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-01-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  14. Magic Pointing for Eyewear Computers

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Mardanbegi, Diako; Pederson, Thomas

    2015-01-01

    In this paper, we propose a combination of head and eye movements for touchlessly controlling the "mouse pointer" on eyewear devices, exploiting the speed of eye pointing and the accuracy of head pointing. The method is a wearable-computer-targeted variation of the original MAGIC pointing approach, which combined gaze tracking with a classical mouse device. The results of our experiment show that the combination of eye and head movements is faster than head pointing for far targets and more accurate than eye pointing.

  15. Automated lesion detection on MRI scans using combined unsupervised and supervised methods

    International Nuclear Information System (INIS)

    Guo, Dazhou; Fridriksson, Julius; Fillmore, Paul; Rorden, Christopher; Yu, Hongkai; Zheng, Kang; Wang, Song

    2015-01-01

    Accurate and precise detection of brain lesions on MR images (MRI) is paramount for accurately relating lesion location to impaired behavior. In this paper, we present a novel method to automatically detect brain lesions from a T1-weighted 3D MRI. The proposed method combines the advantages of both unsupervised and supervised methods. First, unsupervised methods perform a unified segmentation normalization to warp images from the native space into a standard space and to generate probability maps for different tissue types, e.g., gray matter, white matter and fluid. This allows us to construct an initial lesion probability map by comparing the normalized MRI to healthy control subjects. Then, we perform non-rigid and reversible atlas-based registration to refine the probability maps of gray matter, white matter, external CSF, ventricle, and lesions. These probability maps are combined with the normalized MRI to construct three types of features, with which we use supervised methods to train three support vector machine (SVM) classifiers for a combined classifier. Finally, the combined classifier is used to accomplish lesion detection. We tested this method using T1-weighted MRIs from 60 in-house stroke patients. Using leave-one-out cross-validation, the proposed method achieves an average Dice coefficient of 73.1% when compared to lesion maps hand-delineated by trained neurologists. Furthermore, we tested the proposed method on the T1-weighted MRIs in the MICCAI BRATS 2012 dataset, where it achieves an average Dice coefficient of 66.5% in comparison to the expert-annotated tumor maps provided with the dataset. In addition, on these two test datasets, the proposed method shows performance competitive with three state-of-the-art methods, namely those of Stamatakis et al., Seghier et al., and Sanjuan et al.
In this paper, we introduced a novel automated procedure for lesion detection from T1-weighted MRIs by combining both an unsupervised and a

  16. An Adaptive Reordered Method for Computing PageRank

    Directory of Open Access Journals (Sweden)

    Yi-Ming Bu

    2013-01-01

    We propose an adaptive reordered method for the PageRank problem. It has been shown that one can reorder the hyperlink matrix of the PageRank problem so as to solve a reduced system and obtain the full PageRank vector through forward substitutions, which speeds up the calculation of the PageRank vector. We observe that in the existing reordered method, the cost of the recursive reordering procedure can offset the computational reduction gained by minimizing the dimension of the linear system. With this observation, we introduce an adaptive reordered method that accelerates the total calculation by terminating the reordering procedure at an appropriate point instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.
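For context on what the reordered method accelerates, here is a minimal power-iteration baseline for PageRank (the toy three-node graph and damping factor are hypothetical; the paper's method instead solves a reordered linear system with forward substitutions):

```python
def pagerank(links, d=0.85, tol=1e-10, max_iter=1000):
    """Basic power iteration for PageRank; `links` maps node -> list of out-neighbours."""
    nodes = list(links)
    n = len(nodes)
    r = {v: 1.0 / n for v in nodes}
    for _ in range(max_iter):
        r_new = {v: (1.0 - d) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = d * r[v] / len(outs)
                for w in outs:
                    r_new[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    r_new[w] += d * r[v] / n
        if sum(abs(r_new[v] - r[v]) for v in nodes) < tol:
            return r_new
        r = r_new
    return r

ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

On this toy graph, node c accumulates the most rank (it is linked by both a and b), and the ranks sum to one; the reordering trick in the abstract targets exactly the linear system this iteration solves implicitly.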

  17. On the potential of computational methods and numerical simulation in ice mechanics

    International Nuclear Information System (INIS)

    Bergan, Paal G; Cammaert, Gus; Skeie, Geir; Tharigopula, Venkatapathi

    2010-01-01

    This paper deals with the challenge of developing better methods and tools for analysing interaction between sea ice and structures and, in particular, to be able to calculate ice loads on these structures. Ice loads have traditionally been estimated using empirical data and 'engineering judgment'. However, it is believed that computational mechanics and advanced computer simulations of ice-structure interaction can play an important role in developing safer and more efficient structures, especially for irregular structural configurations. The paper explains the complexity of ice as a material in computational mechanics terms. Some key words here are large displacements and deformations, multi-body contact mechanics, instabilities, multi-phase materials, inelasticity, time dependency and creep, thermal effects, fracture and crushing, and multi-scale effects. The paper points towards the use of advanced methods like ALE formulations, mesh-less methods, particle methods, XFEM, and multi-domain formulations in order to deal with these challenges. Some examples involving numerical simulation of interaction and loads between level sea ice and offshore structures are presented. It is concluded that computational mechanics may prove to become a very useful tool for analysing structures in ice; however, much research is still needed to achieve satisfactory reliability and versatility of these methods.

  18. An efficient method for computing the absorption of solar radiation by water vapor

    Science.gov (United States)

    Chou, M.-D.; Arking, A.

    1981-01-01

    Chou and Arking (1980) have developed a fast but accurate method for computing the IR cooling rate due to water vapor. Using a similar approach, the present investigation develops a method for computing the heating rates due to the absorption of solar radiation by water vapor in the wavelength range from 4 to 8.3 micrometers. The validity of the method is verified by comparison with line-by-line calculations. An outline is provided of an efficient method for transmittance and flux computations based upon actual line parameters. High speed is achieved by employing a one-parameter scaling approximation to convert an inhomogeneous path into an equivalent homogeneous path at suitably chosen reference conditions.
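The one-parameter scaling idea can be sketched as follows; the absorption coefficient, reference pressure, scaling exponent, layer amounts, and the simple exponential transmittance law below are all hypothetical illustration values, not the coefficients of the actual scheme:

```python
import math

def scaled_transmittance(layers, k=0.2, p_ref=300.0, m=0.8):
    """One-parameter scaling: replace an inhomogeneous absorber path by an
    equivalent homogeneous amount w_eff = sum_i w_i * (p_i / p_ref)**m at the
    reference pressure, then apply a single-path transmittance law
    T = exp(-k * w_eff).  `layers` lists (absorber amount w_i, pressure p_i)."""
    w_eff = sum(w * (p / p_ref) ** m for w, p in layers)
    return math.exp(-k * w_eff)

# Hypothetical 3-layer path (amounts in g/cm^2, pressures in hPa):
T = scaled_transmittance([(0.5, 900.0), (0.3, 500.0), (0.1, 200.0)])
```

The point of the scaling is that the expensive layer-by-layer inhomogeneous calculation collapses to one evaluation of a homogeneous-path transmittance function.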

  19. 5th International Workshop on Combinations of Intelligent Methods and Applications

    CERN Document Server

    Palade, Vasile; Prentzas, Jim

    2017-01-01

    Complex problems usually cannot be solved by individual methods or techniques and require the synergism of more than one of them to be solved. This book presents a number of current efforts that use combinations of methods or techniques to solve complex problems in the areas of sentiment analysis, search in GIS, graph-based social networking, intelligent e-learning systems, data mining and recommendation systems. Most of them are connected with specific applications, whereas the rest are combinations based on principles. Most of the chapters are extended versions of the corresponding papers presented in CIMA-15 Workshop, which took place in conjunction with IEEE ICTAI-15, in November 2015. The rest are invited papers that responded to special call for papers for the book. The book is addressed to researchers and practitioners from academia or industry, who are interested in using combined methods in solving complex problems in the above areas.

  20. A fast computation method for MUSIC spectrum function based on circular arrays

    Science.gov (United States)

    Du, Zhengdong; Wei, Ping

    2015-02-01

    The heavy computation of the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction-finding systems using the MUSIC algorithm, especially in two-dimensional direction-of-arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array. Then, in calculating the MUSIC spectrum, the cyclic structure of the steering vector allows the inner products in the spatial-spectrum computation to be realised by cyclic convolution. The computational load of the MUSIC spectrum is markedly lower than that of the conventional method, making this a very practical approach for MUSIC spectrum computation with circular arrays.
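As a reference point for what the cyclic-convolution trick accelerates, here is the conventional direct evaluation of the MUSIC pseudospectrum; the 8-element uniform circular array, radius, source signals, and noise level are hypothetical:

```python
import numpy as np

def music_spectrum(X, n_src, steering, n_grid=720):
    """Direct MUSIC pseudospectrum P(theta) = 1 / ||E_n^H a(theta)||^2.
    The paper replaces these inner products by cyclic convolutions; here
    they are evaluated directly for reference."""
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    _, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
    En = V[:, : X.shape[0] - n_src]              # noise-subspace eigenvectors
    thetas = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    A = np.stack([steering(t) for t in thetas], axis=1)
    return thetas, 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

# Hypothetical 8-element uniform circular array of radius one wavelength:
N, r_lam = 8, 1.0
def uca_steering(theta):
    phi = 2.0 * np.pi * np.arange(N) / N         # element angular positions
    return np.exp(2j * np.pi * r_lam * np.cos(theta - phi))

rng = np.random.default_rng(0)
true_doas = [0.8, 2.5]                           # azimuths in radians
S = np.exp(2j * np.pi * rng.random((2, 400)))    # unit-modulus source signals
A_true = np.stack([uca_steering(t) for t in true_doas], axis=1)
X = A_true @ S + 0.05 * (rng.standard_normal((N, 400))
                         + 1j * rng.standard_normal((N, 400)))
thetas, P = music_spectrum(X, n_src=2, steering=uca_steering)
```

The grid search over `thetas` is where the conventional cost lies: each grid point requires an inner product with every noise-subspace eigenvector, which the cyclic-convolution formulation batches via FFTs.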

  1. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require the use of methods that have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied with the aim of qualifying these methods at IPEN-CNEN/SP and COPESP. The use of variance reduction techniques is important to reduce computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis of the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, and that the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)

  2. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  3. Advanced computational tools and methods for nuclear analyses of fusion technology systems

    International Nuclear Information System (INIS)

    Fischer, U.; Chen, Y.; Pereslavtsev, P.; Simakov, S.P.; Tsige-Tamirat, H.; Loughlin, M.; Perel, R.L.; Petrizzi, L.; Tautges, T.J.; Wilson, P.P.H.

    2005-01-01

    An overview is presented of advanced computational tools and methods developed recently for nuclear analyses of Fusion Technology systems such as the experimental device ITER ('International Thermonuclear Experimental Reactor') and the intense neutron source IFMIF ('International Fusion Material Irradiation Facility'). These include Monte Carlo based computational schemes for the calculation of three-dimensional shut-down dose rate distributions, methods, codes and interfaces for the use of CAD geometry models in Monte Carlo transport calculations, algorithms for Monte Carlo based sensitivity/uncertainty calculations, as well as computational techniques and data for IFMIF neutronics and activation calculations. (author)

  4. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  5. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    International Nuclear Information System (INIS)

    Dragt, A.J.; Gluckstern, R.L.

    1992-11-01

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides

  6. Stacking interactions between carbohydrate and protein quantified by combination of theoretical and experimental methods.

    Directory of Open Access Journals (Sweden)

    Michaela Wimmerová

    Carbohydrate-receptor interactions are an integral part of biological events. They play an important role in many cellular processes, such as cell-cell adhesion, cell differentiation and in-cell signaling. Carbohydrates can interact with a receptor through several types of intermolecular interactions. One of the most important is the interaction of a carbohydrate's apolar part with aromatic amino acid residues, known as the dispersion or CH/π interaction. In the study presented here, we attempted for the first time to quantify how the CH/π interaction contributes to the more general carbohydrate-protein interaction. We used a combined approach, creating single and double point mutants experimentally together with high-level computational methods, and applied both to Ralstonia solanacearum (RSL) lectin complexes with α-L-Me-fucoside. Experimentally measured binding affinities were compared with computed carbohydrate-aromatic amino acid residue interaction energies. Experimental binding affinities for the RSL wild type, phenylalanine, and alanine mutants were -8.5, -7.1, and -4.1 kcal·mol⁻¹, respectively. These affinities agree with the computed dispersion interaction energies between carbohydrate and aromatic amino acid residues for the RSL wild type and the phenylalanine mutant, with values of -8.8 and -7.9 kcal·mol⁻¹, but not for the alanine mutant, where the interaction energy was -0.9 kcal·mol⁻¹. Molecular dynamics simulations show that the discrepancy can be caused by the creation of a new hydrogen bond between the α-L-Me-fucoside and RSL. The observed results suggest that in this and similar cases the carbohydrate-receptor interaction can be driven mainly by the dispersion interaction.

  7. Fast calculation method for computer-generated cylindrical holograms.

    Science.gov (United States)

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the far side of a reconstructed object. Some holograms solve this problem: a cylindrical hologram is well known to be viewable over 360 deg. Most cylindrical holograms are optical holograms, and there are few reports of computer-generated cylindrical holograms. This scarcity is because the spatial resolution of output devices is not great enough; we therefore have to make a large hologram or use a small object to satisfy the sampling theorem. In addition, in calculating such a large fringe pattern, the computational load increases in proportion to the hologram size. We therefore propose what we believe to be a new method for fast fringe calculation. We then print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  8. Reconstruction method for fluorescent X-ray computed tomography by least-squares method using singular value decomposition

    Science.gov (United States)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-02-01

    We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to imaging nonradioactive contrast materials in vivo. The principle of FXCT imaging is that of first-generation computed tomography. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of the Tristan Accumulation Ring in KEK, Japan, we studied phantoms with the FXCT method and succeeded in delineating a 4-mm-diameter channel filled with a 500 µg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to projections from a Monte Carlo simulation and from experiment to confirm its effectiveness.
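The SVD-based least-squares step can be sketched generically; the matrix and data below are hypothetical, and the truncation threshold stands in for the stabilisation one would apply to an ill-conditioned discretized measurement equation:

```python
import numpy as np

def lsq_svd(A, b, rcond=1e-10):
    """Solve min ||Ax - b||_2 via the SVD, discarding singular values below
    rcond * s_max to stabilise an ill-conditioned system (hypothetical stand-in
    for the discretized FXCT measurement equation)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]
    # x = V * diag(1/s) * U^T b, restricted to the retained singular triplets
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, -2.0])
x = lsq_svd(A, A @ x_true)   # consistent overdetermined system
```

Truncating small singular values trades a little bias for a large reduction in noise amplification, which matters when the projections are contaminated by photon statistics.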

  9. Mathematical modellings and computational methods for structural analysis of LMFBR's

    International Nuclear Information System (INIS)

    Liu, W.K.; Lam, D.

    1983-01-01

    In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large-scale linear and nonlinear analyses of LMFBRs. For nonlinear fluid-structure interaction problems with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via an 'added mass' approach: in a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to obtain the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. With regard to nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time-consuming for nonlinear problems. Thus, it is desirable to optimize accuracy and computational effort by using a mixed implicit-explicit time integration method. (orig.)

  10. Automated uncertainty analysis methods in the FRAP computer codes

    International Nuclear Information System (INIS)

    Peck, S.O.

    1980-01-01

    A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady-state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in their computed outputs as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts.
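Propagating known input uncertainties to a computed output can be illustrated with a generic Monte Carlo sketch (this is not the FRAP methodology, whose details the abstract does not give; the toy model and input distributions are hypothetical):

```python
import random
import statistics

def propagate(model, input_dists, n=20000, seed=1):
    """Sample each input from N(mu, sigma), run the model, and report the
    sample mean and standard deviation of the output."""
    rng = random.Random(seed)
    outs = [model(*(mu + sigma * rng.gauss(0.0, 1.0) for mu, sigma in input_dists))
            for _ in range(n)]
    return statistics.mean(outs), statistics.stdev(outs)

# Toy model y = a + 2b with independent inputs a ~ N(1, 0.1), b ~ N(3, 0.2):
mean, std = propagate(lambda a, b: a + 2.0 * b, [(1.0, 0.1), (3.0, 0.2)])
```

For this linear toy model the analytic answer is mean 7 and standard deviation sqrt(0.1² + (2·0.2)²) ≈ 0.412, so the sampled estimates can be checked directly.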

  11. A hybrid method for flood simulation in small catchments combining hydrodynamic and hydrological techniques

    Science.gov (United States)

    Bellos, Vasilis; Tsakiris, George

    2016-09-01

    The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically-based two-dimensional hydrodynamic model and the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested on a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones, and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation in substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulation that lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
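The empirical Kostiakov equation used for the infiltration step can be sketched as follows; the coefficients, time step, and hyetograph values are hypothetical, and clipping negative excess to zero is a simplification rather than the paper's loss model:

```python
def kostiakov_rate(t, k=8.0, a=0.5):
    """Kostiakov infiltration rate f(t) = a*k*t**(a-1) (mm/h), the time
    derivative of cumulative infiltration F(t) = k*t**a; k and a are
    hypothetical fitted coefficients and t is in hours."""
    return a * k * t ** (a - 1)

def effective_rainfall(hyetograph, dt=0.25):
    """Subtract Kostiakov infiltration losses from a rainfall hyetograph
    (mm/h sampled every dt hours); negative excess is clipped to zero."""
    return [max(0.0, r - kostiakov_rate((i + 1) * dt))
            for i, r in enumerate(hyetograph)]

excess = effective_rainfall([2.0, 12.0, 20.0, 6.0])
```

The resulting excess-rainfall series is what would be convolved with the FLOW-R2D-derived unit hydrograph to produce the outlet hydrograph.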

  12. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measureme...... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods....

  13. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  14. Methods of physical experiment and installation automation on the base of computers

    International Nuclear Information System (INIS)

    Stupin, Yu.V.

    1983-01-01

    Peculiarities of using computers for physical experiment and installation automation are considered. Systems for data acquisition and processing based on microprocessors, micro- and mini-computers, CAMAC equipment and real-time operating systems, as well as systems intended for the automation of physical experiments on accelerators, laser thermonuclear fusion installations and plasma research installations, are described. The problems of multimachine complexes and multi-user systems, the development of automated systems for collective use, the arrangement of intermachine data exchange and the control of experimental data bases are discussed. Data on software systems used for complex experimental data processing are presented. It is concluded that the application of new computers, combined with the new possibilities offered to users by universal operating systems, substantially increases the efficiency of a scientist's work

  15. Prediction of the Thermal Conductivity of Refrigerants by Computational Methods and Artificial Neural Network.

    Science.gov (United States)

    Ghaderi, Forouzan; Ghaderi, Amir H; Ghaderi, Noushin; Najafi, Bijan

    2017-01-01

    Background: The thermal conductivity of fluids can be calculated by several computational methods. However, these methods are reliable only at confined levels of density, and there is no specific computational method for calculating thermal conductivity over wide ranges of density. Methods: In this paper, two methods, an Artificial Neural Network (ANN) approach and a computational method established upon the Rainwater-Friend theory, were used to predict the value of thermal conductivity in all ranges of density. The thermal conductivity of six refrigerants, R12, R14, R32, R115, R143, and R152, was predicted by these methods and the effectiveness of the models was assessed and compared. Results: The results show that the computational method is a usable method for predicting thermal conductivity at low levels of density. However, the efficiency of this model is considerably reduced in the mid-range of density, meaning that this model cannot be used at density levels higher than 6. On the other hand, the ANN approach is a reliable method for thermal conductivity prediction in all ranges of density. The best accuracy of the ANN is achieved when the number of units in the hidden layer is increased. Conclusion: The results of the computational method indicate that the regular dependence between thermal conductivity and density is eliminated at higher densities, which makes the problem nonlinear. Therefore, analytical approaches are not able to predict thermal conductivity over wide ranges of density. Instead, a nonlinear approach such as an ANN is a valuable method for this purpose.

  16. Comparison of 18F-fluorodeoxyglucose positron emission tomography/computed tomography, hydro-stomach computed tomography, and their combination for detecting primary gastric cancer

    International Nuclear Information System (INIS)

    Jang, Hye Young; Chung, Woo Suk; Song, E Rang; Kim, Jin Suk

    2015-01-01

    To retrospectively compare the diagnostic accuracy for detecting primary gastric cancer on positron emission tomography/computed tomography (PET/CT) and hydro-stomach CT (S-CT), and to determine whether the combination of the two techniques improves diagnostic performance. A total of 253 patients with pathologically proven primary gastric cancer underwent PET/CT and S-CT for preoperative evaluation. Two radiologists independently reviewed the three sets (PET/CT set, S-CT set, and combined set) of PET/CT and S-CT in a random order. They graded the likelihood of the presence of primary gastric cancer on a 4-point scale. The diagnostic accuracies of the PET/CT set, the S-CT set, and the combined set were determined by the area under the alternative-free receiver operating characteristic curve, and sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Diagnostic accuracy, sensitivity, and NPV for detecting all gastric cancers and early gastric cancers (EGCs) were significantly higher with the combined set than with the PET/CT and S-CT sets. Specificity and PPV were significantly higher with the PET/CT set than with the combined and S-CT sets for detecting all gastric cancers and EGCs. The combination of PET/CT and S-CT is more accurate than S-CT alone, particularly for detecting EGCs.

  17. Modeling of the inhomogeneity of grain refinement during combined metal forming process by finite element and cellular automata methods

    Energy Technology Data Exchange (ETDEWEB)

    Majta, Janusz; Madej, Łukasz; Svyetlichnyy, Dmytro S.; Perzyński, Konrad; Kwiecień, Marcin, E-mail: mkwiecie@agh.edu.pl; Muszka, Krzysztof

    2016-08-01

    The potential of the discrete cellular automata technique to predict grain refinement in wires produced using a combined metal forming process is presented and discussed within the paper. The developed combined metal forming process can be treated as one of the Severe Plastic Deformation (SPD) techniques and consists of three different modes of deformation: asymmetric drawing with bending, namely accumulated angular drawing (AAD), wire drawing (WD) and wire flattening (WF). To accurately replicate the complex stress state at both macro and micro scales during subsequent deformations, a two-stage modeling approach was used. First, the Finite Element Method (FEM), implemented in the commercial ABAQUS software, was applied to simulate the entire combined forming process at the macro scale level. Then, based on the FEM results, the Cellular Automata (CA) method was applied for the simulation of grain refinement at the microstructure level. Data transferred between the FEM and CA methods included a set of files with strain tensor components obtained from selected integration points in the macro scale model. As a result of the CA simulation, detailed information on microstructure evolution under severe plastic deformation conditions was obtained, namely: changes of shape and sizes of the modeled representative volume with imposed microstructure, changes of the number of grains, subgrains and dislocation cells, development of the grain boundary angle distribution as well as changes in the pole figures. To evaluate the CA model's predictive capabilities, results of the computer simulation were compared with scanning electron microscopy and electron back scattered diffraction (SEM/EBSD) studies of samples after the AAD+WD+WF process.

  18. A new computational method for reactive power market clearing

    International Nuclear Information System (INIS)

    Zhang, T.; Elkasrawy, A.; Venkatesh, B.

    2009-01-01

    After the deregulation of electricity markets, ancillary services such as reactive power supply are priced separately. However, unlike real power supply, procedures for costing and pricing reactive power supply are still evolving, and spot markets for reactive power do not yet exist. Further, traditional formulations proposed for clearing reactive power markets use a non-linear mixed integer programming formulation that is difficult to solve. This paper proposes a new reactive power supply market clearing scheme. The novelty of this formulation lies in the pricing scheme, which rewards transformers for tap shifting while participating in this market. The proposed model is itself a non-linear mixed integer programming challenge. A significant portion of the manuscript is devoted to the development of a new successive mixed integer linear programming (MILP) technique to solve this formulation. The successive MILP method is computationally robust and fast. The IEEE 6-bus and 300-bus systems are used to test the proposed method. These tests serve to demonstrate the computational speed and rigor of the proposed method. (author)

  19. Energetics of 2- and 3-coumaranone isomers: A combined calorimetric and computational study

    International Nuclear Information System (INIS)

    Sousa, Clara C.S.; Matos, M. Agostinha R.; Santos, Luís M.N.B.F.; Morais, Victor M.F.

    2013-01-01

    Highlights: • Experimental standard molar enthalpies of formation, sublimation of 2- and 3-coumaranone. • Mini-bomb combustion calorimetry, sublimation Calvet microcalorimetry. • DFT methods and high level composite ab initio calculations. • Theoretical estimate of the enthalpy of formation of isobenzofuranone. • Chemical shift (NICS) and the relative stability of the isomers. -- Abstract: Condensed phase standard (p° = 0.1 MPa) molar enthalpies of formation for 2-coumaranone and 3-coumaranone were derived from the standard molar enthalpies of combustion, in oxygen, at T = 298.15 K, measured by mini-bomb combustion calorimetry. Standard molar enthalpies of sublimation of both isomers were determined by Calvet microcalorimetry. These results were combined to derive the standard molar enthalpies of formation of the compounds, in gas phase, at T = 298.15 K. Additionally, accurate quantum chemical calculations have been performed using DFT methods and high level composite ab initio calculations. Theoretical estimates of the enthalpies of formation of the compounds are in good agreement with the experimental values thus supporting the predictions of the same parameters for isobenzofuranone, an isomer which has not been experimentally studied. The relative stability of these isomers has been evaluated by experimental and computational results. The importance of some stabilizing electronic intramolecular interactions has been studied and quantitatively evaluated through Natural Bonding Orbital (NBO) analysis of the wave functions and the nucleus independent chemical shift (NICS) of the studied systems have been calculated in order to study and establish the effect of electronic delocalization upon the relative stability of the isomers

  20. Computer-Aided Modeling Framework

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    Models are playing important roles in design and analysis of chemicals based products and the processes that manufacture them. Computer-aided methods and tools have the potential to reduce the number of experiments, which can be expensive and time consuming, and there is a benefit of working...... development and application. The proposed work is a part of the project for development of methods and tools that will allow systematic generation, analysis and solution of models for various objectives. It will use the computer-aided modeling framework that is based on a modeling methodology, which combines....... In this contribution, the concept of template-based modeling is presented and application is highlighted for the specific case of catalytic membrane fixed bed models. The modeling template is integrated in a generic computer-aided modeling framework. Furthermore, modeling templates enable the idea of model reuse...

  1. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    Computer-generated holography (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To solve these problems, a depth compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth compensating factor is defined and used for calculating the holograms of points at different depth positions, instead of layer-based methods. The proposed method is suitable for arbitrary sampling objects, with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.
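    The zone-plate structure underlying such point-source CGH computation can be sketched as a superposition of spherical-wave phases, with the XY distance table built once and reused per depth plane (echoing the LUT idea); the wavelength, pixel pitch and grid size below are assumptions, and this is not the paper's optimized GPU algorithm:

```python
import numpy as np

# Point-source CGH sketch: superpose spherical waves on the hologram plane.
wavelength = 532e-9   # m (assumed)
pitch = 8e-6          # hologram pixel pitch in m (assumed)
N = 64                # N x N hologram

ys, xs = np.meshgrid(np.arange(N) * pitch, np.arange(N) * pitch, indexing="ij")

def point_hologram(points):
    """Accumulate the complex field of (x, y, z) object points (z > 0)."""
    field = np.zeros((N, N), dtype=complex)
    for px, py, pz in points:
        r_xy2 = (xs - px) ** 2 + (ys - py) ** 2   # XY table, reusable per depth
        r = np.sqrt(r_xy2 + pz ** 2)              # full 3-D distance
        field += np.exp(2j * np.pi * r / wavelength) / r
    return field

# One on-axis point 10 cm away yields a Fresnel zone plate pattern.
holo = point_hologram([(N * pitch / 2, N * pitch / 2, 0.1)])
phase = np.angle(holo)
```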

  2. Computational Biochemistry-Enzyme Mechanisms Explored.

    Science.gov (United States)

    Culka, Martin; Gisdon, Florian J; Ullmann, G Matthias

    2017-01-01

    Understanding enzyme mechanisms is a major task in comprehending how living cells work. Recent advances in biomolecular research provide a huge amount of data on enzyme kinetics and structure. The analysis of diverse experimental results and their combination into an overall picture is, however, often challenging. Microscopic details of enzymatic processes are often anticipated based on several hints from macroscopic experimental data. Computational biochemistry aims at the creation of a computational model of an enzyme in order to explain the microscopic details of the catalytic process and to reproduce or predict macroscopic experimental findings. The results of such computations are in part complementary to experimental data and provide an explanation of a biochemical process at the microscopic level. In order to evaluate the mechanism of an enzyme, a structural model is constructed which can be analyzed by several theoretical approaches. Several simulation methods can and should be combined to get a reliable picture of the process of interest. Furthermore, abstract models of biological systems can be constructed combining computational and experimental data. In this review, we discuss structural computational models of enzymatic systems. We first discuss various models to simulate enzyme catalysis. Furthermore, we review various approaches to characterizing the enzyme mechanism both qualitatively and quantitatively using different modeling approaches. © 2017 Elsevier Inc. All rights reserved.

  3. Combined surface and volumetric occlusion shading

    KAUST Repository

    Schott, Matthias O.; Martin, Tobias; Grosset, A. V Pascal; Brownlee, Carson; Hollt, Thomas; Brown, Benjamin P.; Smith, Sean T.; Hansen, Charles D.

    2012-01-01

    In this paper, a method for interactive direct volume rendering is proposed that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The proposed algorithm extends the recently proposed Directional Occlusion Shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. © 2012 IEEE.

  5. Moment matrices, border bases and radical computation

    NARCIS (Netherlands)

    B. Mourrain; J.B. Lasserre; M. Laurent (Monique); P. Rostalski; P. Trebuchet (Philippe)

    2013-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and

  7. Co-design of RAD and ETHICS methodologies: a combination of information system development methods

    Science.gov (United States)

    Nasehi, Arezo; Shahriyari, Salman

    2011-12-01

    Co-design is a new trend in the social world which tries to capture different ideas in order to use the most appropriate features for a system. In this paper, the co-design of two information system methodologies is considered: rapid application development (RAD) and effective technical and human implementation of computer-based systems (ETHICS). We considered the characteristics of these methodologies to assess the possibility of a co-design or combination of them for developing an information system. To reach this purpose, four different aspects are analyzed: social or technical approach, user participation and user involvement, job satisfaction, and overcoming change resistance. Finally, a case study using the quantitative method is analyzed in order to examine the possibility of co-design using these factors. The paper concludes that RAD and ETHICS are appropriate to be co-designed and offers some suggestions for the co-design.

  8. Ensemble approach combining multiple methods improves human transcription start site prediction.

    LENUS (Irish Health Repository)

    Dineen, David G

    2010-01-01

    The computational prediction of transcription start sites is an important unsolved problem. Some recent progress has been made, but many promoters, particularly those not associated with CpG islands, are still difficult to locate using current methods. These methods use different features and training sets, along with a variety of machine learning techniques and result in different prediction sets.

  9. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    The last years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  10. Thermoelectricity analogy method for computing the periodic heat transfer in external building envelopes

    International Nuclear Information System (INIS)

    Peng Changhai; Wu Zhishen

    2008-01-01

    Simple and effective computation methods are needed to calculate energy efficiency in buildings for building thermal comfort and HVAC system simulations. This paper, which is based upon the theory of thermoelectricity analogy, develops a new harmonic method, the thermoelectricity analogy method (TEAM), to compute the periodic heat transfer in external building envelopes (EBE). It presents, in detail, the principles and specific techniques of TEAM to calculate both the decay rates and time lags of EBE. First, a set of linear equations is established using the theory of thermoelectricity analogy. Second, the temperature of each node is calculated by solving the linear equations set. Finally, decay rates and time lags are found by solving simple mathematical expressions. Comparisons show that this method is highly accurate and efficient. Moreover, relative to the existing harmonic methods, which are based on the classical control theory and the method of separation of variables, TEAM does not require complicated derivation and is amenable to hand computation and programming
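    The thermoelectricity analogy can be sketched as a complex-valued RC-network solve: each wall slice becomes a resistance and a capacitance, a unit sinusoid drives the outdoor face, and the decay rate and time lag follow from the complex temperature at the inner node. The wall properties below are illustrative, and the discretization is a generic one rather than TEAM's specific network:

```python
import numpy as np

# One homogeneous wall as a chain of thermal resistances and capacitances,
# driven by a unit-amplitude 24 h sinusoid outdoors and an isothermal indoor
# face. Material properties are illustrative.
L, k, rho, cp = 0.2, 1.0, 2000.0, 900.0   # m, W/(m K), kg/m3, J/(kg K)
n = 50                                     # interior nodes
dx = L / n
R = dx / k                                 # slice resistance (per unit area)
C = rho * cp * dx                          # slice capacitance (per unit area)
omega = 2.0 * np.pi / 86400.0              # 24 h period

# Complex nodal balance: (T[i-1] - T[i])/R + (T[i+1] - T[i])/R = j*w*C*T[i]
A = np.zeros((n, n), dtype=complex)
b = np.zeros(n, dtype=complex)
for i in range(n):
    A[i, i] = -2.0 / R - 1j * omega * C
    if i > 0:
        A[i, i - 1] = 1.0 / R
    if i < n - 1:
        A[i, i + 1] = 1.0 / R
b[0] = -1.0 / R       # outdoor boundary: unit-amplitude complex sinusoid
                      # indoor boundary is held at zero, so b[-1] stays 0

T = np.linalg.solve(A, b)                  # complex nodal temperatures
inner = T[-1]
decay_rate = 1.0 / abs(inner)              # amplitude attenuation factor
time_lag_h = -np.angle(inner) / omega / 3600.0
```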

  11. A Fuzzy Integral Ensemble Method in Visual P300 Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Francesco Cavrini

    2016-01-01

    Full Text Available We evaluate the possibility of application of combination of classifiers using fuzzy measures and integrals to Brain-Computer Interface (BCI based on electroencephalography. In particular, we present an ensemble method that can be applied to a variety of systems and evaluate it in the context of a visual P300-based BCI. Offline analysis of data relative to 5 subjects lets us argue that the proposed classification strategy is suitable for BCI. Indeed, the achieved performance is significantly greater than the average of the base classifiers and, broadly speaking, similar to that of the best one. Thus the proposed methodology allows realizing systems that can be used by different subjects without the need for a preliminary configuration phase in which the best classifier for each user has to be identified. Moreover, the ensemble is often capable of detecting uncertain situations and turning them from misclassifications into abstentions, thereby improving the level of safety in BCI for environmental or device control.
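    A discrete Choquet integral is one common fuzzy-integral way to fuse classifier confidences; the sketch below uses a made-up fuzzy measure rather than anything learned from BCI data, and the record does not specify which fuzzy integral its ensemble uses:

```python
# Discrete Choquet integral over classifier confidence scores.
def choquet(scores, measure):
    """scores: {classifier: confidence in [0, 1]};
    measure: {frozenset of classifiers: weight}, monotone, with
    measure[frozenset()] == 0 and measure of the full set == 1."""
    items = sorted(scores, key=scores.get)       # ascending confidence
    total, prev = 0.0, 0.0
    for i, clf in enumerate(items):
        coalition = frozenset(items[i:])         # classifiers scoring >= this
        total += (scores[clf] - prev) * measure[coalition]
        prev = scores[clf]
    return total

# A made-up fuzzy measure over three hypothetical base classifiers.
mu = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.3, frozenset({"b"}): 0.4, frozenset({"c"}): 0.5,
    frozenset({"a", "b"}): 0.6, frozenset({"a", "c"}): 0.7,
    frozenset({"b", "c"}): 0.8, frozenset({"a", "b", "c"}): 1.0,
}
fused = choquet({"a": 0.2, "b": 0.9, "c": 0.6}, mu)   # lies in [0, 1]
```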

  12. Computational chemical product design problems under property uncertainties

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Cignitti, Stefano; Abildskov, Jens

    2017-01-01

    Three different strategies of how to combine computational chemical product design with Monte Carlo based methods for uncertainty analysis of chemical properties are outlined. One method consists of a computer-aided molecular design (CAMD) solution and a post-processing property uncertainty...... fluid design. While the higher end of the uncertainty range of the process model output is similar for the best performing fluids, the lower end of the uncertainty range differs largely....

  13. Combining Fog Computing with Sensor Mote Machine Learning for Industrial IoT.

    Science.gov (United States)

    Lavassani, Mehrzad; Forsström, Stefan; Jennehag, Ulf; Zhang, Tingting

    2018-05-12

    Digitalization is a global trend becoming ever more important to our connected and sustainable society. This trend also affects industry where the Industrial Internet of Things is an important part, and there is a need to conserve spectrum as well as energy when communicating data to a fog or cloud back-end system. In this paper we investigate the benefits of fog computing by proposing a novel distributed learning model on the sensor device and simulating the data stream in the fog, instead of transmitting all raw sensor values to the cloud back-end. To save energy and to communicate as few packets as possible, the updated parameters of the learned model at the sensor device are communicated in longer time intervals to a fog computing system. The proposed framework is implemented and tested in a real world testbed in order to make quantitative measurements and evaluate the system. Our results show that the proposed model can achieve a 98% decrease in the number of packets sent over the wireless link, and the fog node can still simulate the data stream with an acceptable accuracy of 97%. We also observe an end-to-end delay of 180 ms in our proposed three-layer framework. Hence, the framework shows that a combination of fog and cloud computing with a distributed data modeling at the sensor device for wireless sensor networks can be beneficial for Industrial Internet of Things applications.
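    The core idea, learning a cheap model on the node and uploading only its parameters at long intervals, can be sketched with a toy stream and Welford's online mean/variance update (the sensor data and upload interval below are assumptions, not the paper's testbed):

```python
import random

# Toy sensor stream; in the paper's setting these would be real measurements.
random.seed(0)
stream = [20.0 + random.gauss(0.0, 0.5) for _ in range(1000)]

INTERVAL = 50                # upload model parameters every 50 samples
packets_raw = len(stream)    # baseline: one packet per raw sample
packets_fog = 0
n = mean = m2 = 0.0

for i, x in enumerate(stream, start=1):
    n += 1                   # Welford's online mean/variance update
    delta = x - mean
    mean += delta / n
    m2 += delta * (x - mean)
    if i % INTERVAL == 0:    # periodic parameter upload to the fog node,
        params = (mean, (m2 / n) ** 0.5)   # which re-simulates the stream
        packets_fog += 1

reduction = 1.0 - packets_fog / packets_raw   # fraction of packets saved
```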

  14. Oxygen Distributions-Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network.

    Directory of Open Access Journals (Sweden)

    Jakob H Lagerlöf

    Full Text Available To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution.A vessel tree structure, and an associated tumour of 127 cm3, were generated, using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM and an individual tree method (ITM. Five tumour sub-sections were compared, to evaluate the methods.The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD≈0.02 than the distributions of different samples using CTM (0.001< RMSD<0.01. The deviations of ITM from CTM increase with lower oxygen values, resulting in ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov Smirnov (KS tests showed that millimetre-scale samples may not represent the whole.The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it leads to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, why evaluation should be made using high resolution and the CTM, applied to the entire tumour.

  15. Moment-based method for computing the two-dimensional discrete Hartley transform

    Science.gov (United States)

    Dong, Zhifang; Wu, Jiasong; Shu, Huazhong

    2009-10-01

    In this paper, we present a fast algorithm for computing the two-dimensional (2-D) discrete Hartley transform (DHT). By using kernel transform and Taylor expansion, the 2-D DHT is approximated by a linear sum of 2-D geometric moments. This enables us to use the fast algorithms developed for computing the 2-D moments to efficiently calculate the 2-D DHT. The proposed method achieves a simple computational structure and is suitable to deal with any sequence lengths.
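    For reference, the exact 2-D DHT that the moment-based algorithm approximates can be written directly with the cas kernel, cas(t) = cos(t) + sin(t); the brute-force implementation below is only a correctness baseline, not the fast moment-based scheme:

```python
import numpy as np

# Brute-force 2-D DHT with kernel cas(2*pi*(u*x/M + v*y/N)).
def dht2(f):
    M, N = f.shape
    u = np.arange(M).reshape(M, 1, 1, 1)
    v = np.arange(N).reshape(1, N, 1, 1)
    x = np.arange(M).reshape(1, 1, M, 1)
    y = np.arange(N).reshape(1, 1, 1, N)
    t = 2.0 * np.pi * (u * x / M + v * y / N)
    return np.einsum("uvxy,xy->uv", np.cos(t) + np.sin(t), f)

f = np.random.default_rng(0).random((8, 8))
H = dht2(f)
# The transform is its own inverse up to the factor M*N.
recovered = dht2(H) / (8 * 8)
```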

  16. The failure combination method: presentation, application to a simple collection of systems

    International Nuclear Information System (INIS)

    Llory, M.; Villemeur, A.

    1981-11-01

    The main advantages of this particular method for analyzing the reliability and safety of systems, the method of failure combinations, are presented. This is an inductive method of analysis; it makes it possible to pursue the Failure Modes and Effects Analysis (FMEA) until overall failures are obtained. In this manner, through an inductive approach, all the combinations of failure modes leading to abnormal functioning of systems are obtained. It also makes it possible to carry out the overall study of complex systems in interaction and the systematic inventory of abnormal functioning of these systems, starting from the failure modes of the components and their combinations. It can be used from the design stage of systems onwards and is an excellent dialogue tool between the various specialists concerned with problems of safety, operation and reliability [fr

  17. A virtual component method in numerical computation of cascades for isotope separation

    International Nuclear Information System (INIS)

    Zeng Shi; Cheng Lu

    2014-01-01

    The analysis, optimization, design and operation of cascades for isotope separation involve computations of cascades. In the analytical analysis of cascades, the use of virtual components is a very useful method. For complicated cascades, numerical analysis has to be employed. However, bound by the conventional idea that the concentration of a virtual component should be vanishingly small, virtual components have not yet been applied to numerical computations. Here, a method for introducing virtual components into numerical computations is elucidated, and its application to a few types of cascades is explained and tested by means of numerical experiments. The results show that the concentration of a virtual component is not restrained at all by the 'vanishingly small' idea. For the same requirements on cascades, the cascades obtained do not depend on the concentrations of virtual components. (authors)

  18. Method for Statically Checking an Object-oriented Computer Program Module

    Science.gov (United States)

    Bierhoff, Kevin M. (Inventor); Aldrich, Jonathan (Inventor)

    2012-01-01

    A method for statically checking an object-oriented computer program module includes the step of identifying objects within a computer program module, at least one of the objects having a plurality of references thereto, possibly from multiple clients. A discipline of permissions is imposed on the objects identified within the computer program module. The permissions enable tracking, from among a discrete set of changeable states, a subset of states each object might be in. A determination is made regarding whether the imposed permissions are violated by a potential reference to any of the identified objects. The results of the determination are output to a user.

  19. Combined computational and experimental study of Ar beam induced defect formation in graphite

    International Nuclear Information System (INIS)

    Pregler, Sharon K.; Hayakawa, Tetsuichiro; Yasumatsu, Hisato; Kondow, Tamotsu; Sinnott, Susan B.

    2007-01-01

    Irradiation of graphite, commonly used in nuclear power plants, is known to produce structural damage. Here, experimental and computational methods are used to study defect formation in graphite during Ar irradiation at an incident energy of 50 eV. The experimental samples are analyzed with scanning tunneling microscopy to quantify the size distribution of the defects that form. The computational approach is classical molecular dynamics simulation, which illustrates the mechanisms by which the defects are produced. The results indicate that defects in graphite grow in concentrated areas and are nucleated by the presence of existing defects.

  20. An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Randal Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-22

    CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.

  1. An Accurate Method for Computing the Absorption of Solar Radiation by Water Vapor

    Science.gov (United States)

    Chou, M. D.

    1980-01-01

    The method is based upon molecular line parameters and makes use of a far wing scaling approximation and k distribution approach previously applied to the computation of the infrared cooling rate due to water vapor. Taking into account the wave number dependence of the incident solar flux, the solar heating rate is computed for the entire water vapor spectrum and for individual absorption bands. The accuracy of the method is tested against line by line calculations. The method introduces a maximum error of 0.06 °C/day. The method has the additional advantage over previous methods in that it can be applied to any portion of the spectral region containing the water vapor bands. The integrated absorptances and line intensities computed from the molecular line parameters were compared with laboratory measurements. The comparison reveals that, among the three different sources, absorptance is the largest for the laboratory measurements.

  2. A substructure method to compute the 3D fluid-structure interaction during blowdown

    International Nuclear Information System (INIS)

    Guilbaud, D.; Axisa, F.; Gantenbein, F.; Gibert, R.J.

    1983-08-01

    The waves generated by a sudden rupture of a PWR primary pipe have an important mechanical effect on the internal structures of the vessel, and this fluid-structure interaction has a strong 3D character. 3D finite element explicit methods can be applied; they take the nonlinearities of the problem into account, but the calculation is heavy and expensive. We describe in this paper another type of method based on a substructure procedure: the vessel, internals and contained fluid are described axisymmetrically (AQUAMODE computer code), while the pipes and contained fluid are described one-dimensionally (TEDEL-FLUIDE computer code). These substructures are characterized by their natural modes and then connected to one another (connection of both structural and fluid nodes) by the TRISTANA computer code. This method makes it possible to compute the 3D fluid-structure effects correctly and cheaply. The treatment of certain nonlinearities is difficult because of the modal characterization of the substructures; however, variations of contact conditions versus time can be introduced. We present here some validation tests and comparisons with experimental results from the literature.

  3. Computation of mode eigenfunctions in graded-index optical fibers by the propagating beam method

    International Nuclear Information System (INIS)

    Feit, M.D.; Fleck, J.A. Jr.

    1980-01-01

    The propagating beam method utilizes discrete Fourier transforms for generating configuration-space solutions to optical waveguide problems without reference to modes. The propagating beam method can also give a complete description of the field in terms of modes by a Fourier analysis with respect to axial distance of the computed fields. Earlier work dealt with the accurate determination of mode propagation constants and group delays. In this paper the method is extended to the computation of mode eigenfunctions. The method is efficient, allowing generation of a large number of eigenfunctions from a single propagation run. Computations for parabolic-index profiles show excellent agreement between analytic and numerically generated eigenfunctions

  4. Computer-Based Job and Occupational Data Collection Methods: Feasibility Study

    National Research Council Canada - National Science Library

    Mitchell, Judith I

    1998-01-01

    .... The feasibility study was conducted to assess the operational and logistical problems involved with the development, implementation, and evaluation of computer-based job and occupational data collection methods...

  5. Solution of 3D inverse scattering problems by combined inverse equivalent current and finite element methods

    International Nuclear Information System (INIS)

    Kılıç, Emre; Eibert, Thomas F.

    2015-01-01

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and the Poynting's theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained

  6. Solution of 3D inverse scattering problems by combined inverse equivalent current and finite element methods

    Energy Technology Data Exchange (ETDEWEB)

    Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.

    2015-05-01

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and the Poynting's theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.

  7. Research on the Teaching System of the University Computer Foundation

    Directory of Open Access Journals (Sweden)

    Ji Xiaoyun

    2016-01-01

    Full Text Available In college basic computer course teaching, according to the specific circumstances and needs of different students and their professional backgrounds, the teaching contents are classified and hierarchical teaching methods are combined with professional-level training; comprehensive after-class training is promoted for top-notch students; and an online Q & A and test platform is established. These measures strengthen the integration of professional education and computer education, support the study and exploration of a training system for the college basic computer course and the popularization and application of the basic programming course, promote the cultivation of university students' computer foundations, thinking methods and innovative practice ability, and achieve the goal of individualized education.

  8. Moment matrices, border bases and radical computation

    NARCIS (Netherlands)

    Lasserre, J.B.; Laurent, M.; Mourrain, B.; Rostalski, P.; Trébuchet, P.

    2013-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and semi-definite programming.

  9. Computer-aided methods of determining thyristor thermal transients

    International Nuclear Information System (INIS)

    Lu, E.; Bronner, G.

    1988-08-01

    An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs
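
    The convolution integral method mentioned in this record lends itself to a compact sketch: the junction temperature rise is the discrete convolution of a piecewise-constant power-loss history with increments of the transient thermal impedance curve. The Foster-network parameters and load profile below are illustrative placeholders, not values from the paper.

```python
# Convolution-integral sketch: junction temperature rise as the discrete
# convolution of a piecewise-constant power history with increments of the
# transient thermal impedance Zth.  Zth is modeled as a two-element Foster
# network; all parameter values are illustrative, not device data.

import math

R_TH = [0.010, 0.025]    # K/W, Foster-network thermal resistances (hypothetical)
TAU = [0.05, 0.8]        # s, matching thermal time constants (hypothetical)

def zth(t):
    """Transient thermal impedance (K/W) of the Foster network at time t >= 0."""
    return sum(r * (1.0 - math.exp(-t / tau)) for r, tau in zip(R_TH, TAU))

def junction_rise(power, dt):
    """Temperature rise at the end of a piecewise-constant power history.

    power[k] is the dissipation (W) on the interval [k*dt, (k+1)*dt).
    """
    t_end = len(power) * dt
    return sum(
        p * (zth(t_end - k * dt) - zth(t_end - (k + 1) * dt))
        for k, p in enumerate(power)
    )

# A constant 1 kW load applied long enough to reach steady state: the rise
# approaches P * sum(R_TH) = 1000 * 0.035 = 35 K.
print(round(junction_rise([1000.0] * 2000, dt=0.01), 2))   # prints 35.0
```

    For a pulsed load, the same routine applies unchanged; only the `power` list changes, which is what makes the approach adaptable "to almost any loading case".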

  10. CT findings of pancreatic carcinoma. Evaluation with the combined method of early enhancement CT and high dose enhancement CT

    International Nuclear Information System (INIS)

    Itoh, Shigeki; Endo, Tokiko; Isomura, Takayuki; Ishigaki, Takeo; Ikeda, Mitsuru; Senda, Kouhei.

    1995-01-01

    Computed tomographic (CT) findings of pancreatic ductal adenocarcinoma were studied in 72 carcinomas with the combined method of early enhancement CT and high dose enhancement CT. Common findings were a change in pancreatic contour, abnormal attenuation in the tumor and dilatation of the main pancreatic duct. The incidence of abnormal attenuation and of dilatation of the main pancreatic duct and bile duct was constant regardless of tumor size. The finding of hypoattenuation at early enhancement CT was the most useful for demonstrating a carcinoma. However, this finding was negative in ten cases, five of which showed inhomogeneous hyperattenuation at high dose enhancement CT. The detection of a change in pancreatic contour and of dilatation of the main pancreatic duct was most frequent at high dose enhancement CT. A change in pancreatic contour and/or abnormal attenuation in the tumor could be detected in 47 cases at plain CT, 66 at early enhancement CT and 65 at high dose enhancement CT. Since the four cases in which neither finding was detected by any CT method showed a dilated main pancreatic duct, there was no case without abnormal CT findings. This combined CT method should be a reliable diagnostic technique in the imaging of pancreatic carcinoma. (author)

  11. The stress and stress intensity factors computation by BEM and FEM combination for nozzle junction under pressure and thermal loads

    International Nuclear Information System (INIS)

    Du, Q.; Cen, Z.; Zhu, H.

    1989-01-01

    This paper reports linear elastic fracture analysis based upon stress intensity factor evaluation, successfully applied to safety assessments of cracked structures. Nozzle junctions are usually subjected to high pressure and thermal loads simultaneously. Within the validity of linear elastic fracture analysis, K can be decomposed into K_P (caused by mechanical loads) and K_τ (caused by thermal loads). Under thermal transient loading, explicit analysis (say by the FEM or BEM) tracing K over an entire history for a range of crack depths may be very time consuming. Weight function techniques provide an efficient means of transforming the problem into the stress computation of the uncracked structure and the generation of influence functions (for the given structure and crack size). In this paper, a combination of the BEM and FEM has been used for the analysis of the cracked nozzle structure by weight function techniques: the influence functions are obtained by the coupled BEM-FEM, and the stresses in the uncracked structure are computed by the finite element method.

  12. Choosing Learning Methods Suitable for Teaching and Learning in Computer Science

    Science.gov (United States)

    Taylor, Estelle; Breed, Marnus; Hauman, Ilette; Homann, Armando

    2013-01-01

    Our aim is to determine which teaching methods students in Computer Science and Information Systems prefer. There are in total 5 different paradigms (behaviorism, cognitivism, constructivism, design-based and humanism) with 32 models between them. Each model is unique and states different learning methods. Recommendations are made on methods that…

  13. Reliability of Lyapunov characteristic exponents computed by the two-particle method

    Science.gov (United States)

    Mei, Lijie; Huang, Li

    2018-03-01

    For highly complex problems, such as the post-Newtonian formulation of compact binaries, the two-particle method may be a better, or even the only, choice for computing the Lyapunov characteristic exponent (LCE). This method avoids the complex calculation of variational equations required by the variational method. However, the two-particle method sometimes provides spurious estimates of LCEs. In this paper, we first analyze the equivalence of the definition of the LCE between the variational and two-particle methods for Hamiltonian systems. Then, we develop a criterion to determine the reliability of LCEs computed by the two-particle method by considering the magnitude of the initial tangent (or separation) vector ξ0 (or δ0), the renormalization time interval τ, the machine precision ε, and the global truncation error ɛT. The reliable Lyapunov characteristic indicators estimated by the two-particle method form a V-shaped region, which is restricted by δ0, ε, and ɛT. Finally, numerical experiments with the Hénon-Heiles system, spinning compact binaries, and the post-Newtonian circular restricted three-body problem strongly support the theoretical results.
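
    As a minimal illustration of the two-particle idea (not the paper's post-Newtonian setting), the sketch below estimates the largest LCE of the logistic map at r = 4 by propagating a reference and a shadow orbit, renormalizing their separation back to δ0 after every step, and averaging the logarithmic growth; the analytic value ln 2 serves as a check. The seed, δ0 and step counts are arbitrary choices.

```python
# Two-particle estimate of the largest Lyapunov exponent of x -> 4x(1-x).
# A reference and a shadow orbit are iterated together; the separation is
# renormalized to delta0 after each step and the exponent is the mean
# logarithmic growth per step.

import math

def lce_two_particle(x0, delta0=1e-7, n_steps=200_000, n_transient=1000):
    """Largest Lyapunov exponent of the logistic map via two nearby orbits."""
    f = lambda x: 4.0 * x * (1.0 - x)
    x = x0
    for _ in range(n_transient):        # let the reference orbit settle
        x = f(x)
    y = x + delta0                      # shadow orbit at fixed offset
    log_sum, count = 0.0, 0
    for _ in range(n_steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        if d > 0.0:                     # guard against float coincidence
            log_sum += math.log(d / delta0)
            count += 1
        # renormalize: pull the shadow orbit back to distance delta0
        y = x + (delta0 if y >= x else -delta0)
    return log_sum / count

lce = lce_two_particle(0.3)
print(f"estimated LCE = {lce:.3f} (exact: ln 2 = {math.log(2):.3f})")
```

    The paper's reliability criterion is visible even here: pushing `delta0` toward machine precision, or choosing it too large relative to the attractor size, degrades the estimate.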

  14. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain motivated situation control system for complex technical system behavior. The conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on the nondistinct theories of physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on the parameters affecting education, such as the reinforcement value and the time between the stimulus, the action and the reinforcement. The change of the contextual links between situational elements during use is also formalized. Examples and results are given from computer instruction experiments with the robot device "LEGO MINDSTORMS NXT", equipped with ultrasonic distance, touch and light sensors.

  15. Application of computational aerodynamics methods to the design and analysis of transport aircraft

    Science.gov (United States)

    Da Costa, A. L.

    1978-01-01

    The application and validation of several computational aerodynamic methods in the design and analysis of transport aircraft are established. An assessment is made concerning more recently developed methods that solve three-dimensional transonic flow and boundary layers on wings. Capabilities of subsonic aerodynamic methods are demonstrated by several design and analysis efforts. Among the examples cited are the B747 Space Shuttle Carrier Aircraft analysis, nacelle integration for transport aircraft, and winglet optimization. The accuracy and applicability of a new three-dimensional viscous transonic method is demonstrated by comparison of computed results to experimental data.

  16. Computational methods using weighed-extreme learning machine to predict protein self-interactions with protein evolutionary information.

    Science.gov (United States)

    An, Ji-Yong; Zhang, Lei; Zhou, Yong; Zhao, Yu-Jun; Wang, Da-Fu

    2017-08-18

    Self-interacting proteins (SIPs) are important for their biological activity owing to the inherent interaction amongst their secondary structures or domains. However, due to the limitations of experimental self-interaction detection, one major challenge in the study of SIP prediction is how to exploit computational approaches for SIP detection based on the evolutionary information contained in protein sequences. In this work, we presented a novel computational approach named WELM-LAG, which combined the Weighed-Extreme Learning Machine (WELM) classifier with Local Average Group (LAG) to predict SIPs based on protein sequence. The major improvement of our method lies in presenting an effective feature extraction method used to represent candidate self-interacting proteins by exploring the evolutionary information embedded in the PSI-BLAST-constructed position specific scoring matrix (PSSM), and then employing a reliable and robust WELM classifier to carry out classification. In addition, the Principal Component Analysis (PCA) approach is used to reduce the impact of noise. The WELM-LAG method gave very high average accuracies of 92.94 and 96.74% on yeast and human datasets, respectively. Meanwhile, we compared it with the state-of-the-art support vector machine (SVM) classifier and other existing methods on human and yeast datasets, respectively. Comparative results indicated that our approach is very promising and may provide a cost-effective alternative for predicting SIPs. In addition, we developed a freely available web server called WELM-LAG-SIPs to predict SIPs. The web server is available at http://219.219.62.123:8888/WELMLAG/.

  17. Grid computing for LHC and methods for W boson mass measurement at CMS

    International Nuclear Information System (INIS)

    Jung, Christopher

    2007-01-01

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  18. Grid computing for LHC and methods for W boson mass measurement at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Christopher

    2007-12-14

    Two methods for measuring the W boson mass with the CMS detector have been presented in this thesis. Both methods use similarities between W boson and Z boson decays. Their statistical and systematic precisions have been determined for W → μν; the statistics corresponds to one inverse femtobarn of data. A large number of events needed to be simulated for this analysis; it was not possible to use the full simulation software because of the enormous computing time which would have been needed. Instead, a fast simulation tool for the CMS detector was used. Still, the computing requirements for the fast simulation exceeded the capacity of the local compute cluster. Since the data taken and processed at the LHC will be extremely large, the LHC experiments rely on the emerging grid computing tools. The computing capabilities of the grid have been used for simulating all physics events needed for this thesis. To achieve this, the local compute cluster had to be integrated into the grid and the administration of the grid components had to be secured. As this was the first installation of its kind, several contributions to grid training events could be made: courses on grid installation, administration and grid-enabled applications were given. The two methods for the W mass measurement are the morphing method and the scaling method. The morphing method relies on an analytical transformation of Z boson events into W boson events and determines the W boson mass by comparing the transverse mass distributions; the scaling method relies on scaled observables from W boson and Z boson events, e.g. the transverse muon momentum as studied in this thesis. In both cases, a re-weighting technique applied to Monte Carlo generated events is used to take into account different selection cuts, detector acceptances, and differences in production and decay of W boson and Z boson events. (orig.)

  19. Computer science: Data analysis meets quantum physics

    Science.gov (United States)

    Schramm, Steven

    2017-10-01

    A technique that combines machine learning and quantum computing has been used to identify the particles known as Higgs bosons. The method could find applications in many areas of science. See Letter p.375

  20. Simulation of computational fluid dynamics and comparison of cephalosporin C fermentation performance with different impeller combinations

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Shengbing; Ni, Weijia; Luo, Hongzhen; Shi, Zhongping; Liu, Fan [Jiangnan University, Wuxi (China); Yuan, Guoqiang; Zhao, Yanli [CSPC Hebei Zhongrun Pharmaceutical Co. Ltd., Shijiazhuang (China)

    2013-05-15

    Cephalosporin C (CPC) fermentation by Acremonium chrysogenum is an extremely high oxygen-consuming process and the oxygen transfer rate in a bioreactor directly affects fermentation performance. In this study, fluid dynamics and oxygen transfer in a 7 L bioreactor with different impeller combinations were simulated by a computational fluid dynamics (CFD) model. Based on the simulation results, two impeller combinations with a higher oxygen transfer rate (K_La) were selected to conduct CPC fermentations, aiming at achieving high CPC concentration and low accumulation of the major by-product, deacetoxycephalosporin (DAOC). It was found that an impeller combination with a higher K_La and moderate shear force is the prerequisite for efficient CPC production in a stirred bioreactor. The best impeller combination, which installed a six-bladed turbine and a four-pitched-blade turbine at the bottom and upper layers but with a shortened impeller inter-distance, produced the highest CPC concentration of 35.77 g/L and the lowest DAOC/CPC ratio of 0.5%.

  1. Simulation of computational fluid dynamics and comparison of cephalosporin C fermentation performance with different impeller combinations

    International Nuclear Information System (INIS)

    Duan, Shengbing; Ni, Weijia; Luo, Hongzhen; Shi, Zhongping; Liu, Fan; Yuan, Guoqiang; Zhao, Yanli

    2013-01-01

    Cephalosporin C (CPC) fermentation by Acremonium chrysogenum is an extremely high oxygen-consuming process and the oxygen transfer rate in a bioreactor directly affects fermentation performance. In this study, fluid dynamics and oxygen transfer in a 7 L bioreactor with different impeller combinations were simulated by a computational fluid dynamics (CFD) model. Based on the simulation results, two impeller combinations with a higher oxygen transfer rate (K_La) were selected to conduct CPC fermentations, aiming at achieving high CPC concentration and low accumulation of the major by-product, deacetoxycephalosporin (DAOC). It was found that an impeller combination with a higher K_La and moderate shear force is the prerequisite for efficient CPC production in a stirred bioreactor. The best impeller combination, which installed a six-bladed turbine and a four-pitched-blade turbine at the bottom and upper layers but with a shortened impeller inter-distance, produced the highest CPC concentration of 35.77 g/L and the lowest DAOC/CPC ratio of 0.5%.

  2. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigate the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into true multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.

  3. Combined use of nanocarriers and physical methods for percutaneous penetration enhancement.

    Science.gov (United States)

    Dragicevic, Nina; Maibach, Howard

    2018-02-06

    Dermal and transdermal drug delivery (owing to their non-invasiveness, avoidance of first-pass metabolism, control of the rate of drug input over a prolonged time, etc.) have gained significant acceptance. Several methods are employed to overcome the permeability barrier of the skin, improving drug penetration into/through the skin. Among chemical penetration enhancement methods, nanocarriers have been extensively studied. When applied alone, nanocarriers mostly deliver drugs to the skin and can be used to treat skin diseases. To achieve effective transdermal drug delivery, nanocarriers should be applied together with physical methods, as the two act synergistically in enhancing drug penetration. This review describes the combined use of frequently used nanocarriers (liposomes, novel elastic vesicles, lipid-based and polymer-based nanoparticles and dendrimers) with the most efficient physical methods (microneedles, iontophoresis, ultrasound and electroporation) and demonstrates the superiority of combining nanocarriers with physical methods for drug penetration enhancement, compared to their use alone. Copyright © 2018. Published by Elsevier B.V.

  4. Computer-Aided Design Method of Warp-Knitted Jacquard Spacer Fabrics

    Directory of Open Access Journals (Sweden)

    Li Xinxin

    2016-06-01

    Full Text Available Based on a further study of knitting and jacquard principles, this paper presents a mathematical design model to make the computer-aided design of warp-knitted jacquard spacer fabrics more efficient. The mathematical model, using a matrix method, employs three essential elements: chain notation, threading and jacquard design. With this model, the process of designing warp-knitted jacquard spacer fabrics with CAD software is also introduced. In this study, sports shoes with separate functional areas, defined according to foot structure and movement characteristics, are analysed. The results show the different patterns on jacquard spacer fabrics that are seamlessly stitched with jacquard techniques. The computer-aided design method of warp-knitted jacquard spacer fabrics is efficient and simple.

  5. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    Science.gov (United States)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to perform the global computations efficiently. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
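
    A centralized sketch of the power iteration underlying such a scheme is shown below; the two reductions marked as global tasks are exactly the quantities that a decentralized variant would obtain via average consensus or CoMAC. The covariance matrix is an arbitrary illustrative example, not data from the paper.

```python
# Power-method sketch for the dominant eigenvalue of a covariance matrix.
# The reductions marked "global task" are the steps a decentralized
# implementation would realize with average consensus / CoMAC; here they
# are computed directly.  The matrix C is illustrative.

import math

C = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_method(A, iters=200):
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        w = matvec(A, v)                            # global task: product entries
        norm = math.sqrt(sum(x * x for x in w))     # global task: norm
        v = [x / norm for x in w]
        # Rayleigh quotient of the normalized iterate as the eigenvalue estimate.
        lam = sum(x * y for x, y in zip(v, matvec(A, v)))
    return lam, v

lam, v = power_method(C)
print(round(lam, 4))   # prints 4.7321, i.e. the exact value 3 + sqrt(3)
```

    Convergence requires a spectral gap; when the dominant eigenvalue is not well separated, more iterations (and thus more consensus rounds) are needed.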

  6. Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter.

    Science.gov (United States)

    Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J

    2014-05-01

    In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist-sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model-based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and a novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to treat local motion more robustly. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left after the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
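A per-pixel temporal Kalman filter of the kind used as the first stage can be sketched as follows. This is a deliberately simplified scalar version with a static background model; the paper's filter additionally warps the state with the estimated affine background motion and uses its own process-noise variance estimate.

```python
import numpy as np

def temporal_kalman(frames, q=1e-3, r=1e-2):
    """Per-pixel scalar Kalman filtering of a video sequence.

    q: assumed process-noise variance (how much the scene may change per frame)
    r: assumed measurement-noise variance of the sensor
    """
    est = frames[0].astype(float)   # initial state = first frame
    p = np.full_like(est, r)        # initial error covariance
    out = [est.copy()]
    for z in frames[1:]:
        p = p + q                   # predict (static background model)
        k = p / (p + r)             # Kalman gain
        est = est + k * (z - est)   # update with the new frame z
        p = (1.0 - k) * p
        out.append(est.copy())
    return out
```

In the actual algorithm, the prediction step would first warp `est` with the estimated affine motion field, so that temporal averaging follows the moving background.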

  7. Computing UV/vis spectra using a combined molecular dynamics and quantum chemistry approach: bis-triazin-pyridine (BTP) ligands studied in solution.

    Science.gov (United States)

    Höfener, Sebastian; Trumm, Michael; Koke, Carsten; Heuser, Johannes; Ekström, Ulf; Skerencak-Frech, Andrej; Schimmelpfennig, Bernd; Panak, Petra J

    2016-03-21

    We report a combined computational and experimental study to investigate the UV/vis spectra of 2,6-bis(5,6-dialkyl-1,2,4-triazin-3-yl)pyridine (BTP) ligands in solution. In order to study molecules in solution using theoretical methods, force-field parameters for the ligand-water interaction are adjusted to ab initio quantum chemical calculations. Based on these parameters, molecular dynamics (MD) simulations are carried out, from which snapshots are extracted as input to quantum chemical excitation-energy calculations to obtain UV/vis spectra of BTP ligands in solution using time-dependent density functional theory (TDDFT) employing the Tamm-Dancoff approximation (TDA). The range-separated CAM-B3LYP functional is used to avoid large errors for charge-transfer states occurring in the electronic spectra. In order to study environment effects with theoretical methods, the frozen-density embedding scheme is applied. This computational procedure makes it possible to obtain electronic spectra calculated at the (range-separated) DFT level of theory in solution, revealing solvatochromic shifts upon solvation of up to about 0.6 eV. Comparison to experimental data shows significantly improved agreement compared to vacuum calculations and enables the analysis of the excitations relevant to the line shape in solution.

  8. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref

  9. A new fault detection method for computer networks

    International Nuclear Information System (INIS)

    Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian

    2013-01-01

    Over the past few years, fault detection for computer networks has attracted extensive attention for its importance in network management. Most existing fault detection methods are based on active probing techniques, which can detect the occurrence of faults quickly and precisely. But these methods suffer from the limitation of traffic overhead, especially in large-scale networks. To relieve the traffic overhead induced by active-probing-based methods, a new fault detection method, whose key idea is to divide the detection process into multiple stages, is proposed in this paper. During each stage, only a small region of the network is probed using a small set of probes. Meanwhile, it also ensures that the entire network is covered after multiple detection stages. This method guarantees that the traffic used by probes during each detection stage is sufficiently small that the network can operate without severe disturbance from the probes. Several simulation results verify the effectiveness of the proposed method
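The core idea — partitioning the probe set across stages so that each stage touches only a small region while the union of stages covers the whole network — can be illustrated with a toy scheduler. The round-robin partition below is a hypothetical stand-in, not the paper's actual region-selection algorithm.

```python
def staged_probe_plan(nodes, n_stages):
    """Split the node set into n_stages small probe regions.

    Each stage probes roughly len(nodes)/n_stages nodes, keeping per-stage
    probe traffic low, while all stages together cover every node.
    """
    return [nodes[i::n_stages] for i in range(n_stages)]

# 20 nodes probed over 4 stages: 5 probes per stage, full coverage overall
plan = staged_probe_plan(list(range(20)), n_stages=4)
```

A real scheduler would pick regions from the network topology (e.g., subnets reachable from a probing station) rather than by index, but the coverage-versus-per-stage-traffic trade-off is the same.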

  10. Optimal Combinations of Diagnostic Tests Based on AUC.

    Science.gov (United States)

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
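For a fixed coefficient vector, the empirical AUC of a linear combination of test scores can be computed nonparametrically as the proportion of correctly ordered case-control pairs. This is a minimal sketch of that building block; the paper's procedure then searches for the coefficients maximizing this quantity and corrects the optimistic re-substitution estimate by cross-validation.

```python
import numpy as np

def empirical_auc(pos, neg):
    """Mann-Whitney estimate of AUC: P(pos > neg) + 0.5 * P(tie)."""
    diff = pos[:, None] - neg[None, :]          # all case-control pairs
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def combination_auc(alpha, X_pos, X_neg):
    """AUC of the linear combination alpha'X, given score matrices for the
    diseased (X_pos) and healthy (X_neg) groups, one test per column."""
    return empirical_auc(X_pos @ alpha, X_neg @ alpha)

# toy data: two tests whose sum separates the groups better than either alone
X_pos = np.array([[1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
X_neg = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 0.0]])
auc_combined = combination_auc(np.array([1.0, 1.0]), X_pos, X_neg)
```

Evaluating `combination_auc` on coefficients estimated from held-out folds gives the cross-validated AUC the article advocates.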

  11. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations at high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress at each material point using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation.

  12. Original method to compute epipoles using variable homography: application to measure emergent fibers on textile fabrics

    Science.gov (United States)

    Xu, Jun; Cudel, Christophe; Kohler, Sophie; Fontaine, Stéphane; Haeberlé, Olivier; Klotz, Marie-Louise

    2012-04-01

    Fabric smoothness is a key factor in determining the quality of finished textile products and has great influence on the functionality of industrial textiles and high-end textile products. With the popularization of the zero-defect industrial concept, identifying and measuring defective material in the early stage of production is of great interest to the industry. In the current market, many systems are able to achieve automatic monitoring and control of fabric, paper, and nonwoven material during the entire production process; however, online measurement of hairiness is still an open topic and highly desirable for industrial applications. We propose a computer vision approach to compute the epipole by using variable homography, which can be used to measure emergent fiber length on textile fabrics. The main challenges addressed in this paper are the application of variable homography to textile monitoring and measurement, as well as the accuracy of the estimated calculation. We propose that a fibrous structure can be considered as a two-layer structure, and we then show how variable homography combined with epipolar geometry can estimate the length of the fiber defects. Simulations are carried out to show the effectiveness of this method. The true length of selected fibers is measured precisely using a digital optical microscope, and then the same fibers are tested by our method. Our experimental results suggest that smoothness monitoring by variable homography is an accurate and robust method of quality control for important industrial fabrics.

  13. A comparison of methods for the assessment of postural load and duration of computer use

    NARCIS (Netherlands)

    Heinrich, J.; Blatter, B.M.; Bongers, P.M.

    2004-01-01

    Aim: To compare two different methods for assessment of postural load and duration of computer use in office workers. Methods: The study population consisted of 87 computer workers. Questionnaire data about exposure were compared with exposures measured by a standardised or objective method. Measuring

  14. Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

    Science.gov (United States)

    Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika

    2017-01-01

    Computer work is associated with musculoskeletal disorders (MSDs). Several methods have been developed to assess computer-work risk factors related to MSDs. This review aims to give an overview of the techniques currently available for pen-and-paper-based observational assessment of ergonomic risk factors in computer work. We searched an electronic database for materials from 1992 until 2015. The selected methods focused on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders. This review assessed the risk factors covered, and the reliability and validity, of pen-and-paper-based observational methods associated with computer work. Two evaluators independently carried out this review. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors covered by current pen-and-paper-based observational tools were posture, office components, force and repetition. Of the seven methods, only five had been tested for reliability; they were proven reliable and were rated as moderate to good. For validity, only four of the seven methods had been tested, and the results were moderate. Many observational tools already exist, but no single tool appears to cover all of the risk factors, including working posture, office components, force, repetition and the office environment, at office workstations and in computer work. Although the most important step in developing a tool is proper validation of the exposure assessment technique, some existing observational methods have not been tested for reliability and validity. Furthermore, this review could help researchers improve pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

  15. Comparison of {sup 18}F-fluorodeoxyglucose positron emission tomography/computed tomography, hydro-stomach computed tomography, and their combination for detecting primary gastric cancer

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Hye Young; Chung, Woo Suk; Song, E Rang; Kim, Jin Suk [Konyang University Myunggok Medical Research Institute, Konyang University Hospital, Konyang University College of Medicine, Daejeon (Korea, Republic of)

    2015-01-15

    To retrospectively compare the diagnostic accuracy for detecting primary gastric cancer on positron emission tomography/computed tomography (PET/CT) and hydro-stomach CT (S-CT), and to determine whether the combination of the two techniques improves diagnostic performance. A total of 253 patients with pathologically proven primary gastric cancer underwent PET/CT and S-CT for preoperative evaluation. Two radiologists independently reviewed the three sets (PET/CT set, S-CT set, and combined set) of PET/CT and S-CT images in random order. They graded the likelihood of the presence of primary gastric cancer on a 4-point scale. The diagnostic accuracy of the PET/CT set, the S-CT set, and the combined set was determined by the area under the alternative free-response receiver operating characteristic curve, and sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Diagnostic accuracy, sensitivity, and NPV for detecting all gastric cancers and early gastric cancers (EGCs) were significantly higher with the combined set than with the PET/CT and S-CT sets. Specificity and PPV were significantly higher with the PET/CT set than with the combined and S-CT sets for detecting all gastric cancers and EGCs. The combination of PET/CT and S-CT is more accurate than S-CT alone, particularly for detecting EGCs.

  16. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    Science.gov (United States)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other possible re-slicing software solutions through complete automation and advanced processing and analysis capabilities.

  17. A combination of the acoustic radiosity and the image source method

    DEFF Research Database (Denmark)

    Koutsouris, Georgios I.; Brunskog, Jonas; Jeong, Cheol-Ho

    2012-01-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part...

  18. A multiple-scaling method of the computation of threaded structures

    International Nuclear Information System (INIS)

    Andrieux, S.; Leger, A.

    1989-01-01

    The numerical computation of threaded structures usually leads to very large finite element problems, making parametric studies very difficult to carry out, especially in non-linear cases involving plasticity or unilateral contact conditions. Nevertheless, these parametric studies are essential in many industrial problems, for instance for the evaluation of various repair processes for the closure studs of PWRs. It is well known that such repairs generally involve several modifications of the thread geometry, the number of active threads, the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part and a local part. The local problem is solved by FEM on the precise geometry of the thread for some elementary loadings. The global one is formulated at the stud scale and is reduced to a one-dimensional problem, whose resolution entails an insignificant computational cost. A post-processing step then gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scale approach, the method is described. The validation by comparison with a direct FE computation and some further applications are presented

  19. Computer-Based Methods for Collecting Peer Nomination Data: Utility, Practice, and Empirical Support.

    Science.gov (United States)

    van den Berg, Yvonne H M; Gommans, Rob

    2017-09-01

    New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the questionnaire and asking respondents to fill it in on computers. In this chapter the advantages and challenges of computer-based assessments are discussed. In addition, a list of practical recommendations and considerations is provided to inform researchers on how computer-based methods can be applied to their own research. Although the focus is on the collection of peer nomination data in particular, many of the requirements, considerations, and implications are also relevant for those who consider the use of other sociometric assessment methods (e.g., paired comparisons, peer ratings, peer rankings) or computer-based assessments in general. © 2017 Wiley Periodicals, Inc.

  20. A fast inverse consistent deformable image registration method based on symmetric optical flow computation

    International Nuclear Information System (INIS)

    Yang Deshan; Li Hua; Low, Daniel A; Deasy, Joseph O; Naqa, Issam El

    2008-01-01

    Deformable image registration is widely used in various radiation therapy applications, including daily treatment planning adaptation to map planned tissue or dose to changing anatomy. In this work, a simple and efficient inverse-consistent deformable registration method is proposed with the aims of higher registration accuracy and faster convergence. Instead of registering image I to a second image J, the two images are symmetrically deformed toward one another in multiple passes, until both deformed images are matched and correct registration is therefore achieved. In each pass, a delta motion field is computed by minimizing a symmetric optical flow system cost function using modified optical flow algorithms. The images are then further deformed with the delta motion field in the positive and negative directions respectively, and used for the next pass. The magnitude of the delta motion field is forced to be less than 0.4 voxel in every pass in order to guarantee smoothness and invertibility of the two overall motion fields that accumulate the delta motion fields in the positive and negative directions, respectively. The final motion fields to register the original images I and J, in either direction, are calculated by inverting one overall motion field and combining the inversion result with the other overall motion field. The final motion fields are inversely consistent, which is ensured by the symmetric way in which registration is carried out. The proposed method is demonstrated with phantom images, artificially deformed patient images and 4D-CT images. Our results suggest that the proposed method is able to improve the overall accuracy (reducing registration error by 30% or more, compared to the original and inversely inconsistent optical flow algorithms), reduce the inverse consistency error (by 95% or more) and increase the convergence rate (by 100% or more). The overall computation speed may slightly decrease, or increase in most cases
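The final inversion step — computing the inverse of an accumulated motion field — can be sketched in 1-D with the standard fixed-point iteration v(x) = -u(x + v(x)), which converges quickly when the per-pass increments are kept small, as the 0.4-voxel cap enforces. This is a hypothetical illustration, not the authors' exact implementation.

```python
import numpy as np

def invert_motion_field_1d(u, x, iters=30):
    """Invert a 1-D motion field u sampled at increasing grid positions x.

    Fixed-point iteration for the field v such that following x + u(x) and
    then adding v at the displaced position returns to the start:
        v(x) = -u(x + v(x)).
    Converges when |du/dx| < 1, i.e., for smooth, invertible fields.
    """
    v = np.zeros_like(u)
    for _ in range(iters):
        v = -np.interp(x + v, x, u)  # resample u at the displaced positions
    return v

x = np.linspace(0.0, 10.0, 101)
u = 0.1 * x                          # a smooth linear motion field
v = invert_motion_field_1d(u, x)
# for u(x) = 0.1 x the analytic inverse field is v(x) = -x/11
```

In 3-D the same iteration runs per component with trilinear interpolation in place of `np.interp`.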

  1. Combining Brain–Computer Interfaces and Assistive Technologies: State-of-the-Art and Challenges

    Science.gov (United States)

    Millán, J. d. R.; Rupp, R.; Müller-Putz, G. R.; Murray-Smith, R.; Giugliemma, C.; Tangermann, M.; Vidaurre, C.; Cincotti, F.; Kübler, A.; Leeb, R.; Neuper, C.; Müller, K.-R.; Mattia, D.

    2010-01-01

    In recent years, new research has brought the field of electroencephalogram (EEG)-based brain–computer interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely, “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user–machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles in human–computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices. PMID:20877434

  2. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
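The decision logic described above can be illustrated with a toy policy. The thresholds, units, and redundancy modes below are all hypothetical; the patent abstract does not specify them.

```python
def select_fault_tolerance(dose_rate_rad_s: float, sensitivity: float) -> str:
    """Pick a redundancy configuration from a measured environmental condition
    (here, a radiation dose rate) and the on-board processing system's measured
    sensitivity to that condition. Scale and cutoffs are made up for illustration."""
    risk = dose_rate_rad_s * sensitivity
    if risk >= 10.0:
        return "triple-modular-redundancy"   # harsh environment: full TMR
    if risk >= 1.0:
        return "duplex-with-checkpointing"   # moderate: pairwise comparison
    return "simplex"                         # benign: no redundancy overhead
```

The point of such adaptation is to pay the performance and power cost of heavy redundancy only while the environment actually demands it.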

  3. An analytical method for computing atomic contact areas in biomolecules.

    Science.gov (United States)

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
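For a single neighbor, the cap construction underlying the contact-area computation reduces to elementary sphere-sphere intersection geometry. The sketch below covers only that one step; the full method then partitions overlapping caps with a spherical Laguerre Voronoi diagram.

```python
import math

def cap_area(r_i, r_j, d):
    """Area of the spherical cap that neighbor sphere j (radius r_j, center
    distance d > 0) cuts out on the surface of sphere i (radius r_i),
    using A = 2*pi*r_i*h with cap height h measured from the plane of the
    intersection circle."""
    if d >= r_i + r_j:
        return 0.0                               # disjoint: no contact
    if d + r_i <= r_j:
        return 4.0 * math.pi * r_i * r_i         # i engulfed by j: whole surface
    if d + r_j <= r_i:
        return 0.0                               # j inside i: no cap on i's surface
    # signed distance from center of i to the intersection plane
    d_plane = (d * d + r_i * r_i - r_j * r_j) / (2.0 * d)
    h = r_i - d_plane                            # cap height on sphere i
    return 2.0 * math.pi * r_i * h

# two unit spheres with centers 1.0 apart: d_plane = 0.5, h = 0.5, area = pi
a = cap_area(1.0, 1.0, 1.0)
```

Summing such caps over the neighbors selected by the dual complex, with overlaps resolved by the Laguerre Voronoi partition, yields the per-atom contact areas.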

  4. Laboratory Sequence in Computational Methods for Introductory Chemistry

    Science.gov (United States)

    Cody, Jason A.; Wiser, Dawn C.

    2003-07-01

    A four-exercise laboratory sequence for introductory chemistry integrating hands-on, student-centered experience with computer modeling has been designed and implemented. The progression builds from exploration of molecular shapes to intermolecular forces and the impact of those forces on chemical separations made with gas chromatography and distillation. The sequence ends with an exploration of molecular orbitals. The students use the computers as a tool; they build the molecules, submit the calculations, and interpret the results. Because of the construction of the sequence and its placement spanning the semester break, good laboratory notebook practices are reinforced and the continuity of course content and methods between semesters is emphasized. The inclusion of these techniques in the first year of chemistry has had a positive impact on student perceptions and student learning.

  5. Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

    KAUST Repository

    Huang, Huang

    2017-07-16

    This thesis focuses on two topics, computational methods for large spatial datasets and functional data ranking. Both are tackling the challenges of big and high-dimensional data. The first topic is motivated by the prohibitive computational burden in fitting Gaussian process models to large and irregularly spaced spatial datasets. Various approximation methods have been introduced to reduce the computational cost, but many rely on unrealistic assumptions about the process, and retaining statistical efficiency remains an issue. We propose a new scheme to approximate the maximum likelihood estimator and the kriging predictor when the exact computation is infeasible. The proposed method provides different types of hierarchical low-rank approximations that are both computationally and statistically efficient. We explore the improvement of the approximation theoretically and investigate the performance by simulations. For real applications, we analyze a soil moisture dataset with 2 million measurements with the hierarchical low-rank approximation and apply the proposed fast kriging to fill gaps for satellite images. The second topic is motivated by rank-based outlier detection methods for functional data. Compared to magnitude outliers, it is more challenging to detect shape outliers as they are often masked among samples. We develop a new notion of functional data depth by taking the integration of a univariate depth function. Taking the form of an integrated depth, it shares many desirable features. Furthermore, the novel formation leads to a useful decomposition for detecting both shape and magnitude outliers. Our simulation studies show the proposed outlier detection procedure outperforms competitors in various outlier models. We also illustrate our methodology using real datasets of curves, images, and video frames. 
Finally, we introduce the functional data ranking technique to spatio-temporal statistics for visualizing and assessing covariance properties, such as
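The integrated-depth construction can be sketched as follows: a univariate depth is computed at each time point, then averaged (integrated) over the curve's domain. The toy rank-based depth below stands in for whatever univariate depth the thesis actually integrates.

```python
import numpy as np

def integrated_depth(curves):
    """Integrated functional depth for curves sampled on a common grid.

    curves: array of shape (n_curves, n_points). At each time point a
    rank-based univariate depth is computed (deepest = most central); the
    average over time points gives one depth value per curve.
    """
    n, _ = curves.shape
    ranks = curves.argsort(axis=0).argsort(axis=0)          # 0..n-1 at each t
    pointwise = (np.minimum(ranks, n - 1 - ranks) + 1) / n  # centrality at t
    return pointwise.mean(axis=1)                           # integrate over t

# five flat curves at heights 0..4: the middle curve is deepest,
# the extreme curves shallowest
curves = np.tile(np.arange(5.0)[:, None], (1, 50))
depth = integrated_depth(curves)
```

Ranking curves by such a depth makes magnitude outliers land in the tail of the ranking; the decomposition mentioned above then separates out the shape outliers that pointwise ranks alone would mask.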

  6. [Efficiency of combined methods of hemorrhoid treatment using HAL-RAR and laser destruction].

    Science.gov (United States)

    Rodoman, G V; Kornev, L V; Shalaeva, T I; Malushenko, R N

    2017-01-01

    To develop a combined method for the treatment of hemorrhoids using arterial ligation under Doppler control and laser destruction of internal and external hemorrhoids. The study included 100 patients with chronic hemorrhoids of stages II and III. The combined HAL-laser method was used in the study group, the HAL-RAR technique in control group 1, and closed hemorrhoidectomy with a linear stapler in control group 2. A comparative evaluation of the results in the groups was performed. The combined method overcomes the drawbacks of traditional surgical treatment and the limitations of HAL-RAR in eliminating the external component. Moreover, it has higher efficiency in treating hemorrhoids of stages II-III compared with HAL-RAR and is equally safe and well tolerated by patients. This method does not increase the risk of recurrence, and reduces the incidence of complications and the time of disability.

  7. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG algorithm. The new method replaces the arctangent with the slope computation and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs, by considerably reducing the area (thus increasing the level of parallelism, while maintaining very close classification accuracy compared to the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
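The replacement of the arctangent can be illustrated with a division-free binning scheme that tests the gradient against precomputed boundary directions using only multiplies and comparisons. This is one common formulation of slope-based binning; the paper's exact FPGA datapath may differ.

```python
import math

N_BINS = 9  # unsigned orientations in [0, 180), 20 degrees per bin

def bin_by_arctan(gx, gy):
    """Reference binning: explicit angle computation, as in the original HOG."""
    ang = math.atan2(gy, gx) % math.pi                 # unsigned orientation
    return min(int(ang / (math.pi / N_BINS)), N_BINS - 1)

# precomputed once: direction vectors of the bin boundaries
_BOUNDS = [(math.cos(k * math.pi / N_BINS), math.sin(k * math.pi / N_BINS))
           for k in range(1, N_BINS)]

def bin_by_slope(gx, gy):
    """Arctangent-free binning: the sign of the cross product with each
    boundary direction tells on which side of the boundary the gradient lies."""
    if gy < 0 or (gy == 0 and gx < 0):
        gx, gy = -gx, -gy                              # fold angle into [0, pi)
    for k, (cx, cy) in enumerate(_BOUNDS, start=1):
        if cx * gy - cy * gx < 0:                      # gradient below boundary k
            return k - 1
    return N_BINS - 1
```

In hardware, the boundary constants become fixed-point coefficients, so each bin test is two multiplies and a compare, with no divider or arctangent lookup.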

  8. Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure

    International Nuclear Information System (INIS)

    Yokohama, Noriya

    2013-01-01

    This report describes the design of the architecture and performance measurements of a parallel computing environment that uses Monte Carlo simulation for particle therapy planning, running on a high-performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speed approximately 28 times faster than a single-thread architecture, combined with improved stability. A study of methods for optimizing system operation also indicated lower cost. (author)

  9. Computations of finite temperature QCD with the pseudofermion method

    International Nuclear Information System (INIS)

    Fucito, F.; Solomon, S.

    1985-01-01

    The authors discuss the phase diagram of finite temperature QCD as obtained when the effects of dynamical quarks are included by the pseudofermion method. They compare their results with those obtained by other groups and comment on the current state of the art for this kind of computation

  10. Analysis Method of Combine Harvesters Technical Level by Functional and Structural Parameters

    Directory of Open Access Journals (Sweden)

    E. V. Zhalnin

    2018-01-01

    Full Text Available The analysis of modern methods for evaluating the technical level of grain harvesters revealed a discrepancy among the various criteria in use: comparative parameters, dimensionless series, company names, motor power, header width, capacity at the manufacturer's location, and advertising brands. (Purpose of research) This has led to a variety of harvester model names, which significantly complicates the assessment of their technical level, complicates a farmer's choice of the model he needs, obscures the continuity between generations of combines, makes it impossible to analyze trends in their development and does not disclose the technological essence of a model; most importantly, combines cannot be compared with one another. The figures in the name of a harvester model are not functionally related to its main parameters and performance capabilities. (Materials and methods) A close correlation, in the form of a linear equation, between the design parameters of combines and their capacity was revealed. Verification of this equation during combine operation showed that it is statistically stable and that its estimates always fall within the confidence interval, with an error of 5-8 percent. Of the many factors that affect harvester performance per hour of net time, the four parameters correlating most closely with it are the motor power and the areas of the separation concave, the straw walkers and the cleaning sieves. (Results and discussion) On the basis of the revealed correlation we propose a new method for assessing the technical level of combines, based on the throughput (kg/s) of the wetted material and a size series indicating the nominal productivity of the combine in centners of grain harvested in 1 hour of basic time. The methodological background and mathematical apparatus

  11. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently when combined with the steepest descent method, and is thus a powerful tool for searching for a better maximizer of computationally extensive probability distributions.
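
The acquisition-driven search described here can be sketched in a few lines (a toy numpy illustration under simplifying assumptions: a fixed RBF kernel, a grid-restricted upper-confidence-bound acquisition, and a synthetic log-density standing in for the expensive posterior; none of these choices are taken from the paper):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(xs, ys, grid, noise=1e-6):
    """GP posterior mean and variance on a grid, given samples (xs, ys)."""
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Ks = rbf(grid, xs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ ys
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)
    return mu, np.maximum(var, 1e-12)

def bayes_opt(f, n_iter=15, seed=0):
    """Maximize f by repeatedly sampling where the acquisition peaks."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 401)
    xs = rng.uniform(0, 1, 3)              # small initial design
    ys = np.array([f(x) for x in xs])
    for _ in range(n_iter):
        mu, var = gp_posterior(xs, ys, grid)
        ucb = mu + 2.0 * np.sqrt(var)      # upper-confidence-bound acquisition
        x_next = grid[np.argmax(ucb)]
        xs = np.append(xs, x_next)
        ys = np.append(ys, f(x_next))
    return xs[np.argmax(ys)], ys.max()

# Synthetic "expensive" log-density with a sharp peak near x = 0.6
f = lambda x: -80 * (x - 0.7) ** 2 + np.sin(15 * x)
x_best, y_best = bayes_opt(f)
```

Each iteration spends one evaluation of the expensive function, which is the point of the approach when sampling the posterior is costly.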

  12. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-09-19

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
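
A minimal example of the kind of Quantity of Interest such methods target (a hedged sketch, not taken from the dissertation): an Euler-Maruyama discretization of geometric Brownian motion, used to estimate a European call price by plain Monte Carlo. The parameter values are arbitrary illustrations.

```python
import numpy as np

def euler_maruyama_gbm(s0, mu, sigma, T, n_steps, n_paths, rng):
    """Simulate dS = mu*S dt + sigma*S dW with the Euler-Maruyama scheme."""
    dt = T / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        s += mu * s * dt + sigma * s * dw
    return s

rng = np.random.default_rng(42)
# Risk-neutral drift mu = r = 0.05, so the payoff is discounted at the same rate.
s_T = euler_maruyama_gbm(100.0, 0.05, 0.2, 1.0, 200, 100_000, rng)
# Quantity of Interest: discounted expected payoff of a European call, strike 100.
price = np.exp(-0.05) * np.mean(np.maximum(s_T - 100.0, 0.0))
```

For these parameters the Black-Scholes reference value is about 10.45, so the estimate can be checked against a closed form.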

  14. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report presents results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods when estimating responses at varying probability levels, and the expansion levels that may be needed to achieve convergence.

  15. Effects Of Combinations Of Patternmaking Methods And Dress Forms On Garment Appearance

    Directory of Open Access Journals (Sweden)

    Fujii Chinami

    2017-09-01

    Full Text Available We investigated the effects of combinations of patternmaking methods and dress forms on the appearance of a garment. Six upper garments were made using three patternmaking methods used in France, Italy, and Japan, and two dress forms made in Japan and France. The patterns and the appearances of the garments were compared using geometrical measurements. Sensory evaluations of the differences in garment appearance and fit on each dress form were also carried out. In the patterns, the positions of the bust and waist darts differed. The waist dart length, bust dart length, and bust point positions differed with the patternmaking method even when the same dress form was used, owing to differences in the body measurements used and in the methods for calculating other dimensions: each patternmaking method assumes a different ideal body shape. Even for garments produced for the same dress form, the appearance of the shoulder, bust, and waist from the front, side, and back views differed with the patternmaking method. The sensory evaluation also showed that the bust and waist shapes of the garments differed depending on the combination of patternmaking method and dress form. Therefore, to obtain a garment with a better appearance, it is necessary to understand the effects of combinations of patternmaking methods and body shapes.

  16. Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods

    OpenAIRE

    Laadhari , Aymen; Saramito , Pierre; Misbah , Chaouqi

    2014-01-01

    The numerical simulation of the deformation of vesicle membranes under simple shear external fluid flow is considered in this paper. A new saddle-point approach is proposed for the imposition of the fluid incompressibility and the membrane inextensibility constraints, through Lagrange multipliers defined in the fluid and on the membrane, respectively. Using a level set formulation, the problem is approximated by mixed finite elements combined with an automatic adaptive ...

  17. Structural dynamics in LMFBR containment analysis: a brief survey of computational methods and codes

    International Nuclear Information System (INIS)

    Chang, Y.W.; Gvildys, J.

    1977-01-01

    In recent years, the use of computer codes to study the response of the primary containment of large liquid-metal fast breeder reactors (LMFBR) under postulated accident conditions has been adopted by most fast reactor projects. Since the first introduction of the REXCO-H containment code in 1969, a number of containment codes have evolved and been reported in the literature. This paper briefly summarizes the numerical methods commonly used in containment analysis computer programs. They are compared on the basis of the truncation errors arising from the numerical approximation, the method of integration, the resolution of the computed results, and the ease of programming in computer codes. The aim of the paper is to provide enough information for an analyst to justify his choice of method, and hence his choice of program.

  18. An efficient and general numerical method to compute steady uniform vortices

    Science.gov (United States)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  19. Alternative majority-voting methods for real-time computing systems

    Science.gov (United States)

    Shin, Kang G.; Dolter, James W.

    1989-01-01

    Two techniques that provide a compromise between the high time overhead in maintaining synchronous voting and the difficulty of combining results in asynchronous voting are proposed. These techniques are specifically suited for real-time applications with a single-source/single-sink structure that need instantaneous error masking. They provide a compromise between a tightly synchronized system in which the synchronization overhead can be quite high, and an asynchronous system which lacks suitable algorithms for combining the output data. Both quorum-majority voting (QMV) and compare-majority voting (CMV) are most applicable to distributed real-time systems with single-source/single-sink tasks. All real-time systems eventually have to resolve their outputs into a single action at some stage. The development of the advanced information processing system (AIPS) and other similar systems serve to emphasize the importance of these techniques. Time bounds suggest that it is possible to reduce the overhead for quorum-majority voting to below that for synchronous voting. All the bounds assume that the computation phase is nonpreemptive and that there is no multitasking.
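
The quorum idea can be sketched as follows (a simplified Python illustration; the QMV/CMV protocols in the paper involve timing bounds that this sketch omits): a voter releases a result as soon as floor(n/2) + 1 replicas agree on a value, rather than waiting for all n replicas to report.

```python
from collections import Counter

def quorum_majority_vote(replica_outputs, n_replicas):
    """Return a result as soon as some value reaches the quorum
    floor(n/2) + 1, without waiting for the remaining replicas."""
    quorum = n_replicas // 2 + 1
    counts = Counter()
    for value in replica_outputs:   # outputs arrive one at a time
        counts[value] += 1
        if counts[value] >= quorum:
            return value            # masked result available early
    return None                     # no majority: error cannot be masked
```

With five replicas the vote can resolve after the third arrival in the best case, which is where the reduced overhead relative to fully synchronous voting comes from.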

  20. How to compute isomerization energies of organic molecules with quantum chemical methods.

    Science.gov (United States)

    Grimme, Stefan; Steinmetz, Marc; Korth, Martin

    2007-03-16

    The reaction energies for 34 typical organic isomerizations including oxygen and nitrogen heteroatoms are investigated with modern quantum chemical methods that have the prospect of also being applicable to large systems. The experimental reaction enthalpies are corrected for vibrational and thermal effects, and the thus derived "experimental" reaction energies are compared to corresponding theoretical data. A series of standard AO basis sets in combination with second-order perturbation theory (MP2, SCS-MP2), conventional density functionals (e.g., PBE, TPSS, B3-LYP, MPW1K, BMK), and new perturbative functionals (B2-PLYP, mPW2-PLYP) are tested. In three cases, obvious errors in the experimental values could be detected, and accurate coupled-cluster [CCSD(T)] reference values have been used instead. It is found that only triple-zeta quality AO basis sets provide results close enough to the basis set limit and that sets like the popular 6-31G(d) should be avoided in accurate work. Augmentation of small basis sets with diffuse functions has a notable effect in B3-LYP calculations that is attributed to intramolecular basis set superposition error and covers basic deficiencies of the functional. The new methods based on perturbation theory (SCS-MP2, X2-PLYP) are found to be clearly superior to many other approaches; that is, they provide mean absolute deviations of less than 1.2 kcal mol-1 with only a few outliers, and thus appear promising as computational thermochemistry methods.

  1. Two-phase flow steam generator simulations on parallel computers using domain decomposition method

    International Nuclear Information System (INIS)

    Belliard, M.

    2003-01-01

    Within the framework of the Domain Decomposition Method (DDM), we present industrial steady state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank-Nicholson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshing. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)
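
The iteration-by-sub-domain idea can be illustrated on a toy problem (a minimal sketch using Dirichlet transmission conditions on an overlapping 1D Poisson problem; the two-phase steam generator computations in the record are vastly more involved): each sweep solves one subdomain using the current values of the other subdomain as boundary data.

```python
import numpy as np

def solve_poisson_dirichlet(f, h, left, right):
    """Solve -u'' = f on an interior grid with Dirichlet end values."""
    n = len(f)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = f.copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

n = 101                      # global grid on [0, 1], u(0) = u(1) = 0
x = np.linspace(0, 1, n)
h = x[1] - x[0]
u = np.zeros(n)
f = np.ones(n)               # -u'' = 1, exact solution u = x(1 - x)/2
i1, i2 = 60, 40              # overlapping subdomains [0, x[60]] and [x[40], 1]
for _ in range(30):          # alternating (multiplicative) Schwarz sweeps
    u[1:i1] = solve_poisson_dirichlet(f[1:i1], h, 0.0, u[i1])
    u[i2 + 1:n - 1] = solve_poisson_dirichlet(f[i2 + 1:n - 1], h, u[i2], 0.0)
u_exact = x * (1 - x) / 2
```

The generous overlap makes the interface error contract by a constant factor per sweep, which is the behavior the speed-up and scalability measurements in the record exploit at industrial scale.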

  2. A Method for Identifying Contours in Processing Digital Images from Computer Tomograph

    Science.gov (United States)

    Roşu, Şerban; Pater, Flavius; Costea, Dan; Munteanu, Mihnea; Roşu, Doina; Fratila, Mihaela

    2011-09-01

    The first step in the digital processing of two-dimensional computed tomography images is to identify the contour of the component elements. This paper presents the joint work of specialists in medicine and in applied mathematics and computer science on developing new algorithms and methods for medical 2D and 3D imaging.

  3. Magnetic field computations of the magnetic circuits with permanent magnets by infinite element method

    International Nuclear Information System (INIS)

    Hahn, Song Yop

    1985-01-01

    A method employing infinite elements is described for magnetic field computations of magnetic circuits with permanent magnets. The system stiffness matrix is derived by a variational approach, while the interfacial boundary conditions between the finite element and infinite element regions are handled using a collocation method. The proposed method is applied to simple linear problems, and the numerical results are compared with those of the standard finite element method and with analytic solutions. It is observed that the proposed method gives more accurate results than the standard finite element method for the same computing effort. (Author)

  4. Advanced display object selection methods for enhancing user-computer productivity

    Science.gov (United States)

    Osga, Glenn A.

    1993-01-01

    The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUI's allow user selection by: (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, touchscreen; and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit in applying this method to graphic user-interfaces is substantial, with the potential for increasing productivity across thousands of users and applications.

  5. Application of a hybrid method based on the combination of genetic algorithm and Hopfield neural network for burnable poison placement

    International Nuclear Information System (INIS)

    Khoshahval, F.; Fadaei, A.

    2012-01-01

    Highlights: ► The performance of GA, HNN and their combination for BPP optimization in a PWR core is adequate. ► HNN + GA appears to arrive at a better final parameter value than the other two methods. ► The computation time for HNN + GA is higher than for GA and HNN, so a trade-off is necessary. - Abstract: In recent decades the genetic algorithm (GA) and the Hopfield neural network (HNN) have attracted considerable attention for the solution of optimization problems. In this paper, a hybrid optimization method based on the combination of GA and HNN is introduced and applied to the burnable poison placement (BPP) problem to increase the quality of the results. BPP in a nuclear reactor core is a complicated combinatorial problem. The arrangement and worth of the burnable poisons (BPs) have a significant effect on the main control parameters of a nuclear reactor, and improper design and arrangement of the BPs can be dangerous with respect to nuclear reactor safety. In this paper, increasing the BP worth while minimizing the radial power peaking are considered as objective functions. Three optimization algorithms (the genetic algorithm, Hopfield neural network optimization, and a hybrid of the two) are applied to the BPP problem and their efficiencies are compared. The hybrid optimization method gives the best result, finding a better BP arrangement.

  6. Finite element analysis of multi-material models using a balancing domain decomposition method combined with the diagonal scaling preconditioner

    International Nuclear Information System (INIS)

    Ogino, Masao

    2016-01-01

    Actual problems in science and industrial applications are modeled with multiple materials and large-scale unstructured meshes, and the finite element analysis has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative methods for the linear finite element equations suffer from slow convergence or fail to converge. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method and is an efficient approach for parallel finite element analysis. Moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. Some research addresses this issue, but it is not suitable for cases of complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-material problems, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)

  7. Computed tomography angiography and perfusion to assess coronary artery stenosis causing perfusion defects by single photon emission computed tomography

    DEFF Research Database (Denmark)

    Rochitte, Carlos E; George, Richard T; Chen, Marcus Y

    2014-01-01

    AIMS: To evaluate the diagnostic power of integrating the results of computed tomography angiography (CTA) and CT myocardial perfusion (CTP) to identify coronary artery disease (CAD), defined as a flow-limiting coronary artery stenosis causing a perfusion defect by single photon emission computed tomography (SPECT). METHODS AND RESULTS: We conducted a multicentre study to evaluate the accuracy of integrated CTA-CTP for the identification of patients with flow-limiting CAD defined by ≥50% stenosis by invasive coronary angiography (ICA) with a corresponding perfusion deficit on stress single photon emission computed tomography (SPECT/MPI). Sixteen centres enrolled 381 patients who underwent combined CTA-CTP and SPECT/MPI prior to conventional coronary angiography. All four image modalities were analysed in blinded independent core laboratories. The prevalence of obstructive CAD defined by combined ICA ...

  8. An algebraic substructuring using multiple shifts for eigenvalue computations

    International Nuclear Information System (INIS)

    Ko, Jin Hwan; Jung, Sung Nam; Byun, Do Young; Bai, Zhaojun

    2008-01-01

    Algebraic substructuring (AS) is a state-of-the-art method in eigenvalue computations, especially for large-sized problems, but it was originally designed to calculate only the smallest eigenvalues. Recently, an updated version of AS was introduced to calculate interior eigenvalues over a specified range by using a shift concept, referred to as the shifted AS. In this work, we propose a method combining AS and the shifted AS with multiple shifts for solving a considerable number of eigensolutions of a large-sized problem, an emerging computational issue in noise and vibration analysis for vehicle design. In addition, we investigate the accuracy of the shifted AS by presenting an error criterion. The proposed method has been applied to the FE model of an automobile body. The combined method yielded higher efficiency without loss of accuracy in comparison to the original AS.
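
The multiple-shift idea rests on the standard shift-invert identity: eigenvalues of A nearest a shift sigma are the largest-magnitude eigenvalues of (A - sigma*I)^-1. A dense-matrix sketch (illustrative only; AS itself operates on large sparse substructured systems, and the helper name below is invented for this example):

```python
import numpy as np

def eigs_near_shifts(A, shifts, k_per_shift=2):
    """For each shift sigma, recover the k eigenvalues of A closest to it
    from the extremal eigenvalues of (A - sigma*I)^-1 (shift-invert)."""
    found = []
    n = A.shape[0]
    for sigma in shifts:
        M = np.linalg.inv(A - sigma * np.eye(n))
        w = np.linalg.eigvalsh(M)
        # Largest |mu| of M correspond to eigenvalues of A closest to sigma.
        mu = w[np.argsort(-np.abs(w))][:k_per_shift]
        found.extend(sigma + 1.0 / mu)
    return np.unique(np.round(sorted(found), 8))
```

Placing several shifts across a frequency band, as the combined method does, yields the interior eigenvalues of the whole band without computing everything from the bottom of the spectrum.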

  9. Computation of the free energy change associated with one-electron reduction of coenzyme immersed in water: a novel approach within the framework of the quantum mechanical/molecular mechanical method combined with the theory of energy representation.

    Science.gov (United States)

    Takahashi, Hideaki; Ohno, Hajime; Kishi, Ryohei; Nakano, Masayoshi; Matubayasi, Nobuyuki

    2008-11-28

    The isoalloxazine ring (flavin ring) is a part of the coenzyme flavin adenine dinucleotide and acts as an active site in the oxidation of a substrate. We have computed the free energy change Δμ_red associated with one-electron reduction of the flavin ring immersed in water by utilizing the recently developed quantum mechanical/molecular mechanical method combined with the theory of energy representation (QM/MM-ER method). As a novel treatment in implementing the QM/MM-ER method, we have identified the excess charge to be attached to the flavin ring as a solute, while the remaining molecules, i.e., the flavin ring and the surrounding water molecules, are treated as solvent species. The reduction free energy can then be decomposed into the contribution Δμ_red(QM) due to the oxidant described quantum chemically and the free energy Δμ_red(MM) due to the water molecules represented by a classical model. By the sum of these contributions, the total reduction free energy Δμ_red has been obtained as -80.1 kcal/mol. To examine the accuracy and efficiency of this approach, we have also conducted the Δμ_red calculation using the conventional scheme in which Δμ_red is constructed from the solvation free energies of the flavin ring in the oxidized and reduced states. The conventional scheme, implemented with the QM/MM-ER method, gives Δμ_red as -81.0 kcal/mol, in excellent agreement with the value given by the new approach. The present approach is efficient, in particular, for computing the free energy change of a reaction occurring in a protein, since it enables one to circumvent the numerical problem brought about by subtracting the huge solvation free energies of the protein in the two states before and after the reduction.

  10. Computer-aided head film analysis: the University of California San Francisco method.

    Science.gov (United States)

    Baumrind, S; Miller, D M

    1980-07-01

    Computer technology is already assuming an important role in the management of orthodontic practices. The next 10 years are likely to see expansion in computer usage into the areas of diagnosis, treatment planning, and treatment-record keeping. In the areas of diagnosis and treatment planning, one of the first problems to be attacked will be the automation of head film analysis. The problems of constructing computer-aided systems for this purpose are considered herein in the light of the authors' 10 years of experience in developing a similar system for research purposes. The need for building in methods for automatic detection and correction of gross errors is discussed and the authors' method for doing so is presented. The construction of a rudimentary machine-readable data base for research and clinical purposes is described.

  11. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    Science.gov (United States)

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
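
The percentile bootstrap mentioned above can be shown in a few lines (a minimal stdlib sketch; the resample count, seed, and data are arbitrary illustrations): resample the data with replacement, recompute the statistic each time, and take quantiles of the replicates.

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a scalar statistic."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9, 5.1, 4.4, 5.8, 4.7]
lo, hi = bootstrap_ci(sample, statistics.mean)
```

The same skeleton applies to summary statistics of replicated point patterns: only `stat` changes, which is why the method is largely free of distributional assumptions.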

  12. Combining gene prediction methods to improve metagenomic gene annotation

    Directory of Open Access Journals (Sweden)

    Rosen Gail L

    2011-01-01

    Full Text Available Abstract Background Traditional gene annotation methods rely on characteristics that may not be available in short reads generated from next generation technology, resulting in suboptimal performance for metagenomic (environmental) samples. Therefore, in recent years, new programs have been developed that optimize performance on short reads. In this work, we benchmark three metagenomic gene prediction programs and combine their predictions to improve metagenomic read gene annotation. Results We not only analyze the programs' performance at different read-lengths, as in similar studies, but also separate different types of reads, including intra- and intergenic regions, for analysis. The main deficiencies are in the algorithms' ability to predict non-coding regions and gene edges, resulting in more false-positives and false-negatives than desired. In fact, the specificities of the algorithms are notably worse than the sensitivities. By combining the programs' predictions, we show significant improvement in specificity at minimal cost to sensitivity, resulting in a 4% improvement in accuracy for 100 bp reads and a ~1% improvement in accuracy for 200 bp reads and above. To correctly annotate the start and stop of the genes, we find that a consensus of all the predictors performs best for shorter read lengths, while a unanimous agreement is better for longer read lengths, boosting annotation accuracy by 1-8%. We also demonstrate use of the classifier combinations on a real dataset. Conclusions To optimize the performance for both prediction and annotation accuracies, we conclude that the consensus of all methods (or a majority vote) is best for reads 400 bp and shorter, while using the intersection of GeneMark and Orphelia predictions is best for reads 500 bp and longer. We demonstrate that most methods predict over 80% of coding (including partially coding) reads on a real human gut sample sequenced by Illumina technology.
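
The combination rules compared in this study (consensus/majority, unanimous agreement, and the GeneMark-Orphelia intersection) reduce to simple logic per read. A schematic sketch; GeneMark and Orphelia are named in the abstract, while the third predictor key `"mgc"` is a hypothetical stand-in:

```python
def combine_predictions(preds, rule="consensus"):
    """Combine boolean coding/non-coding calls from several gene
    predictors for one read, under one of three combination rules."""
    votes = sum(preds.values())
    if rule == "consensus":                 # majority vote
        return votes * 2 > len(preds)
    if rule == "unanimous":                 # all predictors must agree
        return votes == len(preds)
    if rule == "intersection":              # e.g. GeneMark AND Orphelia
        return preds["genemark"] and preds["orphelia"]
    raise ValueError(rule)

calls = {"genemark": True, "orphelia": False, "mgc": True}
```

Per the study's conclusions, one would select `"consensus"` for reads of 400 bp and shorter and `"intersection"` for 500 bp and longer.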

  13. Assessing different parameters estimation methods of Weibull distribution to compute wind power density

    International Nuclear Information System (INIS)

    Mohammadi, Kasra; Alavi, Omid; Mostafaeipour, Ali; Goudarzi, Navid; Jalilvand, Mahdi

    2016-01-01

    Highlights: • The effectiveness of six numerical methods is evaluated to determine wind power density. • The more appropriate method for computing the daily wind power density is identified. • Four windy stations located in the southern part of Alberta, Canada are investigated. • The more appropriate parameter estimation method was not identical among all examined stations. - Abstract: In this study, the effectiveness of six numerical methods is evaluated to determine the shape (k) and scale (c) parameters of the Weibull distribution function for the purpose of calculating the wind power density. The selected methods are the graphical method (GP), empirical method of Justus (EMJ), empirical method of Lysen (EML), energy pattern factor method (EPF), maximum likelihood method (ML) and modified maximum likelihood method (MML). The purpose of this study is to identify the more appropriate method for computing the wind power density at four stations distributed in the Alberta province of Canada, namely Edmonton City Center Awos, Grande Prairie A, Lethbridge A and Waterton Park Gate. To provide a complete analysis, the evaluations are performed on both daily and monthly scales. The results indicate that the precision of the computed wind power density values changes when different parameter estimation methods are used to determine the k and c parameters. The four methods EMJ, EML, EPF and ML show very favorable efficiency, while the GP method shows weak ability for all stations. However, the more effective method is not the same among stations, owing to differences in wind characteristics.
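
The empirical method of Justus (EMJ) referred to above estimates k and c from the sample mean and standard deviation of the wind speed; the mean power density then follows from the standard Weibull moment formula. A compact sketch with made-up wind speeds (the formulas are the textbook ones, not taken from this particular study):

```python
import math

def weibull_justus(speeds):
    """Empirical method of Justus: k = (std/mean)^-1.086,
    c = mean / Gamma(1 + 1/k)."""
    n = len(speeds)
    mean = sum(speeds) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in speeds) / (n - 1))
    k = (std / mean) ** -1.086
    c = mean / math.gamma(1 + 1 / k)
    return k, c

def wind_power_density(k, c, rho=1.225):
    """Mean wind power density in W/m^2: 0.5 * rho * c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c**3 * math.gamma(1 + 3 / k)

speeds = [5.1, 6.3, 4.8, 7.2, 5.9, 6.8, 4.2, 5.5]  # illustrative data, m/s
k, c = weibull_justus(speeds)
wpd = wind_power_density(k, c)
```

Swapping in another estimator for (k, c), such as maximum likelihood, changes only the first function, which is exactly the comparison the study performs across its six methods.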

  14. Efficient computation of the elastography inverse problem by combining variational mesh adaption and a clustering technique

    International Nuclear Information System (INIS)

    Arnold, Alexander; Bruhns, Otto T; Reichling, Stefan; Mosler, Joern

    2010-01-01

    This paper is concerned with an efficient implementation suitable for the elastography inverse problem. More precisely, the novel algorithm allows us to compute the unknown stiffness distribution in soft tissue from the measured displacement field, while considerably reducing the numerical cost compared to previous approaches. This is realized by combining and further elaborating variational mesh adaption with a clustering technique similar to those known from digital image compression. Within the variational mesh adaption, the underlying finite element discretization is only locally refined if this leads to a considerable improvement of the numerical solution. Additionally, the numerical complexity is reduced by the aforementioned clustering technique, in which the parameters describing the stiffness of the respective soft tissue are sorted into a predefined number of intervals. By doing so, the number of unknowns associated with the elastography inverse problem can be chosen explicitly. A positive side effect of this method is the reduction of artificial noise in the data (smoothing of the solution). The performance and the rate of convergence of the resulting numerical formulation are critically analyzed by means of numerical examples.
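
The clustering idea can be illustrated by quantizing per-element stiffness values into a fixed number of intervals, so the inverse problem only has to identify one representative value per interval rather than one per element. This is a minimal sketch under that interpretation; the interval count and stiffness values are illustrative, and the paper's actual clustering is tied to its finite element formulation:

```python
def cluster_parameters(values, n_intervals):
    """Quantize per-element stiffness values into a fixed number of intervals.

    Each element keeps only an interval index (as in image-compression-style
    clustering), and each interval is represented by the mean of its members.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_intervals or 1.0
    # assign each value to an interval (clamp the top edge into the last bin)
    idx = [min(int((v - lo) / width), n_intervals - 1) for v in values]
    # representative stiffness per interval = mean of assigned values
    reps = []
    for i in range(n_intervals):
        members = [v for v, j in zip(values, idx) if j == i]
        reps.append(sum(members) / len(members) if members
                    else lo + (i + 0.5) * width)
    return idx, reps

# Six hypothetical element stiffness values reduced to 3 unknowns:
idx, reps = cluster_parameters([1.0, 1.1, 5.2, 5.0, 9.8, 1.05], 3)
print(idx)   # [0, 0, 1, 1, 2, 0]
```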

  15. IV international conference on computational methods in marine engineering : selected papers

    CERN Document Server

    Oñate, Eugenio; García-Espinosa, Julio; Kvamsdal, Trond; Bergan, Pål; MARINE 2011

    2013-01-01

    This book contains selected papers from the Fourth International Conference on Computational Methods in Marine Engineering, held at Instituto Superior Técnico, Technical University of Lisbon, Portugal in September 2011.  Nowadays, computational methods are an essential tool of engineering, which includes a major field of interest in marine applications, such as the maritime and offshore industries and engineering challenges related to the marine environment and renewable energies. The 2011 Conference included 8 invited plenary lectures and 86 presentations distributed through 10 thematic sessions that covered many of the most relevant topics of marine engineering today. This book contains 16 selected papers from the Conference that cover “CFD for Offshore Applications”, “Fluid-Structure Interaction”, “Isogeometric Methods for Marine Engineering”, “Marine/Offshore Renewable Energy”, “Maneuvering and Seakeeping”, “Propulsion and Cavitation” and “Ship Hydrodynamics”.  The papers we...

  16. AI/OR computational model for integrating qualitative and quantitative design methods

    Science.gov (United States)

    Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor

    1990-01-01

    A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.

  17. HemeBIND: a novel method for heme binding residue prediction by combining structural and sequence information

    Directory of Open Access Journals (Sweden)

    Hu Jianjun

    2011-05-01

    Full Text Available Abstract Background Accurate prediction of binding residues involved in the interactions between proteins and small ligands is one of the major challenges in structural bioinformatics. Heme is an essential and commonly used ligand that plays critical roles in electron transfer, catalysis, signal transduction and gene expression. Although much effort has been devoted to the development of various generic algorithms for ligand binding site prediction over the last decade, no algorithm has been specifically designed to complement experimental techniques for identification of heme binding residues. Consequently, there is an urgent need to develop a computational method for recognizing these important residues. Results Here we introduce an efficient algorithm, HemeBIND, for predicting heme binding residues by integrating structural and sequence information. We systematically investigated the characteristics of binding interfaces based on a non-redundant dataset of heme-protein complexes. It was found that several sequence and structural attributes such as evolutionary conservation, solvent accessibility, depth and protrusion clearly illustrate the differences between heme binding and non-binding residues. These features can then be used separately or in combination to build structure-based classifiers using a support vector machine (SVM). The results showed that the information contained in these features is largely complementary and their combination achieved the best performance. To further improve the performance, we developed a post-processing procedure to reduce the number of false positives. In addition, we built a sequence-based classifier based on an SVM and sequence profiles as an alternative for when only sequence information is available. Finally, we employed a voting method to combine the outputs of the structure-based and sequence-based classifiers, which demonstrated remarkably better performance than either individual classifier alone.

  18. A combined brain-computer interface based on P300 potentials and motion-onset visual evoked potentials.

    Science.gov (United States)

    Jin, Jing; Allison, Brendan Z; Wang, Xingyu; Neuper, Christa

    2012-04-15

    Brain-computer interfaces (BCIs) allow users to communicate via brain activity alone. Many BCIs rely on the P300 and other event-related potentials (ERPs) that are elicited when target stimuli flash. Although there has been considerable research exploring ways to improve P300 BCIs, surprisingly little work has focused on new ways to change visual stimuli to elicit more recognizable ERPs. In this paper, we introduce a "combined" BCI based on P300 potentials and motion-onset visual evoked potentials (M-VEPs) and compare it with BCIs based on each simple approach (P300 and M-VEP). Offline data suggested that performance would be best in the combined paradigm. Online tests with adaptive BCIs confirmed that our combined approach is practical in an online BCI, and yielded better performance than the other two approaches (P<0.05) without annoying or overburdening the subject. The highest mean classification accuracy (96%) and practical bit rate (26.7 bit/s) were obtained in the combined condition. Copyright © 2012 Elsevier B.V. All rights reserved.
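
Practical bit rate for a BCI is typically computed from the number of classes, the classification accuracy, and the selection rate. The abstract does not state which formula the authors used, but Wolpaw's widely used information-transfer-rate formula is a reasonable sketch of how such a figure is obtained:

```python
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Bits conveyed per selection, per Wolpaw's ITR formula:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

# e.g. a hypothetical 4-class BCI at 96% accuracy:
print(round(wolpaw_bits_per_selection(4, 0.96), 3))
```

Multiplying the per-selection bits by the number of selections per unit time gives the bit rate.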

  19. Immersed boundary method combined with a high order compact scheme on half-staggered meshes

    International Nuclear Information System (INIS)

    Księżyk, M; Tyliszczak, A

    2014-01-01

    This paper presents the results of computations of incompressible flows performed with a high-order compact scheme and the immersed boundary method. The solution algorithm is based on the projection method implemented using the half-staggered grid arrangement, in which the velocity components are stored in the same locations while the pressure nodes are shifted by half a cell size. The time discretization is performed using the predictor-corrector method, in which the forcing terms used in the immersed boundary method act in both steps. The solution algorithm is verified on 2D flow problems (flow in a lid-driven skewed cavity, flow over a backward-facing step) and turns out to be very accurate on computational meshes comparable with those used in classical approaches, i.e. those not based on the immersed boundary method.
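
The role of the forcing term can be sketched with the direct-forcing variant of the immersed boundary method: at grid points flagged as lying inside the body, a force f = (u_body − u*)/Δt drives the predicted velocity u* toward the prescribed boundary velocity. This 1D sketch is illustrative only and is not the paper's actual solver:

```python
def apply_direct_forcing(u_pred, body_mask, u_body, dt):
    """Direct-forcing immersed boundary step (1D sketch).

    At points inside the immersed body, add f = (u_body - u_pred)/dt so the
    corrected velocity matches the prescribed body velocity; fluid points
    are left untouched.
    """
    u_corr, forcing = [], []
    for u, inside in zip(u_pred, body_mask):
        f = (u_body - u) / dt if inside else 0.0
        forcing.append(f)
        u_corr.append(u + dt * f)
    return u_corr, forcing

# Predicted velocities on a 1D grid; two nodes lie inside a stationary body:
u_pred = [0.9, 1.1, 0.2, 0.0, 1.0]
mask = [False, False, True, True, False]
u_corr, f = apply_direct_forcing(u_pred, mask, u_body=0.0, dt=0.01)
# body nodes are forced to the (zero) body velocity; fluid nodes unchanged
```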

  20. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.