WorldWideScience

Sample records for computational method based

  1. Computer Animation Based on Particle Methods

    Directory of Open Access Journals (Sweden)

    Rafal Wcislo

    1999-01-01

    Full Text Available The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main assumption of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of the solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions and interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and the cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and considers problems of load balancing, collision detection, process synchronization and distributed control of the animation.
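
    The record above describes simulating elastic solids as sets of interacting particles. As a purely illustrative sketch (not the scheme used in the paper), the Python fragment below advances a few particles connected by linear springs under gravity with explicit Euler integration; the particle count, stiffness, damping and time step are assumed values chosen for demonstration.

```python
import numpy as np

# Illustrative sketch only: a minimal explicit-Euler mass-spring update in the spirit
# of particle-method animation of elastic bodies (not the paper's exact scheme).
# All parameter values below are arbitrary assumptions.

n = 4                                   # number of particles
pos = np.random.rand(n, 2)              # particle positions
vel = np.zeros((n, 2))                  # particle velocities
mass = 1.0
k, rest, damping = 50.0, 0.3, 0.98      # spring stiffness, rest length, velocity damping
gravity = np.array([0.0, -9.81])
springs = [(i, j) for i in range(n) for j in range(i + 1, n)]  # fully connected body

dt = 1e-3
for step in range(1000):
    force = np.tile(gravity * mass, (n, 1))
    for i, j in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d) + 1e-12
        f = k * (length - rest) * d / length   # linear spring force along the link
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force / mass)
    pos = pos + dt * vel
```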

  2. Big data mining analysis method based on cloud computing

    Science.gov (United States)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the very large scale, discrete, and semi-structured or unstructured nature of big data has gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical approach to massive data mining, which can effectively solve the problem that traditional data mining methods cannot adapt to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology to realize data mining, designs an association rule mining algorithm based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
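
    The record above mentions an association rule mining algorithm built on the MapReduce parallel processing architecture. The sketch below only illustrates the core map/reduce counting step for candidate item pairs (an Apriori-style support count); the toy transactions and the in-process "reduce" are assumptions, since on a real cloud platform the two phases would run as distributed MapReduce jobs.

```python
from collections import Counter
from itertools import combinations

# Illustrative sketch only: counting candidate item pairs in a map/reduce style,
# the core step of parallel association-rule (Apriori-like) mining.
# The transaction data and single-process "reduce" are assumptions for demonstration.

transactions = [
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "bread"},
]

def map_phase(transaction):
    # emit (pair, 1) for every 2-item combination in a transaction
    return [(pair, 1) for pair in combinations(sorted(transaction), 2)]

def reduce_phase(emitted):
    counts = Counter()
    for pair, one in emitted:
        counts[pair] += one
    return counts

emitted = [kv for t in transactions for kv in map_phase(t)]
support_counts = reduce_phase(emitted)
min_support = 2
frequent_pairs = {p: c for p, c in support_counts.items() if c >= min_support}
print(frequent_pairs)
```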

  3. USING COMPUTER-BASED TESTING AS ALTERNATIVE ASSESSMENT METHOD OF STUDENT LEARNING IN DISTANCE EDUCATION

    Directory of Open Access Journals (Sweden)

    Amalia SAPRIATI

    2010-04-01

    Full Text Available This paper addresses the use of computer-based testing in distance education, based on the experience of Universitas Terbuka (UT), Indonesia. Computer-based testing has been developed at UT to meet the specific needs of distance students, namely: students' inability to sit for the scheduled test, conflicting test schedules, and students' wish to take examinations flexibly in order to improve their grades. In 2004, UT initiated a pilot project to develop a system and program for the computer-based testing method. In 2005 and 2006, tryouts of the computer-based testing method were conducted in 7 Regional Offices that were considered to have sufficient supporting resources. The results of the tryouts revealed that students were enthusiastic about taking computer-based tests and expected that the test method would be offered by UT as an alternative to the traditional paper-and-pencil test method. UT then implemented the computer-based testing method in 6 and 12 Regional Offices in 2007 and 2008, respectively. The computer-based testing was administered in the city of the designated Regional Office and was supervised by Regional Office staff. The development of the computer-based testing started with tests using computers in a networked configuration. The system has been continually improved, and it currently uses devices linked to the internet or the World Wide Web. The construction of a test involves the generation and selection of test items from the item bank collection of the UT Examination Center; the combination of the selected items comprises the test specification. Currently UT offers 250 courses involving the use of computer-based testing. Students expect that more courses will be offered with computer-based testing in Regional Offices within easy access.

  4. GRAPH-BASED POST INCIDENT INTERNAL AUDIT METHOD OF COMPUTER EQUIPMENT

    Directory of Open Access Journals (Sweden)

    I. S. Pantiukhin

    2016-05-01

    Full Text Available A graph-based post-incident internal audit method for computer equipment is proposed. The essence of the proposed solution consists in establishing relationships among hard disk dumps (images), RAM dumps and network dumps. The method is intended for describing information security incident properties during the internal post-incident audit of computer equipment. The process of receiving and forming hard disk dumps takes place at the first step. It is followed by separation of these dumps into a set of components. The set of components includes a large set of attributes that forms the basis for the construction of the graph. The separated data are recorded into a non-relational database management system (NoSQL) that is adapted for graph storage, fast access and processing. A dump-linking method is applied at the final step. The presented method enables a human expert in information security or computer forensics to carry out a more precise and informative internal audit of computer equipment. The proposed method reduces the time spent on internal audit of computer equipment and increases the accuracy and informativeness of such an audit. The method has development potential and can be applied along with other components in the tasks of user identification and computer forensics.
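
    To illustrate the graph representation described above, the following hedged sketch builds a small in-memory graph of artefacts extracted from disk, RAM and network dumps and links them with typed relationships; the node names, attributes and the use of networkx (instead of the NoSQL graph store mentioned in the record) are assumptions for demonstration only.

```python
import networkx as nx

# Illustrative sketch only: artefacts from disk/RAM/network dumps as graph nodes,
# with their discovered relationships as edges. Names and attributes are invented;
# the described method stores the graph in a NoSQL database rather than in memory.

g = nx.Graph()
g.add_node("disk:file:report.docx", source="disk_image", mtime="2016-03-01T10:12")
g.add_node("ram:process:winword.exe", source="ram_dump", pid=1234)
g.add_node("net:conn:10.0.0.5:443", source="network_dump", bytes_out=20480)

# relationships discovered while linking the dumps
g.add_edge("ram:process:winword.exe", "disk:file:report.docx", relation="opened")
g.add_edge("ram:process:winword.exe", "net:conn:10.0.0.5:443", relation="connected")

# simple query: everything reachable from a suspicious process
print(list(nx.node_connected_component(g, "ram:process:winword.exe")))
```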

  5. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  6. Computer-Based Job and Occupational Data Collection Methods: Feasibility Study

    National Research Council Canada - National Science Library

    Mitchell, Judith I

    1998-01-01

    .... The feasibility study was conducted to assess the operational and logistical problems involved with the development, implementation, and evaluation of computer-based job and occupational data collection methods...

  7. Computer-Based Methods for Collecting Peer Nomination Data: Utility, Practice, and Empirical Support.

    Science.gov (United States)

    van den Berg, Yvonne H M; Gommans, Rob

    2017-09-01

    New technologies have led to several major advances in psychological research over the past few decades. Peer nomination research is no exception. Thanks to these technological innovations, computerized data collection is becoming more common in peer nomination research. However, computer-based assessment is more than simply programming the questionnaire and asking respondents to fill it in on computers. In this chapter the advantages and challenges of computer-based assessments are discussed. In addition, a list of practical recommendations and considerations is provided to inform researchers on how computer-based methods can be applied to their own research. Although the focus is on the collection of peer nomination data in particular, many of the requirements, considerations, and implications are also relevant for those who consider the use of other sociometric assessment methods (e.g., paired comparisons, peer ratings, peer rankings) or computer-based assessments in general. © 2017 Wiley Periodicals, Inc.

  8. Paper-Based and Computer-Based Concept Mappings: The Effects on Computer Achievement, Computer Anxiety and Computer Attitude

    Science.gov (United States)

    Erdogan, Yavuz

    2009-01-01

    The purpose of this paper is to compare the effects of paper-based and computer-based concept mappings on computer hardware achievement, computer anxiety and computer attitude of eighth grade secondary school students. The students were randomly allocated to three groups and were given instruction on computer hardware. The teaching methods used…

  9. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    Full Text Available This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method for using either symbolic or on-line numerical computations. Based on the decomposition approach and cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that: it can be used for both symbolic and on-line numeric computation purposes, and it can also be applied to biped systems, as well as some simple closed-chain robot systems.

  10. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last 2 decades most basic algorithms have not changed, but what has changed is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  11. A method of non-contact reading code based on computer vision

    Science.gov (United States)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    With the purpose of guaranteeing the security of computer information exchange between internal and external networks (trusted and untrusted networks), a non-contact code reading method based on machine vision is proposed, which differs from existing network physical isolation methods. Using computer monitors, a camera and other equipment, the information to be exchanged is processed through image coding, generation of a standard image, display and capture of the actual image, calculation of the homography matrix, and image distortion correction and decoding with calibration. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data transfer speed of 24 kb/s is achieved. The experiments show that the algorithm has the characteristics of high security, high speed and little loss of information, which can meet the daily needs of confidentiality departments to update data effectively and reliably. It solves the difficulty of computer information exchange between secret and non-secret networks, and has distinctive originality, practicability and practical research value.
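
    One step of the pipeline above is calculating a homography matrix and correcting the distortion of the image captured from the monitor. The fragment below is an illustrative sketch of that step using OpenCV; the synthetic captured frame and the four corner correspondences are assumptions, since in the described system they would come from the calibration procedure.

```python
import cv2
import numpy as np

# Illustrative sketch only: recovering the displayed image from a camera frame by
# estimating a homography from four corner correspondences and warping the frame back
# onto the display's pixel grid. The synthetic frame and the corner coordinates are
# assumptions; in the described system they would come from the calibration step.

captured = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)  # stand-in camera frame
display_w, display_h = 1920, 1080

# corners of the displayed image as detected in the captured frame (assumed values)
src = np.float32([[102, 85], [1790, 120], [1755, 990], [130, 960]])
dst = np.float32([[0, 0], [display_w, 0], [display_w, display_h], [0, display_h]])

H, _ = cv2.findHomography(src, dst)
rectified = cv2.warpPerspective(captured, H, (display_w, display_h))
cv2.imwrite("rectified.png", rectified)   # the embedded code would then be decoded from this image
```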

  12. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    Science.gov (United States)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer Generated Holography (CGH) faces the great challenge that a high space-bandwidth product (SBP) is required in real-time holographic video display systems. The paper is based on the point-cloud method and takes advantage of the propagation reversibility of Fresnel diffraction along the propagation direction and of the spatial symmetry of the fringe pattern of a point source, known as the Gabor zone plate, which can therefore be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed. First, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored. Secondly, the Fresnel diffraction fringe pattern at the dummy plane is obtained. Finally, the field is propagated by Fresnel diffraction from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal On Silicon (LCOS) were set up to demonstrate the validity of the proposed method. While ensuring the quality of the 3D reconstruction, the proposed method can be applied to shorten the computation time and improve computational efficiency.
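
    The key object in the record above is the Gabor zone-plate fringe pattern of a point source, whose radial symmetry allows pre-computation and reuse as a look-up table. The sketch below only computes such a fringe pattern with the paraxial Fresnel phase and extracts one quadrant; the wavelength, distance, pixel pitch and resolution are assumed values, and this is not the authors' N-LUT implementation.

```python
import numpy as np

# Illustrative sketch only: the Fresnel (Gabor) zone-plate fringe pattern of a single
# on-axis point source on the hologram plane. Because the pattern is radially symmetric,
# one pre-computed quadrant (or a 1-D radial profile) can serve as a look-up table for
# many object points, which is the idea exploited by N-LUT-style methods.
# Parameter values are assumptions for demonstration.

wavelength = 532e-9          # m
z = 0.2                      # distance from point source to hologram plane (m)
pitch = 8e-6                 # hologram pixel pitch (m)
n = 1024                     # hologram resolution (n x n)

k = 2 * np.pi / wavelength
x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)

# paraxial Fresnel phase of the point source
phase = k * (X**2 + Y**2) / (2 * z)
fringe = np.cos(phase)       # real-valued zone-plate fringe (interference with a plane reference)

# spatial symmetry: the pattern depends only on the radius, so one quadrant suffices
quadrant = fringe[n // 2:, n // 2:]
```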

  13. A Computer Game-Based Method for Studying Bullying and Cyberbullying

    Science.gov (United States)

    Mancilla-Caceres, Juan F.; Espelage, Dorothy; Amir, Eyal

    2015-01-01

    Even though previous studies have addressed the relation between face-to-face bullying and cyberbullying, none have studied both phenomena simultaneously. In this article, we present a computer game-based method to study both types of peer aggression among youth. Study participants included fifth graders (N = 93) in two U.S. Midwestern middle…

  14. Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

    Science.gov (United States)

    Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika

    2017-01-01

    Computer work is associated with musculoskeletal disorders (MSDs), and several methods have been developed to assess computer work risk factors related to MSDs. This review aims to give an overview of the techniques currently available for pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work. We searched an electronic database for materials from 1992 until 2015. The selected methods were focused on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders. This review was developed to assess the risk factors covered, reliability and validity of pen-and-paper observational methods associated with computer work. Two evaluators independently carried out this review. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors involved in the current pen-and-paper-based observational tools were postures, office components, force and repetition. Of the seven methods, only five had been tested for reliability; they were proven to be reliable and were rated as moderate to good. For validity testing, only four of the seven methods were tested and the results were moderate. Many observational tools already exist, but no single tool appears to cover all of the risk factors, including working posture, office components, force, repetition and office environment, at office workstations and computer work. Although the most important factor in developing a tool is proper validation of the exposure assessment technique, not all existing observational methods have been tested for reliability and validity. Furthermore, this review could provide researchers with ways to improve pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

  15. A Decentralized Eigenvalue Computation Method for Spectrum Sensing Based on Average Consensus

    Science.gov (United States)

    Mohammadi, Jafar; Limmer, Steffen; Stańczak, Sławomir

    2016-07-01

    This paper considers eigenvalue estimation for the decentralized inference problem in spectrum sensing. We propose a decentralized eigenvalue computation algorithm based on the power method, referred to as the generalized power method (GPM); it is capable of estimating the eigenvalues of a given covariance matrix under certain conditions. Furthermore, we have developed a decentralized implementation of GPM by splitting the iterative operations into local and global computation tasks. The global tasks require data exchange to be performed among the nodes. For this task, we apply an average consensus algorithm to efficiently perform the global computations. As a special case, we consider a structured graph that is a tree with clusters of nodes at its leaves. For an accelerated distributed implementation, we propose to use computation over the multiple access channel (CoMAC) as a building block of the algorithm. Numerical simulations are provided to illustrate the performance of the two algorithms.
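
    As a rough illustration of the idea above, the following sketch runs a power iteration in which the global matrix-vector product is obtained by averaging local contributions from the nodes; the exact mean used here merely stands in for the average consensus (or CoMAC) step of the actual algorithm, and the local covariance estimates are synthetic assumptions.

```python
import numpy as np

# Illustrative sketch only: a power iteration whose global matrix-vector product is
# assembled by averaging local contributions, mimicking an average-consensus exchange.
# A real deployment would run an iterative gossip/consensus protocol (or CoMAC) instead
# of the exact mean used here.

rng = np.random.default_rng(0)
num_nodes, dim = 5, 4

# each node observes samples and forms a local covariance estimate
local_cov = []
for _ in range(num_nodes):
    samples = rng.standard_normal((100, dim)) @ np.diag([3.0, 1.0, 0.5, 0.1])
    local_cov.append(samples.T @ samples / samples.shape[0])

v = rng.standard_normal(dim)
v /= np.linalg.norm(v)

for _ in range(50):
    # local task: each node multiplies its own covariance by the current vector
    local_products = [C @ v for C in local_cov]
    # global task: average consensus over the nodes' products (emulated exactly here)
    w = num_nodes * np.mean(local_products, axis=0)   # = sum of local products
    v = w / np.linalg.norm(w)

dominant_eigenvalue = v @ (sum(local_cov) @ v)
print(dominant_eigenvalue)
```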

  16. Computer game-based and traditional learning method: a comparison regarding students' knowledge retention.

    Science.gov (United States)

    Rondon, Silmara; Sassi, Fernanda Chiarion; Furquim de Andrade, Claudia Regina

    2013-02-25

    Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Students were randomized to participate in one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for Anatomy questions and for Physiology questions. Students who received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test when considering the Anatomy and Physiology questions. The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective for improving students' short- and long-term knowledge retention.

  17. Curvature computation in volume-of-fluid method based on point-cloud sampling

    Science.gov (United States)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

    This work proposes a novel approach to compute interface curvature in multiphase flow simulation based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may degrade the interfacial tension force estimates, often resulting in inaccurate results for interface-tension-dominated flows. Many techniques have been presented over recent years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® by extending its standard VOF implementation, the interFoam solver.
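
    As a minimal illustration of computing curvature geometrically from a cloud of interface points, the sketch below fits a circle to a few neighbouring 2-D points by an algebraic (Kasa) least-squares fit and takes the reciprocal of the fitted radius; the sample points are invented, and the authors' 3-D implementation in OpenFOAM is considerably more involved.

```python
import numpy as np

# Illustrative sketch only: estimating the local curvature at one interface point from
# a small cloud of neighbouring points via an algebraic (Kasa) circle fit in 2-D.
# The neighbour coordinates are invented; in the described VOF approach such points
# sample the reconstructed interface around the point of interest.

neighbours = np.array([
    [1.00, 0.00], [0.95, 0.31], [0.81, 0.59],
    [0.59, 0.81], [0.31, 0.95], [0.00, 1.00],
])  # points lying (approximately) on a unit circle

x, y = neighbours[:, 0], neighbours[:, 1]
# fit x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense
A = np.column_stack([x, y, np.ones_like(x)])
rhs = -(x**2 + y**2)
(a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)

radius = np.sqrt(a**2 / 4 + b**2 / 4 - c)
curvature = 1.0 / radius
print(curvature)   # close to 1.0 for a unit circle
```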

  18. Computer game-based and traditional learning method: a comparison regarding students’ knowledge retention

    Directory of Open Access Journals (Sweden)

    Rondon Silmara

    2013-02-01

    Full Text Available Abstract Background Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Methods Students were randomized to participate in one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for Anatomy questions and for Physiology questions. Results Students who received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students who received the traditional lecture performed better in both the post-test and the long-term post-test when considering the Anatomy and Physiology questions. Conclusions The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective for improving students' short- and long-term knowledge retention.

  19. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    Full Text Available The process of drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  20. Distributed-Lagrange-Multiplier-based computational method for particulate flow with collisions

    Science.gov (United States)

    Ardekani, Arezoo; Rangel, Roger

    2006-11-01

    A Distributed-Lagrange-Multiplier-based computational method is developed for colliding particles in a solid-fluid system. A numerical simulation is conducted in two dimensions using the finite volume method. The entire domain is treated as a fluid, but the fluid in the particle domains satisfies a rigidity constraint. We present an efficient method for predicting the collision between particles. In earlier methods, a repulsive force was applied to the particles when their distance was less than a critical value. In this method, an impulsive force is computed. During the frictionless collision process between two particles, linear momentum is conserved while the tangential forces are zero. Thus, instead of satisfying a condition of rigid body motion for each particle separately, as done when particles are not in contact, both particles are rigidified together along their line of centers. Particles separate from each other when the impulsive force is less than zero, and after this time a rigidity constraint is satisfied for each particle separately. A grid independence study is performed to ensure the accuracy of the numerical simulation. A comparison between this method and previous collision strategies is presented and discussed.
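
    The collision treatment described above replaces a repulsive force with an impulse applied along the line of centres. The sketch below shows a frictionless impulsive update for two particles that drives their relative normal velocity to zero while conserving linear momentum; the masses, positions and velocities are assumed values, and this is not the paper's coupled fluid-particle implementation.

```python
import numpy as np

# Illustrative sketch only: an impulsive, frictionless particle-particle collision in
# which the two particles are momentarily rigidified along their line of centres
# (their relative normal velocity is driven to zero) while linear momentum is conserved.
# Masses, positions, and velocities are assumed values.

m1, m2 = 1.0, 2.0
x1, x2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
v1, v2 = np.array([1.0, 0.2]), np.array([-0.5, 0.1])

n = (x2 - x1) / np.linalg.norm(x2 - x1)        # line of centres (unit normal)
v_rel_n = np.dot(v1 - v2, n)                   # normal approach speed

if v_rel_n > 0:                                # particles are approaching
    reduced_mass = m1 * m2 / (m1 + m2)
    J = reduced_mass * v_rel_n                 # impulse magnitude along n
    v1 = v1 - (J / m1) * n
    v2 = v2 + (J / m2) * n

# after the update the normal relative velocity is zero and momentum is unchanged
print(np.dot(v1 - v2, n), m1 * v1 + m2 * v2)
```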

  1. Offloading Method for Efficient Use of Local Computational Resources in Mobile Location-Based Services Using Clouds

    Directory of Open Access Journals (Sweden)

    Yunsik Son

    2017-01-01

    Full Text Available With the development of mobile computing, location-based services (LBSs) have been developed to provide services based on location information through communication networks or the global positioning system. In recent years, LBSs have evolved into smart LBSs, which provide many services using only location information. These include basic services such as traffic, logistics, and entertainment services. However, a smart LBS may require relatively complicated operations, which may not be performed effectively by the mobile computing system. To overcome this problem, a computation offloading technique can be used to perform certain tasks from mobile devices in cloud and fog environments. Furthermore, mobile platforms exist that provide smart LBSs. The smart cross-platform is a solution based on a virtual machine (VM) that enables compatibility of content in various mobile and smart device environments. However, owing to the nature of the VM-based execution method, the execution performance is degraded compared to that of the native execution method. In this paper, we introduce a computation offloading technique that utilizes fog computing to improve the performance of VMs running on mobile devices. We applied the proposed method to smart devices with a smart VM (SVM) and an HTML5 SVM to compare their performances.

  2. Analysis of Protein by Spectrophotometric and Computer Colour Based Intensity Method from Stem of Pea (Pisum sativum) at Different Stages

    Directory of Open Access Journals (Sweden)

    Afsheen Mushtaque Shah

    2010-12-01

    Full Text Available In this study, proteins from pea plant stems were analyzed at three different growth stages by spectrophotometric quantitative methods, i.e. the Lowry and Bradford methods, and by a computer colour-intensity-based method. Though spectrophotometric methods are regarded as classical methods, we report an alternative computer-based method which gave comparable results. Computer software was developed for the protein analysis, which is easier and saves time and money compared to the classical methods.

  3. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  4. A cognition-based method to ease the computational load for an extended Kalman filter.

    Science.gov (United States)

    Li, Yanpeng; Li, Xiang; Deng, Bin; Wang, Hongqiang; Qin, Yuliang

    2014-12-03

    The extended Kalman filter (EKF) is the nonlinear extension of the Kalman filter (KF). It is a useful parameter estimation method when the observation model and/or the state transition model is not a linear function. However, the computational requirements of the EKF are a burden for the system. With the help of a cognition-based design and the Taylor expansion method, a novel algorithm is proposed to ease the computational load of the EKF in azimuth prediction and localization under a nonlinear observation model. Where there are nonlinear functions and matrix inversions, the method uses only the major components of the Taylor expansion (selected according to the current performance and the performance requirements). As a result, the computational load is greatly lowered while the performance is ensured. Simulation results show that the proposed measure delivers filtering output with a precision similar to that of the regular EKF, while the computational load is substantially lowered.
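
    For reference, the following sketch shows one full predict/update cycle of a standard EKF with a range/azimuth measurement model; it does not include the cognition-based truncation of the Taylor expansion proposed in the record, and all noise levels, the motion model and the measurement are assumptions.

```python
import numpy as np

# Illustrative sketch only: one predict/update cycle of a standard extended Kalman
# filter for a range-bearing (azimuth) observation of a constant-velocity target.
# All noise levels and the initial state are assumptions.

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)     # constant-velocity motion model
Q = 0.01 * np.eye(4)                           # process noise
R = np.diag([1.0, np.radians(1.0) ** 2])       # range / azimuth measurement noise

x = np.array([100.0, 50.0, -1.0, 0.5])         # state: [px, py, vx, vy]
P = 10.0 * np.eye(4)

def h(state):
    px, py = state[0], state[1]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])   # range, azimuth

def H_jacobian(state):
    px, py = state[0], state[1]
    r2 = px**2 + py**2
    r = np.sqrt(r2)
    return np.array([[ px / r,  py / r, 0, 0],
                     [-py / r2, px / r2, 0, 0]])

z = np.array([111.0, np.arctan2(50.0, 100.0) + 0.01])   # a noisy measurement (assumed)

# predict
x = F @ x
P = F @ P @ F.T + Q
# update
H = H_jacobian(x)
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - h(x))
P = (np.eye(4) - K @ H) @ P
print(x)
```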

  5. Development of an Evaluation Method for the Design Complexity of Computer-Based Displays

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Kang, Hyun Gook; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    The importance of the design of human machine interfaces (HMIs) for human performance and for the safety of process industries has been recognized for many decades. In particular, in the case of nuclear power plants (NPPs), HMIs have significant implications for safety because poor HMIs can impair the decision making ability of human operators. In order to support and increase the decision making ability of human operators, advanced HMIs based on up-to-date computer technology are provided. Human operators in an advanced main control room (MCR) acquire the information required for the operation of NPPs through video display units (VDUs) and a large display panel (LDP). These computer-based displays contain a huge amount of information and present it in a variety of formats compared to those of a conventional MCR. For example, these displays contain more display elements such as abbreviations, labels, icons, symbols, coding, etc. As computer-based displays contain more information, the complexity of advanced displays becomes greater due to the lower distinctiveness of each display element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. This study covers the early phase in the development of an evaluation method for the design complexity of computer-based displays. To this end, a series of existing studies were reviewed to suggest an appropriate concept that is serviceable for unravelling this problem.

  6. Equation of teachers primary school course computer with learning method based on imaging

    Directory of Open Access Journals (Sweden)

    Елена Сергеевна Пучкова

    2011-03-01

    Full Text Available The paper considers the possibility of training future teachers, in a course on computer-based teaching methods, through the creation of visual images and operations on them. Examples of practice-oriented assignments that form professional qualities are given, based on the explicit and implicit use of a visual image, whose solution relies on the cognitive function of visualisation.

  7. Work in process level definition: a method based on computer simulation and electre tri

    Directory of Open Access Journals (Sweden)

    Isaac Pergher

    2014-09-01

    Full Text Available This paper proposes a method for defining the levels of work in process (WIP) in production environments managed by constant work in process (CONWIP) policies. The proposed method combines the approaches of computer simulation and ELECTRE TRI to support estimation of the adequate level of WIP, and is presented in eighteen steps. The paper also presents an application example, performed at a metalworking company. The research method is based on computer simulation, supported by quantitative data analysis. The main contribution of the paper is its provision of a structured way to define inventories according to demand. With this method, the authors hope to contribute to the establishment of better capacity plans in production environments.

  8. The computer coordination method and research of inland river traffic based on ship database

    Science.gov (United States)

    Liu, Shanshan; Li, Gen

    2018-04-01

    A computer-coordinated management method for inland river ship traffic is proposed in this paper. The inland ships' positions, speeds and other navigation information are obtained from VTS; static and dynamic ship databases are built; and a program for computer-coordinated management of inland river traffic is written in VB. The meeting states of ships are automatically simulated and calculated, and long-distance collision avoidance information is provided to ships, so that long-distance collision avoidance is realized. The results show that ships avoid or reduce meetings, and that the method can effectively control collision avoidance of ships at the macro level.

  9. Computing eigenvalue sensitivity coefficients to nuclear data based on the CLUTCH method with RMC code

    International Nuclear Information System (INIS)

    Qiu, Yishu; She, Ding; Tang, Xiao; Wang, Kan; Liang, Jingang

    2016-01-01

    Highlights: • A new algorithm is proposed to reduce memory consumption for sensitivity analysis. • The fission matrix method is used to generate adjoint fission source distributions. • Sensitivity analysis is performed on a detailed 3D full-core benchmark with RMC. - Abstract: Recently, there is a need to develop advanced methods of computing eigenvalue sensitivity coefficients to nuclear data in continuous-energy Monte Carlo codes. One of these methods is the iterated fission probability (IFP) method, which is adopted by most Monte Carlo codes having the capability of computing sensitivity coefficients, including the Reactor Monte Carlo code RMC. Though it is theoretically accurate, the IFP method faces the challenge of huge memory consumption. Therefore, it may sometimes produce poor sensitivity coefficients since the number of particles in each active cycle is not sufficient due to the limitation of computer memory capacity. In this work, two algorithms of the Contribution-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) method, namely the collision-event-based algorithm (C-CLUTCH), which is also implemented in SCALE, and the fission-event-based algorithm (F-CLUTCH), which is put forward in this work, are investigated and implemented in RMC to reduce the memory requirements for computing eigenvalue sensitivity coefficients. While the C-CLUTCH algorithm requires storing the relevant reaction rates of every collision, the F-CLUTCH algorithm only stores the relevant reaction rates of every fission point. In addition, the fission matrix method is put forward to generate the adjoint fission source distribution for the CLUTCH method to compute sensitivity coefficients. These newly proposed approaches implemented in the RMC code are verified by a SF96 lattice model and the MIT BEAVRS benchmark problem. The numerical results indicate the accuracy of the F-CLUTCH algorithm is the same as the C

  10. Three-phase short circuit calculation method based on pre-computed surface for doubly fed induction generator

    Science.gov (United States)

    Ma, J.; Liu, Q.

    2018-02-01

    This paper presents an improved short circuit calculation method, based on a pre-computed surface, to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under grid fault. However, existing methods have difficulty calculating the short circuit current of DFIGs in engineering practice due to its complexity. A short circuit calculation method based on a pre-computed surface is proposed by developing the surface of short circuit current as a function of the calculating impedance and the open circuit voltage. The short circuit currents are derived by taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of short circuit current at different times are established, and the procedure of DFIG short circuit calculation considering LVRT is designed. The correctness of the proposed method is verified by simulation.
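
    The essence of the method above is replacing an online DFIG short-circuit computation with interpolation on a surface pre-computed over the calculating impedance and open-circuit voltage. The sketch below only illustrates that table-lookup step with an invented placeholder surface; real surfaces would be generated offline from detailed LVRT simulations, one per time point of interest.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative sketch only: looking up a short-circuit current from a pre-computed
# surface defined over (calculating impedance, open-circuit voltage). The tabulated
# values below are invented placeholders, not real DFIG data.

impedance = np.linspace(0.05, 1.0, 20)          # p.u. (assumed grid)
voltage = np.linspace(0.1, 1.0, 10)             # p.u. (assumed grid)
Z, V = np.meshgrid(impedance, voltage, indexing="ij")
current_surface = V / Z                          # placeholder surface

lookup = RegularGridInterpolator((impedance, voltage), current_surface)

# query the surface for one fault condition
I_sc = lookup([[0.32, 0.85]])[0]
print(I_sc)
```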

  11. Methods for computing color anaglyphs

    Science.gov (United States)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIEL*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
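
    As a hedged illustration of solving a per-pixel nonlinear least-squares problem for an anaglyph colour, the sketch below minimizes CIELAB distances between what each eye would see through its filter and the corresponding target pixel. The 3x3 filter matrices and target colours are assumptions standing in for the measured display spectra and filter transmission functions the method actually requires.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch only: choosing one anaglyph pixel colour by nonlinear least squares
# in CIELAB, given (assumed) 3x3 matrices that approximate what each eye sees through
# its filter. Real implementations work from measured spectra, not these matrices.

M_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])      # linear RGB -> XYZ (sRGB primaries, D65)
WHITE = np.array([0.95047, 1.0, 1.08883])

def lab(rgb_linear):
    xyz = M_XYZ @ np.clip(rgb_linear, 0, 1) / WHITE
    d = 6 / 29
    f = np.where(xyz > d**3, np.cbrt(xyz), xyz / (3 * d**2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

# assumed filter models: what reaches the left (red filter) and right (cyan filter) eye
LEFT_FILTER = np.diag([0.9, 0.05, 0.05])
RIGHT_FILTER = np.diag([0.05, 0.85, 0.85])

left_target = np.array([0.8, 0.3, 0.2])           # left-image pixel (linear RGB, assumed)
right_target = np.array([0.7, 0.4, 0.3])          # right-image pixel

def residuals(rgb):
    return np.concatenate([
        lab(LEFT_FILTER @ rgb) - lab(LEFT_FILTER @ left_target),
        lab(RIGHT_FILTER @ rgb) - lab(RIGHT_FILTER @ right_target),
    ])

result = least_squares(residuals, x0=np.full(3, 0.5), bounds=(0, 1))
print(result.x)    # anaglyph pixel colour in linear RGB
```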

  12. System and method for controlling power consumption in a computer system based on user satisfaction

    Science.gov (United States)

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.

  13. Classical versus Computer Algebra Methods in Elementary Geometry

    Science.gov (United States)

    Pech, Pavel

    2005-01-01

    Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…

  14. Computer based methods for measurement of joint space width: update of an ongoing OMERACT project

    NARCIS (Netherlands)

    Sharp, John T.; Angwin, Jane; Boers, Maarten; Duryea, Jeff; von Ingersleben, Gabriele; Hall, James R.; Kauffman, Joost A.; Landewé, Robert; Langs, Georg; Lukas, Cédric; Maillefert, Jean-Francis; Bernelot Moens, Hein J.; Peloschek, Philipp; Strand, Vibeke; van der Heijde, Désirée

    2007-01-01

    Computer-based methods of measuring joint space width (JSW) could potentially have advantages over scoring joint space narrowing, with regard to increased standardization, sensitivity, and reproducibility. In an early exercise, 4 different methods showed good agreement on measured change in JSW over

  15. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle. The water displacement method is inaccurate and considered a destructive method. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From the silhouettes of the object, the Monte Carlo method was performed to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision for volume measurement.
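
    The following sketch illustrates only the Monte Carlo volume-estimation step: random points are drawn in a bounding box and the fraction falling inside the object scales the box volume. The analytic ellipsoid used as the "inside" test is an assumption that stands in for the silhouette-consistency test over the five camera views described in the record.

```python
import numpy as np

# Illustrative sketch only: Monte Carlo volume estimation. Random points are sampled in
# a bounding box and the fraction that falls inside the object approximates its volume.
# In the described method the "inside" test checks that a point's projection lies inside
# the object silhouette in every camera view; a simple analytic ellipsoid stands in for
# that test here so the sketch is self-contained.

rng = np.random.default_rng(1)
a, b, c = 3.0, 2.0, 1.5                       # ellipsoid semi-axes (cm), assumed object

def inside(points):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return (x / a)**2 + (y / b)**2 + (z / c)**2 <= 1.0

box_lo = np.array([-a, -b, -c])
box_hi = np.array([a, b, c])
box_volume = np.prod(box_hi - box_lo)

n_samples = 200_000
points = rng.uniform(box_lo, box_hi, size=(n_samples, 3))
volume = box_volume * inside(points).mean()

print(volume, 4 / 3 * np.pi * a * b * c)      # Monte Carlo estimate vs exact value
```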

  16. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the fourth issue, showing an overview of scientific computational methods with the introduction of continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed based on processes such as the binary collision approximation, molecular dynamics, the kinematic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  17. A blended pressure/density based method for the computation of incompressible and compressible flows

    International Nuclear Information System (INIS)

    Rossow, C.-C.

    2003-01-01

    An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach, the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement that the velocity field be divergence-free, an elliptic equation is derived to solve for a pressure correction that enforces the divergence-free velocity field at the discrete level. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure-based formulation to the compressible, density-based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach shows improved robustness for high-lift flows close to separation.

  18. Probability-Based Determination Methods for Service Waiting in Service-Oriented Computing Environments

    Science.gov (United States)

    Zeng, Sen; Huang, Shuangxi; Liu, Yang

    Cooperative business process (CBP)-based service-oriented enterprise networks (SOEN) are emerging with the significant advances of enterprise integration and service-oriented architecture. Performance prediction and optimization for CBP-based SOEN is very complex. To meet these challenges, one of the key points is to try to reduce an abstract service's waiting number of physical services. This paper introduces a probability-based determination method (PBDM) for an abstract service's waiting number, Mi, and time span, τi, for its physical services. The determination of Mi and τi is according to the physical services' arrival rule and the distribution functions of their overall performance. In PBDM, the arrival probability of the physical services with the best overall performance value is a pre-defined reliability. PBDM makes thorough use of the information on the physical services' arrival rule and performance distribution functions, which improves the computational efficiency of scheme design and performance optimization of collaborative business processes in service-oriented computing environments.

  19. The Research of Computer Aided Farm Machinery Designing Method Based on Ergonomics

    Science.gov (United States)

    Gao, Xiyin; Li, Xinling; Song, Qiang; Zheng, Ying

    Along with the development of the agricultural economy, the types of farm machinery products increase gradually, and ergonomics issues are becoming more and more prominent. The widespread application of computer-aided machinery design makes farm machinery design intuitive, flexible and convenient. At present, because the available computer-aided ergonomics software lacks a human body database suited to farm machinery design in China, farm machinery designs show deviations in ergonomics analysis. This article proposes using the open database interface in CATIA to establish a human body database aimed at farm machinery design; by reading the human body data into the ergonomics module of CATIA, a practically applicable virtual body can be produced, and by using the human posture analysis and human activity analysis modules to analyze the ergonomics of farm machinery, a computer-aided farm machinery design method based on ergonomics can be realized.

  20. Computational and mathematical methods in brain atlasing.

    Science.gov (United States)

    Nowinski, Wieslaw L

    2017-12-01

    Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.

  1. Cone Beam X-ray Luminescence Computed Tomography Based on Bayesian Method.

    Science.gov (United States)

    Zhang, Guanglei; Liu, Fei; Liu, Jie; Luo, Jianwen; Xie, Yaoqin; Bai, Jing; Xing, Lei

    2017-01-01

    X-ray luminescence computed tomography (XLCT), which aims to achieve molecular and functional imaging by X-rays, has recently been proposed as a new imaging modality. Combining the principles of X-ray excitation of luminescence-based probes and optical signal detection, XLCT naturally fuses functional and anatomical images and provides complementary information for a wide range of applications in biomedical research. In order to improve the data acquisition efficiency of the previously developed narrow-beam XLCT, a cone beam XLCT (CB-XLCT) mode is adopted here to take advantage of the useful geometric features of cone beam excitation. Practically, a major hurdle in using cone beam X-rays for XLCT is that the inverse problem is seriously ill-conditioned, hindering the achievement of good image quality. In this paper, we propose a novel Bayesian method to tackle the bottleneck in CB-XLCT reconstruction. The method utilizes a local regularization strategy based on a Gaussian Markov random field to mitigate the ill-conditionedness of CB-XLCT. An alternating optimization scheme is then used to automatically calculate all the unknown hyperparameters, while an iterative coordinate descent algorithm is adopted to reconstruct the image with a voxel-based closed-form solution. Results of numerical simulations and mouse experiments show that the self-adaptive Bayesian method significantly improves the CB-XLCT image quality as compared with conventional methods.

  2. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, vector computation is 15 times faster than scalar computation. From a viewpoint of comparison between the OES and ISS methods, the following are found: 1) there is a small difference in computation speed, 2) the ISS method shows faster convergence and 3) the ISS method saves about 80% of the computer memory size compared with the OES method. It is therefore concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of outer iterations compared with free iteration. (author)

  3. A dental implant-based registration method for measuring mandibular kinematics using cone beam computed tomography-based fluoroscopy.

    Science.gov (United States)

    Lin, Cheng-Chung; Chen, Chien-Chih; Chen, Yunn-Jy; Lu, Tung-Wu; Hong, Shih-Wun

    2014-01-01

    This study aimed to develop and evaluate experimentally an implant-based registration method for measuring three-dimensional (3D) kinematics of the mandible and dental implants in the mandible based on dental cone beam computed tomography (CBCT), modified to include fluoroscopic function. The proposed implant-based registration method was based on the registration of CBCT data of implants/bones with single-plane fluoroscopy images. Seven registration conditions that included one to three implants were evaluated experimentally for their performance in a cadaveric porcine head model. The implant-based registration method was shown to have measurement errors (SD) of less than -0.2 (0.3) mm, 1.1 (2.2) mm, and 0.7 degrees (1.3 degrees) for the in-plane translation, out-of-plane translation, and all angular components, respectively, regardless of the number of implants used. The corresponding errors were reduced to less than -0.1 (0.1) mm, -0.3 (1.7) mm, and 0.5 degree (0.4 degree) when three implants were used. An implant-based registration method was developed to measure the 3D kinematics of the mandible/implants. With its high accuracy and reliability, the new method will be useful for measuring the 3D motion of the bones/implants for relevant applications.

  4. Comparison of Knowledge and Attitudes Using Computer-Based and Face-to-Face Personal Hygiene Training Methods in Food Processing Facilities

    Science.gov (United States)

    Fenton, Ginger D.; LaBorde, Luke F.; Radhakrishna, Rama B.; Brown, J. Lynne; Cutter, Catherine N.

    2006-01-01

    Computer-based training is increasingly favored by food companies for training workers due to convenience, self-pacing ability, and ease of use. The objectives of this study were to determine if personal hygiene training, offered through a computer-based method, is as effective as a face-to-face method in knowledge acquisition and improved…

  5. Tundish Cover Flux Thickness Measurement Method and Instrumentation Based on Computer Vision in Continuous Casting Tundish

    Directory of Open Access Journals (Sweden)

    Meng Lu

    2013-01-01

    Full Text Available The thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional measurement methods for TCF thickness are the single/double wire methods, which have several problems such as personal safety, being easily affected by operators, and poor repeatability. To solve all these problems, in this paper we specifically designed and built an instrumentation and present a novel method to measure the TCF thickness. The instrumentation is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including an image denoising method, a monocular range measurement method, the scale invariant feature transform (SIFT), and an image gray gradient detection method. Using the present instrumentation and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrumentation and method work well at steel plant sites, can accurately measure the thickness of TCF, and overcome the disadvantages of traditional measurement methods, or even replace the traditional ones.

  6. Computer Anti-forensics Methods and their Impact on Computer Forensic Investigation

    OpenAIRE

    Pajek, Przemyslaw; Pimenidis, Elias

    2009-01-01

    Electronic crime is very difficult to investigate and prosecute, mainly due to the fact that investigators have to build their cases based on artefacts left on computer systems. Nowadays, computer criminals are aware of computer forensics methods and techniques and try to use countermeasure techniques to efficiently impede the investigation processes. In many cases investigation with such countermeasure techniques in place appears to be too expensive, or too time consuming t...

  7. A Security Monitoring Method Based on Autonomic Computing for the Cloud Platform

    Directory of Open Access Journals (Sweden)

    Jingjie Zhang

    2018-01-01

    Full Text Available With the continuous development of cloud computing, cloud security has become one of the most important issues in cloud computing. For example, data stored in the cloud platform may be attacked, and its security is difficult to guarantee. Therefore, we must attach weight to the issue of how to protect the data stored in the cloud. To protect data, data monitoring is a necessary process. Based on autonomic computing, we develop a cloud data monitoring system on the cloud platform, monitoring whether the data are abnormal in each cycle and analyzing the security of the data according to the monitored results. In this paper, the feasibility of the scheme is verified through simulation. The results show that the proposed method can adapt to the dynamic change of the cloud platform load, and that it can accurately evaluate the degree of data abnormality. Meanwhile, by adjusting the monitoring frequency automatically, it improves the accuracy and timeliness of monitoring. Furthermore, it can reduce the monitoring cost of the system in the normal operation process.

  8. SU-F-I-43: A Software-Based Statistical Method to Compute Low Contrast Detectability in Computed Tomography Images

    Energy Technology Data Exchange (ETDEWEB)

    Chacko, M; Aldoohan, S [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States)

    2016-06-15

    Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantify LCD is achieved using vendor-specific methods and phantoms, typically by subjectively observing the smallest size object at a contrast level above phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance, but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student’s t-distribution to compute the LCD as the minimal Hounsfield Units that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and a tissue-mimicking phantom. The contributions of beam quality and scattered radiation upon the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and tissue/organ type strongly influenced the background noise characteristics and therefore, the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze any LCD scanner performance. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients’ clinical CT
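
    The statistical idea, sample many virtual objects of a given size from a uniform image and take the 95% t-quantile of the resulting background variability as the smallest distinguishable HU offset, can be sketched in a few lines. This is an illustrative re-implementation in Python/SciPy rather than the authors' MATLAB tool, and the exact sampling scheme and statistic here are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    def low_contrast_detectability(img, roi_px, n_samples=500, confidence=0.95, rng=None):
        """Rough sketch: estimate the smallest HU offset distinguishable from a uniform
        background at the given confidence, for a square 'virtual object' of side
        roi_px pixels. Formula is illustrative, not the abstract's exact statistic."""
        rng = np.random.default_rng(rng)
        h, w = img.shape
        means = np.empty(n_samples)
        for i in range(n_samples):
            r = rng.integers(0, h - roi_px)
            c = rng.integers(0, w - roi_px)
            means[i] = img[r:r + roi_px, c:c + roi_px].mean()

        s = means.std(ddof=1)                          # background variability of ROI means
        t_crit = stats.t.ppf(confidence, df=n_samples - 1)
        return t_crit * s                              # minimal detectable HU difference

    # Example with a synthetic uniform phantom image (mean 0 HU, sigma 10 HU):
    # phantom = np.random.default_rng(0).normal(0, 10, size=(512, 512))
    # print(low_contrast_detectability(phantom, roi_px=8))
    ```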

  9. Intravenous catheter training system: computer-based education versus traditional learning methods.

    Science.gov (United States)

    Engum, Scott A; Jeffries, Pamela; Fisher, Lisa

    2003-07-01

    Virtual reality simulators allow trainees to practice techniques without consequences, reduce potential risk associated with training, minimize animal use, and help to develop standards and optimize procedures. Current intravenous (IV) catheter placement training methods utilize plastic arms; however, the lack of variability can diminish the educational stimulus for the student. This study compares the effectiveness of an interactive, multimedia, virtual reality computer IV catheter simulator with a traditional laboratory experience of teaching IV venipuncture skills to both nursing and medical students. A randomized, pretest-posttest experimental design was employed. A total of 163 participants, 70 baccalaureate nursing students and 93 third-year medical students beginning their fundamental skills training, were recruited. The students ranged in age from 20 to 55 years (mean 25). Fifty-eight percent were female and 68% perceived themselves as having average computer skills (25% declaring excellence). The methods of IV catheter education compared included a traditional method of instruction involving a scripted self-study module with a 10-minute videotape, instructor demonstration, and hands-on experience using plastic mannequin arms. The second method involved an interactive, multimedia, commercially made computer catheter simulator program utilizing virtual reality (CathSim). The pretest scores were similar between the computer and the traditional laboratory group. There was a significant improvement in cognitive gains, student satisfaction, and documentation of the procedure with the traditional laboratory group compared with the computer catheter simulator group. Both groups were similar in their ability to demonstrate the skill correctly. CONCLUSIONS: This evaluation and assessment was an initial effort to assess new teaching methodologies related to intravenous catheter placement and their effects on student learning outcomes and behaviors.

  10. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
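
    One of the four compared methods, ray-driven forward projection with bilinear interpolation, is straightforward to sketch for the parallel-beam case. The code below is a generic illustration under simplifying assumptions (square image, evenly spaced detector bins, one ray per bin sampled at a fixed step), not the authors' implementation; as the abstract notes, accuracy improves as more ray samples are used per projection bin.

    ```python
    import numpy as np

    def forward_project(img, angles, n_bins=None, step=0.5):
        """Ray-driven parallel-beam forward projection using bilinear interpolation.
        Each ray is sampled at 'step'-pixel intervals along its length."""
        n = img.shape[0]                       # assume a square image
        n_bins = n_bins or n
        center = (n - 1) / 2.0
        dets = np.arange(n_bins) - (n_bins - 1) / 2.0
        ts = np.arange(-n / np.sqrt(2), n / np.sqrt(2), step)
        sino = np.zeros((len(angles), n_bins))

        for ia, a in enumerate(angles):
            ca, sa = np.cos(a), np.sin(a)
            for ib, d in enumerate(dets):
                # Sample points along the ray at signed detector offset d.
                x = center + d * ca - ts * sa
                y = center + d * sa + ts * ca
                ix, iy = np.floor(x).astype(int), np.floor(y).astype(int)
                valid = (ix >= 0) & (ix < n - 1) & (iy >= 0) & (iy < n - 1)
                ix, iy = ix[valid], iy[valid]
                fx, fy = (x - np.floor(x))[valid], (y - np.floor(y))[valid]
                # Bilinear interpolation of the image at each ray sample.
                vals = (img[iy, ix] * (1 - fx) * (1 - fy) + img[iy, ix + 1] * fx * (1 - fy)
                        + img[iy + 1, ix] * (1 - fx) * fy + img[iy + 1, ix + 1] * fx * fy)
                sino[ia, ib] = vals.sum() * step
        return sino

    # sino = forward_project(phantom, angles=np.linspace(0, np.pi, 180, endpoint=False))
    ```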

  11. Application of statistical method for FBR plant transient computation

    International Nuclear Information System (INIS)

    Kikuchi, Norihiro; Mochizuki, Hiroyasu

    2014-01-01

    Highlights: • A statistical method with a large trial number, up to 10,000, is applied to the plant system analysis. • A turbine trip test conducted at the “Monju” reactor is selected as the plant transient. • A method for reducing the number of trials is discussed. • The result with a reduced trial number can express the base regions of the computed distribution. -- Abstract: It is obvious that design tolerances, errors included in operation, and statistical errors in empirical correlations affect the transient behavior. The purpose of the present study is to apply the above-mentioned statistical errors to a plant system computation in order to evaluate the statistical distribution contained in the transient evolution. The selected computation case is the turbine trip test conducted at 40% electric power of the prototype fast reactor “Monju”. All of the heat transport systems of “Monju” are modeled with the NETFLOW++ system code, which has been validated using the plant transient tests of the experimental fast reactor Joyo and of “Monju”. The effects of parameters on the upper plenum temperature are confirmed by sensitivity analyses, and the dominant parameters are chosen. The statistical errors are applied to each computation deck by using pseudorandom numbers and the Monte Carlo method. The dSFMT (Double precision SIMD-oriented Fast Mersenne Twister), a further developed version of the Mersenne Twister (MT), is adopted as the pseudorandom number generator. In the present study, uniform random numbers are generated by dSFMT, and these random numbers are transformed to the normal distribution by the Box–Muller method. Ten thousand different computations are performed at once. In every computation case, the steady-state calculation is performed for 12,000 s, and the transient calculation is performed for 4000 s. For the purpose of the present statistical computation, it is important that the base regions of the distribution functions be calculated precisely. A large number of
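
    The sampling step described here (uniform pseudorandom numbers transformed to normal deviates by the Box–Muller method, then used to perturb model parameters over many independent trials) can be sketched compactly. The sketch uses NumPy's Mersenne Twister as a stand-in for dSFMT, and the "plant model" is a placeholder, not the NETFLOW++ code.

    ```python
    import numpy as np

    def box_muller(u1, u2):
        """Box-Muller transform: two independent U(0,1) samples -> two independent N(0,1) samples."""
        r = np.sqrt(-2.0 * np.log(u1))
        return r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)

    def run_trials(n_trials, nominal, sigmas, plant_model, seed=0):
        """Perturb each model parameter with its own standard deviation and collect
        the resulting transient outputs over many independent trials."""
        rng = np.random.RandomState(seed)           # Mersenne Twister (stand-in for dSFMT)
        outputs = np.empty(n_trials)
        for i in range(n_trials):
            u = rng.random_sample((2, len(nominal)))
            z, _ = box_muller(u[0], u[1])           # standard normal deviates
            params = nominal + sigmas * z           # statistically perturbed parameters
            outputs[i] = plant_model(params)        # e.g. peak upper-plenum temperature
        return outputs

    # Placeholder "plant": peak temperature as a simple function of two parameters.
    # temps = run_trials(10000, nominal=np.array([500.0, 1.0]),
    #                    sigmas=np.array([5.0, 0.02]),
    #                    plant_model=lambda p: p[0] * p[1])
    ```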

  12. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    Science.gov (United States)

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339
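
    The reliability figures above are intraclass correlation coefficients. As a hedged illustration, the sketch below computes one common form, ICC(2,1) (two-way random effects, absolute agreement, single measure, Shrout & Fleiss); the abstract does not state which ICC form was actually used, so this is an assumption, and the example data are made up.

    ```python
    import numpy as np

    def icc_2_1(ratings):
        """ICC(2,1) from an (n subjects x k readers) matrix of measurements."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        col_means = ratings.mean(axis=0)

        ss_rows = k * ((row_means - grand) ** 2).sum()      # between-subject variation
        ss_cols = n * ((col_means - grand) ** 2).sum()      # between-reader variation
        ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols

        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    # Example: 3 readers measuring alignment (degrees) on 5 limbs (hypothetical values).
    # print(icc_2_1([[178.2, 178.4, 178.1], [181.0, 180.7, 180.9],
    #                [176.5, 176.9, 176.4], [184.2, 184.0, 184.3],
    #                [179.8, 179.9, 179.6]]))
    ```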

  13. The method of a joint intraday security check system based on cloud computing

    Science.gov (United States)

    Dong, Wei; Feng, Changyou; Zhou, Caiqi; Cai, Zhi; Dan, Xu; Dai, Sai; Zhang, Chuancheng

    2017-01-01

    The intraday security check is the core application in the dispatching control system. The existing security check calculation uses only the dispatch center’s local model and data as the functional margin. This paper introduces the design of an all-grid intraday joint security check system based on cloud computing and its implementation. To reduce the effect of subarea bad data on the all-grid security check, a new power flow algorithm based on comparison and adjustment with the inter-provincial tie-line plan is presented. A numerical example illustrates the effectiveness and feasibility of the proposed method.

  14. Spintronics-based computing

    CERN Document Server

    Prenat, Guillaume

    2015-01-01

    This book provides a comprehensive introduction to spintronics-based computing for the next generation of ultra-low power/highly reliable logic, which is widely considered a promising candidate to replace conventional, pure CMOS-based logic. It covers aspects from the device to the system level, including magnetic memory cells, device modeling, hybrid circuit structures, design methodology, CAD tools, and technological integration methods. This book is accessible to a variety of readers, and little or no background in magnetism and spin electronics is required to understand its content. The multidisciplinary team of expert authors from circuits, devices, computer architecture, CAD and system design reveal to readers the potential of spintronics nanodevices to reduce power consumption, improve reliability and enable new functionality.

  15. A fast computation method for MUSIC spectrum function based on circular arrays

    Science.gov (United States)

    Du, Zhengdong; Wei, Ping

    2015-02-01

    The large computation amount of multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction finding system using MUSIC algorithm, especially in the two-dimensional directions of arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for MUSIC spectrum. It is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array, in the process of calculating MUSIC spectrum, for the cyclic characteristics of steering vector, the inner product in the calculation of spatial spectrum is realised by cyclic convolution. The computational amount of MUSIC spectrum is obviously less than that of the conventional method. It is a very practical way for MUSIC spectrum computation in circular arrays.
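
    For reference, the conventional MUSIC spectrum that the paper accelerates can be written in a few lines: eigendecompose the sample covariance, keep the noise subspace, and evaluate 1/||E_n^H a(θ)||² over an azimuth grid. The sketch below is this baseline for a uniform circular array (not the paper's cyclic-convolution speedup); the array geometry and signal model are simplifying assumptions.

    ```python
    import numpy as np

    def uca_steering(theta, m, radius_wl):
        """Steering vector of an M-element uniform circular array (radius in wavelengths)
        for azimuth theta, with elevation fixed at 0 for simplicity."""
        phi = 2 * np.pi * np.arange(m) / m                    # element positions on the circle
        return np.exp(1j * 2 * np.pi * radius_wl * np.cos(theta - phi))

    def music_spectrum(snapshots, n_sources, radius_wl, grid):
        """Conventional MUSIC spectrum over an azimuth grid from (M x T) array snapshots."""
        m = snapshots.shape[0]
        r = snapshots @ snapshots.conj().T / snapshots.shape[1]
        _, vecs = np.linalg.eigh(r)                           # eigenvalues in ascending order
        e_noise = vecs[:, : m - n_sources]                    # noise subspace
        p = np.empty(len(grid))
        for i, th in enumerate(grid):
            a = uca_steering(th, m, radius_wl)
            proj = e_noise.conj().T @ a
            p[i] = 1.0 / np.real(proj.conj() @ proj)
        return p

    # grid = np.linspace(0, 2 * np.pi, 720, endpoint=False)
    # spectrum = music_spectrum(array_snapshots, n_sources=2, radius_wl=0.5, grid=grid)
    ```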

  16. An introduction to statistical computing a simulation-based approach

    CERN Document Server

    Voss, Jochen

    2014-01-01

    A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems.  Sampling-based simulation techniques are now an invaluable tool for exploring statistical models.  This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods.  It also includes some advanced met

  17. Developing a multimodal biometric authentication system using soft computing methods.

    Science.gov (United States)

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometric offers several advantages, mainly in embedded system applications. Hard and soft multi-biometric, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize the applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.

  18. Moment-based method for computing the two-dimensional discrete Hartley transform

    Science.gov (United States)

    Dong, Zhifang; Wu, Jiasong; Shu, Huazhong

    2009-10-01

    In this paper, we present a fast algorithm for computing the two-dimensional (2-D) discrete Hartley transform (DHT). By using kernel transform and Taylor expansion, the 2-D DHT is approximated by a linear sum of 2-D geometric moments. This enables us to use the fast algorithms developed for computing the 2-D moments to efficiently calculate the 2-D DHT. The proposed method achieves a simple computational structure and is suitable to deal with any sequence lengths.
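
    For context, the quantity being approximated is the 2-D DHT with the cas kernel, cas(t) = cos(t) + sin(t). The direct O(M²N²) reference implementation below is useful for checking any fast (e.g. moment-based) algorithm; it is a generic illustration, not the paper's method.

    ```python
    import numpy as np

    def dht2(f):
        """Direct 2-D discrete Hartley transform:
        H(u, v) = sum_x sum_y f(x, y) * cas(2*pi*(u*x/M + v*y/N))."""
        f = np.asarray(f, dtype=float)
        m, n = f.shape
        x = np.arange(m)
        y = np.arange(n)
        out = np.empty_like(f)
        for u in range(m):
            for v in range(n):
                arg = 2 * np.pi * (u * x[:, None] / m + v * y[None, :] / n)
                out[u, v] = (f * (np.cos(arg) + np.sin(arg))).sum()
        return out

    # H = dht2(np.random.rand(8, 8))
    ```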

  19. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
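
    The building block here is the LMS adaptive filter, whose weight vector is nudged along the instantaneous error gradient at each sample. The sketch below shows the generic LMS update only; the paper's block- and stream-based discrete LCT structures are not reproduced, and the system-identification example is hypothetical.

    ```python
    import numpy as np

    def lms_filter(x, d, n_taps, mu):
        """Standard least-mean-square adaptive filter: w <- w + mu * e * x."""
        w = np.zeros(n_taps)
        y = np.zeros(len(x))
        e = np.zeros(len(x))
        for k in range(n_taps - 1, len(x)):
            xk = x[k - n_taps + 1:k + 1][::-1]   # most recent n_taps input samples
            y[k] = w @ xk                        # filter output
            e[k] = d[k] - y[k]                   # error against the desired signal
            w = w + mu * e[k] * xk               # LMS weight update
        return y, e, w

    # Example: identify an unknown 4-tap FIR system from its input/output signals.
    # rng = np.random.default_rng(1)
    # x = rng.standard_normal(5000)
    # d = np.convolve(x, [0.4, -0.2, 0.1, 0.05], mode="full")[:len(x)]
    # y, e, w = lms_filter(x, d, n_taps=4, mu=0.01)
    ```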

  20. A computational method for sharp interface advection

    DEFF Research Database (Denmark)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volu...

  1. Computational Methods for Modeling Aptamers and Designing Riboswitches

    Directory of Open Access Journals (Sweden)

    Sha Gong

    2017-11-01

    Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches such as RNAComposer and Rosetta have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on known aptamers, the web server Riboswitch Calculator and other theoretical methods provide new tools for designing synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.

  2. Computationally efficient dynamic modeling of robot manipulators with multiple flexible-links using acceleration-based discrete time transfer matrix method

    DEFF Research Database (Denmark)

    Zhang, Xuping; Sørensen, Rasmus; RahbekIversen, Mathias

    2018-01-01

    This paper presents a novel and computationally efficient modeling method for the dynamics of flexible-link robot manipulators. In this method, a robot manipulator is decomposed into components/elements. The component/element dynamics is established using Newton–Euler equations and is then linearized based on the acceleration-based state vector. The transfer matrices for each type of component/element are developed and used to establish the system equations of a flexible robot manipulator by concatenating the state vector from the base to the end-effector. With this strategy, the size of the system equations is kept small regardless of the number of links of the manipulator, and the method only involves calculating and transferring component/element dynamic equations that have small size. Numerical simulations and experimental testing of flexible-link manipulators are conducted to validate the proposed methodologies.
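
    The concatenation step can be illustrated generically: the state vector at the end-effector is obtained by multiplying the per-element transfer matrices in order from the base, so the assembled matrix stays the size of a single element no matter how many links there are. The matrices in the sketch are placeholders, not the paper's acceleration-based Newton–Euler element matrices.

    ```python
    import numpy as np
    from functools import reduce

    def overall_transfer_matrix(element_matrices):
        """Chain per-element transfer matrices: state_end = U_n ... U_2 U_1 @ state_base."""
        return reduce(lambda acc, u: u @ acc, element_matrices)

    # Placeholder 4x4 element matrices for a 3-element chain:
    # elems = [np.eye(4) + 0.01 * np.random.default_rng(i).standard_normal((4, 4))
    #          for i in range(3)]
    # U = overall_transfer_matrix(elems)
    # state_end = U @ np.array([0.0, 0.0, 1.0, 0.0])   # propagate a base state vector
    ```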

  3. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
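
    One simple estimator in this family starts a fixed number of random walks from every page, follows a random out-link with probability c and stops otherwise, and estimates the PageRank of a page from how often walks terminate there. The sketch below is that generic end-point estimator, not necessarily the exact variant analyzed in the paper; graph and parameters are illustrative.

    ```python
    import random
    from collections import Counter

    def monte_carlo_pagerank(out_links, runs_per_page=100, c=0.85, seed=0):
        """Estimate PageRank from the termination frequencies of short random walks."""
        rng = random.Random(seed)
        pages = list(out_links)
        end_counts = Counter()
        total = 0
        for page in pages:
            for _ in range(runs_per_page):
                node = page
                # Continue the walk with probability c while out-links exist.
                while rng.random() < c and out_links[node]:
                    node = rng.choice(out_links[node])
                end_counts[node] += 1
                total += 1
        return {p: end_counts[p] / total for p in pages}

    # Tiny example graph (adjacency lists of outgoing hyperlinks):
    # graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    # print(monte_carlo_pagerank(graph))
    ```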

  4. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  5. Computational Methods to Assess the Production Potential of Bio-Based Chemicals.

    Science.gov (United States)

    Campodonico, Miguel A; Sukumara, Sumesh; Feist, Adam M; Herrgård, Markus J

    2018-01-01

    Elevated costs and long implementation times of bio-based processes for producing chemicals represent a bottleneck for moving to a bio-based economy. A prospective analysis able to elucidate economically and technically feasible product targets at early research phases is mandatory. Computational tools can be implemented to explore the biological and technical spectrum of feasibility, while constraining the operational space for desired chemicals. In this chapter, two different computational tools for assessing potential for bio-based production of chemicals from different perspectives are described in detail. The first tool is GEM-Path: an algorithm to compute all structurally possible pathways from one target molecule to the host metabolome. The second tool is a framework for Modeling Sustainable Industrial Chemicals production (MuSIC), which integrates modeling approaches for cellular metabolism, bioreactor design, upstream/downstream processes, and economic impact assessment. Integrating GEM-Path and MuSIC will play a vital role in supporting early phases of research efforts and guide the policy makers with decisions, as we progress toward planning a sustainable chemical industry.

  6. Computation of saddle-type slow manifolds using iterative methods

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall

    2015-01-01

    This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence with respect to the small timescale-separation parameter, appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the FitzHugh–Nagumo system.

  7. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    Science.gov (United States)

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Many computer-based image analysis methods of chest computed tomography (CT) used in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary functions, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologist for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.

  8. Proceedings of computational methods in materials science

    International Nuclear Information System (INIS)

    Mark, J.E.; Glicksman, M.E.; Marsh, S.P.

    1992-01-01

    The Symposium on which this volume is based was conceived as a timely expression of some of the fast-paced developments occurring throughout materials science and engineering. It focuses particularly on those involving modern computational methods applied to model and predict the response of materials under a diverse range of physico-chemical conditions. The current easy access of many materials scientists in industry, government laboratories, and academe to high-performance computers has opened many new vistas for predicting the behavior of complex materials under realistic conditions. Some have even argued that modern computational methods in materials science and engineering are literally redefining the bounds of our knowledge from which we predict structure-property relationships, perhaps forever changing the historically descriptive character of the science and much of the engineering

  9. Comparison of meaningful learning characteristics in simulated nursing practice after traditional versus computer-based simulation method: a qualitative videography study.

    Science.gov (United States)

    Poikela, Paula; Ruokamo, Heli; Teräs, Marianne

    2015-02-01

    Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how two different teaching methods presented students' meaningful learning in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in the simulated nursing practice in two universities of applied sciences in Northern Finland. The participants in parts I and II were 40 first-year nursing students; 12 student volunteers continued to part III. A qualitative analysis method was used. The data were collected using video recordings and analyzed by videography. The students who used a computer-based simulation program were more likely to report meaningful learning themes than those who were first exposed to the lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students are able to receive the greatest educational benefits. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    The importance of the design of human–machine interfaces (HMIs) for human performance and safety has long been recognized in the process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information-searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information-searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and the large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater due to the reduced distinctiveness of each element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group similar elements based on attributes such as shape, color, or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements making up the computer-based display

  11. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    International Nuclear Information System (INIS)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun; Park, Jin Kyun

    2012-01-01

    The importance of the design of human–machine interfaces (HMIs) for human performance and safety has long been recognized in the process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information-searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information-searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and the large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater due to the reduced distinctiveness of each element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group similar elements based on attributes such as shape, color, or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements making up the computer-based display

  12. Computer-based learning for the enhancement of breastfeeding ...

    African Journals Online (AJOL)

    In this study, computer-based learning (CBL) was explored in the context of breastfeeding training for undergraduate Dietetic students. Aim: To adapt and validate an Indian computer-based undergraduate breastfeeding training module for use by South African undergraduate Dietetic students. Methods and materials: The ...

  13. Computer science handbook. Vol. 13.3. Environmental computer science. Computer science methods for environmental protection and environmental research

    International Nuclear Information System (INIS)

    Page, B.; Hilty, L.M.

    1994-01-01

    Environmental computer science is a new partial discipline of applied computer science, which makes use of methods and techniques of information processing in environmental protection. Thanks to the inter-disciplinary nature of environmental problems, computer science acts as a mediator between numerous disciplines and institutions in this sector. The handbook reflects the broad spectrum of state-of-the-art environmental computer science. The following important subjects are dealt with: environmental databases and information systems, environmental monitoring, modelling and simulation, visualization of environmental data, and knowledge-based systems in the environmental sector. (orig.) [de]

  14. SU-E-T-628: A Cloud Computing Based Multi-Objective Optimization Method for Inverse Treatment Planning.

    Science.gov (United States)

    Na, Y; Suh, T; Xing, L

    2012-06-01

    Multi-objective (MO) plan optimization entails generation of an enormous number of IMRT or VMAT plans constituting the Pareto surface, which presents a computationally challenging task. The purpose of this work is to overcome this hurdle by developing an efficient MO method using an emerging cloud computing platform. As the backbone of cloud computing for optimizing inverse treatment planning, Amazon Elastic Compute Cloud with a master node (17.1 GB memory, 2 virtual cores, 420 GB instance storage, 64-bit platform) is used. The master node is able to scale seamlessly a number of working group instances, called workers, based on user-defined settings that account for the MO functions in the clinical setting. Each worker solves the objective function with an efficient sparse decomposition method. Workers are automatically terminated when their tasks are finished. The optimized plans are archived to the master node to generate the Pareto solution set. Three clinical cases have been planned using the developed MO IMRT and VMAT planning tools to demonstrate the advantages of the proposed method. The target dose coverage and critical structure sparing of the plans obtained using the cloud computing platform are identical to those obtained using a desktop PC (Intel Xeon® CPU 2.33 GHz, 8 GB memory). It is found that MO planning substantially speeds up the process of obtaining the Pareto set for both types of plans. The speedup scales approximately linearly with the number of nodes used for computing. With the use of N nodes, the computational time is reduced according to the fitted model 0.2 + 2.3/N, with r² > 0.99 on average over the cases, making real-time MO planning possible. A cloud computing infrastructure is developed for MO optimization. The algorithm substantially improves the speed of inverse plan optimization. The platform is valuable for both MO planning and future off- or on-line adaptive re-planning. © 2012 American Association of Physicists in Medicine.

  15. Optimal design methods for a digital human-computer interface based on human reliability in a nuclear power plant

    International Nuclear Information System (INIS)

    Jiang, Jianjun; Zhang, Li; Xie, Tian; Wu, Daqing; Li, Min; Wang, Yiqun; Peng, Yuyuan; Peng, Jie; Zhang, Mengjia; Li, Peiyao; Ma, Congmin; Wu, Xing

    2017-01-01

    Highlights: • A complete optimization process is established for digital human-computer interfaces of NPPs. • A quick convergence search method is proposed. • The authors propose an affinity error probability mapping function to test human reliability. - Abstract: This is the second in a series of papers describing optimal design methods for a digital human-computer interface of a nuclear power plant (NPP) from three different perspectives based on human reliability. The purpose of this series is to explore different optimization methods from varying perspectives. The present paper mainly discusses the optimal design method for the quantity of components of the same factor. During monitoring, the quantity of components places a heavy burden on operators, and human errors are therefore easily triggered. To solve this problem, the authors propose an optimization process, a quick convergence search method, and an affinity error probability mapping function. Two balanceable parameter values of the affinity error probability function are obtained by experiments. The experimental results show that the affinity error probability mapping function for the human-computer interface has very good sensitivity and stability, and that the quick convergence search method for fuzzy segments divided by component quantity performs better than a general algorithm.

  16. Computing Cost Price for Cataract Surgery by Activity Based Costing (ABC) Method at Hazrat-E-Zahra Hospital, Isfahan University of Medical Sciences, 2014

    Directory of Open Access Journals (Sweden)

    Masuod Ferdosi

    2016-10-01

    Full Text Available Background: Hospital managers need accurate information about actual costs to make efficient and effective decisions. In the activity-based costing method, activities are first identified, and then direct and indirect costs are computed based on allocation methods. The aim of this study was to compute the cost price of cataract surgery by the activity-based costing (ABC) method at Hazrat-e-Zahra Hospital, Isfahan University of Medical Sciences. Methods: This was a cross-sectional study computing the costs of cataract surgery by the activity-based costing technique at Hazrat-e-Zahra Hospital, Isfahan University of Medical Sciences, 2014. Data were collected through interviews and direct observation and analyzed with Excel software. Results: According to the results of this study, the total cost of cataract surgery was 8,368,978 Rials. Personnel costs accounted for 62.2% (5,213,574 Rials) of the total cost of cataract surgery, which is the highest share of the surgery costs. The cost of consumables was 7.57% (1,992,852 Rials) of the surgery costs. Conclusion: Based on the results, there was a difference between the cost price of the services and the public tariff, which poses financial risks to the hospital. Therefore, it is recommended that costs be computed with appropriate methods based on activity-based costing. The cost price of cataract surgery can be reduced by strategies such as decreasing the cost of consumables.
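
    The ABC step described, trace direct costs to the procedure and allocate indirect cost pools through activity drivers, can be illustrated with a toy calculation. All figures, pool names, and drivers below are hypothetical and are not the study's data.

    ```python
    # Toy activity-based costing sketch (all figures hypothetical, in Rials):
    direct_costs = {
        "surgeon and personnel time": 5_200_000,
        "consumables": 2_000_000,
    }

    # Indirect cost pools allocated through activity drivers:
    # pool -> (total pool cost, total driver volume, driver volume used by one surgery)
    indirect_pools = {
        "sterilisation": (40_000_000, 800, 1),                # driver: sterilisation cycles
        "operating-room overhead": (600_000_000, 1_500, 1),   # driver: OR hours
    }

    allocated = {
        pool: total * used / volume
        for pool, (total, volume, used) in indirect_pools.items()
    }

    cost_price = sum(direct_costs.values()) + sum(allocated.values())
    print(f"Allocated indirect costs: {allocated}")
    print(f"Cost price per cataract surgery: {cost_price:,.0f} Rials")
    ```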

  17. Computational methods for three-dimensional microscopy reconstruction

    CERN Document Server

    Frank, Joachim

    2014-01-01

    Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology.   Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.

  18. Music Learning Based on Computer Software

    OpenAIRE

    Baihui Yan; Qiao Zhou

    2017-01-01

    In order to better develop and improve students’ music learning, the authors proposed the method of music learning based on computer software. It is still a new field to use computer music software to assist teaching. Hereby, we conducted an in-depth analysis on the computer-enabled music learning and the music learning status in secondary schools, obtaining the specific analytical data. Survey data shows that students have many cognitive problems in the current music classroom, and yet teach...

  19. Computational aeroelasticity using a pressure-based solver

    Science.gov (United States)

    Kamakoti, Ramji

    A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well validated k-ε turbulence model with wall function treatment for near wall region was used to perform turbulent flow calculations. Relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure Implicit Splitting of Operators (PISO) algorithm was found to be computationally economic for unsteady flow computations. Wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for structure. Bilinear interpolation and linear extrapolation techniques were used to transfer necessary information between fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved. This is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement of experiment and previous numerical results. The computational methodology exhibited capabilities to predict both qualitative and quantitative features of aeroelasticity.

  20. Testing and Validation of Computational Methods for Mass Spectrometry.

    Science.gov (United States)

    Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas

    2016-03-04

    High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.

  1. A method of paralleling computer calculation for two-dimensional kinetic plasma model

    International Nuclear Information System (INIS)

    Brazhnik, V.A.; Demchenko, V.V.; Dem'yanov, V.G.; D'yakov, V.E.; Ol'shanskij, V.V.; Panchenko, V.I.

    1987-01-01

    A method for parallel computer calculation, and the OSIRIS program complex that implements it for numerical plasma simulation by the macroparticle method, are described. The calculation can be carried out either on one computer or simultaneously on two BESM-6 computers; this capability is provided by a package of interacting programs running on each computer. Program interaction on each computer is based on event techniques implemented in OS DISPAK. Parallel calculation on two BESM-6 computers allows the computation to be accelerated by a factor of 1.5.

  2. A method of detection to the grinding wheel layer thickness based on computer vision

    Science.gov (United States)

    Ji, Yuchen; Fu, Luhua; Yang, Dujuan; Wang, Lei; Liu, Changjie; Wang, Zhong

    2018-01-01

    This paper proposes a method for detecting the grinding wheel layer thickness based on computer vision. A camera is used to capture images of the grinding wheel layer around the whole circumference. Forward lighting and back lighting are used to enable a clear image to be acquired. Image processing is then executed on the captured images, consisting of image preprocessing, binarization, and subpixel subdivision. The aim of binarization is to help locate a chord and the corresponding ring width. After subpixel subdivision, the thickness of the grinding layer can finally be calculated. Compared with the methods usually used to detect grinding wheel wear, the method in this paper obtains the thickness information directly and quickly. The eccentricity error and the error of the pixel equivalent are also discussed in this paper.

  3. Fully consistent CFD methods for incompressible flow computations

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.

    2014-01-01

    Nowadays collocated grid based CFD methods are one of the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods they require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure...

  4. Music Learning Based on Computer Software

    Directory of Open Access Journals (Sweden)

    Baihui Yan

    2017-12-01

    Full Text Available In order to better develop and improve students’ music learning, the authors proposed the method of music learning based on computer software. It is still a new field to use computer music software to assist teaching. Hereby, we conducted an in-depth analysis on the computer-enabled music learning and the music learning status in secondary schools, obtaining the specific analytical data. Survey data shows that students have many cognitive problems in the current music classroom, and yet teachers have not found a reasonable countermeasure to them. Against this background, the introduction of computer music software to music learning is a new trial that can not only cultivate the students’ initiatives of music learning, but also enhance their abilities to learn music. Therefore, it is concluded that the computer software based music learning is of great significance to improving the current music learning modes and means.

  5. Methods of physical experiment and installation automation on the base of computers

    International Nuclear Information System (INIS)

    Stupin, Yu.V.

    1983-01-01

    Peculiarities of using computers for the automation of physical experiments and installations are considered. Systems for data acquisition and processing based on microprocessors, micro- and mini-computers, CAMAC equipment, and real-time operating systems are described, as well as systems intended for the automation of physical experiments on accelerators, laser thermonuclear fusion installations, and plasma research installations. The problems of multimachine complexes and multi-user systems, the development of automated systems for collective use, the arrangement of intermachine data exchange, and the management of experimental databases are discussed. Data on software systems used for complex experimental data processing are presented. It is concluded that the application of new computers, combined with the new possibilities that universal operating systems offer users, substantially increases the efficiency of a scientist's work.

  6. Reciprocal Questioning and Computer-based Instruction in Introductory Auditing: Student Perceptions.

    Science.gov (United States)

    Watters, Mike

    2000-01-01

    An auditing course used reciprocal questioning (Socratic method) and computer-based instruction. Separate evaluations by 67 students revealed a strong aversion to the Socratic method; students expected professors to lecture. They showed a strong preference for the computer-based assignment. (SK)

  7. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  8. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  9. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.

  10. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Science.gov (United States)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-12-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.
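
    Two of the named steps, ANOVA-based feature reduction and fuzzy c-means clustering into two classes, can be sketched briefly. The code below is an illustrative re-implementation using scikit-learn's f_classif and a compact c-means loop; the feature data, labels, and all parameter choices are synthetic placeholders, not the study's plaque data.

    ```python
    import numpy as np
    from sklearn.feature_selection import f_classif

    def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
        """Compact fuzzy c-means: alternate membership and centre updates."""
        rng = np.random.default_rng(seed)
        u = rng.random((len(x), n_clusters))
        u /= u.sum(axis=1, keepdims=True)
        for _ in range(n_iter):
            um = u ** m
            centers = (um.T @ x) / um.sum(axis=0)[:, None]
            d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            u = 1.0 / (d ** (2 / (m - 1)))
            u /= u.sum(axis=1, keepdims=True)
        return u, centers

    # Synthetic stand-in for plaque feature vectors and symptomatic/asymptomatic labels:
    # rng = np.random.default_rng(0)
    # features = rng.normal(size=(60, 20)); labels = rng.integers(0, 2, 60)
    # f_scores, _ = f_classif(features, labels)           # ANOVA-based feature ranking
    # top = features[:, np.argsort(f_scores)[-5:]]        # keep the most discriminative features
    # memberships, centers = fuzzy_c_means(top)           # cluster into two classes
    ```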

  11. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    International Nuclear Information System (INIS)

    Stoitsis, John; Valavanis, Ioannis; Mougiakakou, Stavroula G.; Golemati, Spyretta; Nikita, Alexandra; Nikita, Konstantina S.

    2006-01-01

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis

  12. Computer aided diagnosis based on medical image processing and artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Stoitsis, John [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)]. E-mail: stoitsis@biosim.ntua.gr; Valavanis, Ioannis [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Mougiakakou, Stavroula G. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Golemati, Spyretta [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece); Nikita, Alexandra [University of Athens, Medical School 152 28 Athens (Greece); Nikita, Konstantina S. [National Technical University of Athens, School of Electrical and Computer Engineering, Athens 157 71 (Greece)

    2006-12-20

    Advances in imaging technology and computer science have greatly enhanced interpretation of medical images, and contributed to early diagnosis. The typical architecture of a Computer Aided Diagnosis (CAD) system includes image pre-processing, definition of region(s) of interest, features extraction and selection, and classification. In this paper, the principles of CAD systems design and development are demonstrated by means of two examples. The first one focuses on the differentiation between symptomatic and asymptomatic carotid atheromatous plaques. For each plaque, a vector of texture and motion features was estimated, which was then reduced to the most robust ones by means of ANalysis of VAriance (ANOVA). Using fuzzy c-means, the features were then clustered into two classes. Clustering performances of 74%, 79%, and 84% were achieved for texture only, motion only, and combinations of texture and motion features, respectively. The second CAD system presented in this paper supports the diagnosis of focal liver lesions and is able to characterize liver tissue from Computed Tomography (CT) images as normal, hepatic cyst, hemangioma, and hepatocellular carcinoma. Five texture feature sets were extracted for each lesion, while a genetic algorithm based feature selection method was applied to identify the most robust features. The selected feature set was fed into an ensemble of neural network classifiers. The achieved classification performance was 100%, 93.75% and 90.63% in the training, validation and testing set, respectively. It is concluded that computerized analysis of medical images in combination with artificial intelligence can be used in clinical practice and may contribute to more efficient diagnosis.

  13. The asymptotic expansion method via symbolic computation

    OpenAIRE

    Navarro, Juan F.

    2012-01-01

    This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.
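
    As a toy illustration of a symbolic perturbation expansion (an algebraic equation rather than the second-order differential equations and quasipolynomials treated in the paper), the following SymPy sketch expands a root of x**2 + eps*x - 1 = 0 in powers of eps.

        import sympy as sp

        eps = sp.symbols('eps', positive=True)
        x0, x1, x2 = sp.symbols('x0 x1 x2')
        x = x0 + eps*x1 + eps**2*x2                              # truncated asymptotic ansatz
        expr = sp.expand(x**2 + eps*x - 1)
        eqs = [sp.Eq(expr.coeff(eps, k), 0) for k in range(3)]   # match orders eps^0 .. eps^2
        print(sp.solve(eqs, [x0, x1, x2], dict=True))
        # two branches: x0 = +-1, x1 = -1/2, x2 = +-1/8, consistent with the exact roots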

  14. Computational statics and dynamics an introduction based on the finite element method

    CERN Document Server

    Öchsner, Andreas

    2016-01-01

    This book introduces readers to modern computational mechanics based on the finite element method. It helps students succeed in mechanics courses by showing them how to apply the fundamental knowledge they gained in the first years of their engineering education to more advanced topics. In order to deepen readers’ understanding of the derived equations and theories, each chapter also includes supplementary problems. These problems start with fundamental knowledge questions on the theory presented in the chapter, followed by calculation problems. In total over 80 such calculation problems are provided, along with brief solutions for each. This book is especially designed to meet the needs of Australian students, reviewing the mathematics covered in their first two years at university. The 13-week course comprises three hours of lectures and two hours of tutorials per week.

  15. HuRECA: Human Reliability Evaluator for Computer-based Control Room Actions

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Lee, Seung Jun; Jang, Seung Cheol

    2011-01-01

    As computer-based design features such as computer-based procedures (CBP), soft controls (SCs), and integrated information systems are being adopted in main control rooms (MCR) of nuclear power plants, a human reliability analysis (HRA) method capable of dealing with the effects of these design features on human reliability is needed. From the observations of human factors engineering verification and validation experiments, we have identified some important characteristics of operator behaviors and design-related influencing factors (DIFs) from the perspective of human reliability. Firstly, there are new DIFs that should be considered in developing an HRA method for computer-based control rooms, especially CBP and SCs. In the case of the computer-based procedure rather than the paper-based procedure, the structural and managerial elements should be considered as important PSFs in addition to the procedural contents. In the case of the soft controllers, the so-called interface management tasks (or secondary tasks) should be reflected in the assessment of human error probability. Secondly, computer-based control rooms can provide more effective error recovery features than conventional control rooms. Major error recovery features for computer-based control rooms include the automatic logic checking function of the computer-based procedure and the information sharing feature of the general computer-based designs.

  16. Analysis of the computational methods on the equipment shock response based on ANSYS environments

    International Nuclear Information System (INIS)

    Wang Yu; Li Zhaojun

    2005-01-01

    With the development of equipment shock and vibration theory, mathematical calculation methods, simulation techniques and other aspects, equipment shock calculation methods are gradually evolving from static to dynamic analysis and from linear to non-linear analysis. The equipment shock calculation methods applied worldwide in engineering practice mostly include the equivalent static force method, the Dynamic Design Analysis Method (DDAM) and the real-time simulation method. DDAM is a method based on modal analysis theory, which takes the shock design spectrum as the shock load and obtains the shock response of the integrated system by applying a cross-modal combination method in the frequency domain. The real-time simulation method carries out the computational analysis of the equipment shock response in the time domain: time-history curves obtained from real-time measurement or spectrum transformation are used as the equipment shock load, and a differential equation of the system motion is solved iteratively by a time-domain computational procedure. Conclusions: Using DDAM and the real-time simulation method separately, this paper carried out the shock analysis of a three-dimensional framed floating raft in the ANSYS environment, analyzed the results, and drew the following conclusions. Because DDAM does not account for damping, non-linear effects and phase differences between modal responses, its result is much larger than that of the real-time simulation method. The coupling response is more complex when the modal result of a three-dimensional structure is calculated, and the coupling response in the non-shock direction is also much larger with DDAM than with the real-time simulation method. Both DDAM and the real-time simulation method have their own strengths and scopes of application. Designers should select the method that is economical and appropriate according to the features and anti-shock requirements of the equipment.
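
    For readers unfamiliar with spectrum-based shock methods, the sketch below shows an NRL-type modal summation commonly used with DDAM, in which the largest modal response is taken in full and the remaining modes are combined in a root-sum-square sense; the modal values are made up, and the abstract does not state that exactly this combination rule was used.

        import numpy as np

        def nrl_sum(modal_responses):
            # Largest modal response in full plus root-sum-square of the remaining ones.
            r = np.abs(np.asarray(modal_responses, dtype=float))
            i_max = int(np.argmax(r))
            rest = np.delete(r, i_max)
            return r[i_max] + np.sqrt(np.sum(rest ** 2))

        # Hypothetical peak responses of the first four modes of the floating raft (mm)
        print(nrl_sum([12.0, 5.0, 3.5, 1.0]))   # 12 + sqrt(38.25) ~= 18.2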

  17. Computational analysis of water entry of a circular section at constant velocity based on Reynold's averaged Navier-Stokes method

    Science.gov (United States)

    Uddin, M. Maruf; Fuad, Muzaddid-E.-Zaman; Rahaman, Md. Mashiur; Islam, M. Rabiul

    2017-12-01

    With the rapid decrease in the cost of computational infrastructure and more efficient algorithms for solving non-linear problems, Reynolds-averaged Navier-Stokes (RaNS) based Computational Fluid Dynamics (CFD) is nowadays widely used. As a preliminary evaluation tool, CFD is used to calculate the hydrodynamic loads on offshore installations, ships, and other structures in the ocean at initial design stages. Traditionally, wedges have been studied more than circular cylinders because a cylinder section has zero deadrise angle at the instant of water impact, which then increases with submergence. In the present study, the RaNS-based commercial code ANSYS Fluent is used to simulate the water entry of a circular section at constant velocity. The present computational results are compared with experimental data and other numerical methods.

  18. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts
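
    A highly simplified sketch of the replacement idea: fit a smooth MRI-intensity-to-HU mapping on artifact-free data and apply it inside the artifact mask. The polynomial fit and the synthetic arrays below are stand-ins for the paper's registration-based analysis, not the authors' procedure.

        import numpy as np

        def fit_mri_to_hu(mri_clean, ct_clean, order=3):
            # Fit a smooth mapping MRI intensity -> HU from co-registered artifact-free data.
            return np.polyfit(mri_clean.ravel(), ct_clean.ravel(), order)

        def correct_slice(ct_corrupt, mri_corrupt, artifact_mask, coeffs):
            # Replace HU values inside the artifact mask with the mapped MRI prediction.
            corrected = ct_corrupt.copy()
            corrected[artifact_mask] = np.polyval(coeffs, mri_corrupt[artifact_mask])
            return corrected

        rng = np.random.default_rng(0)
        mri = rng.uniform(0.0, 1.0, (64, 64))                                # hypothetical MRI slice
        ct = 1400.0 * mri - 200.0 + 5.0 * rng.standard_normal((64, 64))      # made-up HU relation
        coeffs = fit_mri_to_hu(mri, ct)
        mask = np.zeros((64, 64), dtype=bool); mask[20:30, 20:30] = True     # simulated artifact region
        ct_corrupt = ct.copy(); ct_corrupt[mask] = 3000.0                    # metal-like streak values
        print(np.abs(correct_slice(ct_corrupt, mri, mask, coeffs) - ct)[mask].mean())   # residual of a few HU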

  19. The Asymptotic Expansion Method via Symbolic Computation

    Directory of Open Access Journals (Sweden)

    Juan F. Navarro

    2012-01-01

    This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.

  20. Optimal design method for a digital human–computer interface based on human reliability in a nuclear power plant. Part 3: Optimization method for interface task layout

    International Nuclear Information System (INIS)

    Jiang, Jianjun; Wang, Yiqun; Zhang, Li; Xie, Tian; Li, Min; Peng, Yuyuan; Wu, Daqing; Li, Peiyao; Ma, Congmin; Shen, Mengxu; Wu, Xing; Weng, Mengyun; Wang, Shiwei; Xie, Cen

    2016-01-01

    Highlights: • The authors present an optimization algorithm for interface task layout. • The working process of the proposed algorithm is described. • The performance evaluation uses a neural network method. • Optimized layouts of the interface tasks of an event were obtained by experiments. - Abstract: This is the last in a series of papers describing the optimal design of a digital human–computer interface of a nuclear power plant (NPP) from three different viewpoints based on human reliability. The purpose of the series is to propose optimization methods from varying perspectives to decrease human factor events that arise from defects of the human–computer interface. The present paper addresses how to effectively lay out interface tasks onto different screens, with the aim of decreasing human errors by reducing the distance that an operator moves among screens in each operation. To solve this problem, the authors propose an optimization process of interface task layout for the digital human–computer interface of an NPP. To automatically assign each interface task to one of the screens in each operation, the paper presents a shortest-moving-path optimization algorithm with a dynamic flag based on human reliability. To test the algorithm's performance, the evaluation method uses a neural network based on human reliability: the lower the human error probabilities, the better the interface task layout among the different screens. Thus, by analyzing the performance of each candidate layout, the optimization result is obtained. Finally, optimized layouts of the interface tasks of a spurious safety injection event of the NPP were obtained in an experiment; the proposed method shows good accuracy and stability.

  1. Recent advances in computational methods and clinical applications for spine imaging

    CERN Document Server

    Glocker, Ben; Klinder, Tobias; Li, Shuo

    2015-01-01

    This book contains the full papers presented at the MICCAI 2014 workshop on Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together scientists and clinicians in the field of computational spine imaging. The chapters included in this book present and discuss the new advances and challenges in these fields, using several methods and techniques in order to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modeling, simulation and surgical planning, image-guided robot-assisted surgery, and image-based diagnosis. The book also includes papers and reports from the first challenge on vertebra segmentation held at the workshop.

  2. A-VCI: A flexible method to efficiently compute vibrational spectra

    Science.gov (United States)

    Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2017-06-01

    The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate up to date computation; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation that exists today on such systems.

  3. Computing Nash equilibria through computational intelligence methods

    Science.gov (United States)

    Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.

    2005-03-01

    Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, for computing Nash equilibria of finite strategic games as global minima of a real-valued, nonnegative function. An issue of particular interest is detecting more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
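
    A minimal sketch of this formulation uses SciPy's differential evolution to minimise a nonnegative "deviation incentive" function whose zeros are Nash equilibria; the 2x2 matching-pennies game and the simplex parameterisation below are illustrative choices, not taken from the paper.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Hypothetical 2x2 bimatrix game (row payoffs A, column payoffs B); for matching
        # pennies the unique equilibrium mixes both strategies with probability 0.5.
        A = np.array([[ 1, -1],
                      [-1,  1]])
        B = -A

        def to_simplex(v):
            # Map unconstrained nonnegative weights onto the probability simplex.
            v = np.abs(v) + 1e-12
            return v / v.sum()

        def nash_gap(z):
            # Nonnegative function that is zero exactly at a Nash equilibrium.
            x, y = to_simplex(z[:2]), to_simplex(z[2:])
            gx = np.maximum(0.0, A @ y - x @ A @ y)     # row player's incentive to deviate
            gy = np.maximum(0.0, B.T @ x - x @ B @ y)   # column player's incentive to deviate
            return float(np.sum(gx**2) + np.sum(gy**2))

        res = differential_evolution(nash_gap, bounds=[(0, 1)] * 4, seed=1, tol=1e-10)
        x_eq, y_eq = to_simplex(res.x[:2]), to_simplex(res.x[2:])
        print(x_eq, y_eq, res.fun)   # approximately [0.5 0.5], [0.5 0.5], objective near 0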

  4. Advanced computational tools and methods for nuclear analyses of fusion technology systems

    International Nuclear Information System (INIS)

    Fischer, U.; Chen, Y.; Pereslavtsev, P.; Simakov, S.P.; Tsige-Tamirat, H.; Loughlin, M.; Perel, R.L.; Petrizzi, L.; Tautges, T.J.; Wilson, P.P.H.

    2005-01-01

    An overview is presented of advanced computational tools and methods developed recently for nuclear analyses of Fusion Technology systems such as the experimental device ITER ('International Thermonuclear Experimental Reactor') and the intense neutron source IFMIF ('International Fusion Material Irradiation Facility'). These include Monte Carlo based computational schemes for the calculation of three-dimensional shut-down dose rate distributions, methods, codes and interfaces for the use of CAD geometry models in Monte Carlo transport calculations, algorithms for Monte Carlo based sensitivity/uncertainty calculations, as well as computational techniques and data for IFMIF neutronics and activation calculations. (author)

  5. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    Science.gov (United States)

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
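
    The core (non-adaptive) SRC step can be sketched as follows: a test trial is sparsely coded over a dictionary whose columns are training trials, and the class with the smallest reconstruction residual wins. Orthogonal matching pursuit from scikit-learn stands in for the sparse solver; the data shapes are invented, and the supervised/unsupervised dictionary updates of the paper are not shown.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def src_classify(D, labels, x, n_nonzero=10):
            # Sparse representation based classification over dictionary D (columns = training trials).
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
            omp.fit(D, x)
            coef = omp.coef_
            residuals = {c: np.linalg.norm(x - D[:, labels == c] @ coef[labels == c])
                         for c in np.unique(labels)}
            return min(residuals, key=residuals.get)

        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 40))               # 64-dim EEG features, 40 training trials
        labels = np.repeat([0, 1], 20)                  # two imagery classes
        x = D[:, 3] + 0.1 * rng.standard_normal(64)     # noisy test trial drawn from class 0
        print(src_classify(D, labels, x))               # expected: 0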

  6. Imaging sensory effects of occipital nerve stimulation: a new computer-based method in neuromodulation.

    Science.gov (United States)

    Göbel, Anna; Göbel, Carl H; Heinze, Axel; Heinze-Kuhn, Katja; Petersen, Inga; Meinecke, Christoph; Clasen, Svenja; Niederberger, Uwe; Rasche, Dirk; Mehdorn, Hubertus M; Göbel, Hartmut

    2015-01-01

    Within the last years, occipital nerve stimulation (ONS) has proven to be an important method in the treatment of severe therapy-resistant neurological pain disorders. The correspondence between lead placement as well as possible stimulation parameters and the resulting stimulation effects remains unclear. The method aims to directly relate the neuromodulatory mechanisms with the clinical treatment results, to achieve insight in the mode of action of neuromodulation, to identify the most effective stimulation sets and to optimize individual treatment effects. We describe a new computer-based imaging method for mapping the spatial, cognitive and affective sensory effects of ONS. The procedure allows a quantitative and qualitative analysis of the relationship between lead positioning, the stimulation settings as well as the sensory and clinical stimulation effects. A regular mapping of stimulation and sensory parameters allows a coordinated monitoring. The stimulation results can be reviewed and compared with regards to clinical effectiveness. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Oligomerization of G protein-coupled receptors: computational methods.

    Science.gov (United States)

    Selent, J; Kaczor, A A

    2011-01-01

    Recent research has unveiled the complexity of mechanisms involved in G protein-coupled receptor (GPCR) functioning in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role for predicting accurate GPCR complexes. This review outlines computational approaches focusing on sequence- and structure-based methodologies as well as discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but did not yield always consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method especially when guided by experimental constraints. Some disadvantages like limited receptor flexibility and non-consideration of the membrane environment have to be taken into account. Molecular dynamics simulation can overcome these drawbacks giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes for fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug target for diverse diseases, unveiling molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.

  8. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.

  9. Enhancing school-based asthma education efforts using computer-based education for children.

    Science.gov (United States)

    Nabors, Laura A; Kockritz, Jennifer L; Ludke, Robert L; Bernstein, Jonathan A

    2012-03-01

    Schools are an important site for delivery of asthma education programs. Computer-based educational programs are a critical component of asthma education programs and may be a particularly important education method in busy school environments. The objective of this brief report is to review and critique computer-based education efforts in schools. The results of our literature review indicated that school-based computer education efforts are related to improved knowledge about asthma and its management. In some studies, improvements in clinical outcomes also occur. Data collection programs need to be built into games that improve knowledge. Many projects do not appear to last for periods greater than 1 year and little information is available about cultural relevance of these programs. Educational games and other programs are effective methods of delivering knowledge about asthma management and control. Research about the long-term effects of this increased knowledge, in regard to behavior change, is needed. Additionally, developing sustainable projects, which are culturally relevant, is a goal for future research.

  10. Improved computation method in residual life estimation of structural components

    Directory of Open Access Journals (Sweden)

    Maksimović Stevan M.

    2013-01-01

    This work considers numerical computation methods and procedures for predicting fatigue crack growth in cracked and notched structural components. The computation method is based on fatigue life prediction using the strain energy density approach. Based on the strain energy density (SED) theory, a fatigue crack growth model is developed to predict the lifetime of fatigue crack growth for single or mixed mode cracks. The model is based on an equation expressed in terms of low cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the front of the crack. To obtain an efficient computation model, the plasticity-induced crack closure phenomenon is considered during fatigue crack growth. The use of the strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components. The strain energy density method is convenient for engineering applications since it does not require any additional determination of fatigue parameters (those would need to be separately determined for the fatigue crack propagation phase); low cycle fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years. The influence of this phenomenon can be considered by means of experimental and numerical methods, and both are considered here. Finite element analysis (FEA) has been shown to be a powerful and useful tool [1,6] to analyze crack growth and crack closure effects. Computation results are compared with available experimental results. [Project of the Ministry of Science of the Republic of Serbia, no. OI 174001]

  11. Depth-Averaged Non-Hydrostatic Hydrodynamic Model Using a New Multithreading Parallel Computing Method

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2017-03-01

    Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations. However, its low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized. The original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency and save 20%–40% computation time compared to serial computing. The parallel acceleration rate and acceleration efficiency are approximately 1.45 and 72%, respectively. The parallel computing method makes a contribution to the popularization of non-hydrostatic models.
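
    The block-decomposition idea can be sketched with two Python threads, each advancing one half of a one-dimensional array and exchanging values across the virtual boundary every step. A barrier stands in here for the producer-consumer and lock synchronisation used in the paper, and a simple diffusion stencil stands in for the model's momentum equations; all values are illustrative.

        import threading
        import numpy as np

        N, STEPS, ALPHA = 100, 400, 0.25
        u = np.zeros(N); u[N // 2] = 1.0          # shared field: two blocks plus fixed ends
        barrier = threading.Barrier(2)            # synchronises the two sub-threads each step

        def worker(lo, hi):
            for _ in range(STEPS):
                new = u[lo:hi] + ALPHA * (u[lo - 1:hi - 1] - 2 * u[lo:hi] + u[lo + 1:hi + 1])
                barrier.wait()                    # all neighbour reads finished
                u[lo:hi] = new                    # write own block, incl. virtual-boundary values
                barrier.wait()                    # all writes finished before the next read phase

        threads = [threading.Thread(target=worker, args=(1, N // 2)),
                   threading.Thread(target=worker, args=(N // 2, N - 1))]
        for t in threads: t.start()
        for t in threads: t.join()
        print(round(u.sum(), 6))                  # stays close to 1 while the pulse is far from the ends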

  12. Improving communication when seeking informed consent: a randomised controlled study of a computer-based method for providing information to prospective clinical trial participants.

    Science.gov (United States)

    Karunaratne, Asuntha S; Korenman, Stanley G; Thomas, Samantha L; Myles, Paul S; Komesaroff, Paul A

    2010-04-05

    To assess the efficacy, with respect to participant understanding of information, of a computer-based approach to communication about complex, technical issues that commonly arise when seeking informed consent for clinical research trials. An open, randomised controlled study of 60 patients with diabetes mellitus, aged 27-70 years, recruited between August 2006 and October 2007 from the Department of Diabetes and Endocrinology at the Alfred Hospital and Baker IDI Heart and Diabetes Institute, Melbourne. Participants were asked to read information about a mock study via a computer-based presentation (n = 30) or a conventional paper-based information statement (n = 30). The computer-based presentation contained visual aids, including diagrams, video, hyperlinks and quiz pages. Understanding of information as assessed by quantitative and qualitative means. Assessment scores used to measure level of understanding were significantly higher in the group that completed the computer-based task than the group that completed the paper-based task (82% v 73%; P = 0.005). More participants in the group that completed the computer-based task expressed interest in taking part in the mock study (23 v 17 participants; P = 0.01). Most participants from both groups preferred the idea of a computer-based presentation to the paper-based statement (21 in the computer-based task group, 18 in the paper-based task group). A computer-based method of providing information may help overcome existing deficiencies in communication about clinical research, and may reduce costs and improve efficiency in recruiting participants for clinical trials.

  13. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  14. Parallel scientific computing theory, algorithms, and applications of mesh based and meshless methods

    CERN Document Server

    Trobec, Roman

    2015-01-01

    This book is concentrated on the synergy between computer science and numerical analysis. It is written to provide a firm understanding of the described approaches to computer scientists, engineers or other experts who have to solve real problems. The meshless solution approach is described in more detail, with a description of the required algorithms and the methods that are needed for the design of an efficient computer program. Most of the details are demonstrated on solutions of practical problems, from basic to more complicated ones. This book will be a useful tool for any reader interes

  15. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
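
    The decision flow described in this patent abstract (measure an environmental condition, assess the processing system's sensitivity to it, then decide whether to reconfigure the fault tolerance) can be sketched as follows; the sensor stub, sensitivity model and threshold are purely illustrative assumptions, not taken from the patent.

        import random

        TMR_THRESHOLD = 0.7                       # hypothetical trip point, not from the patent

        def measure_environment():
            # Stand-in for a radiation or temperature sensor reading, normalised to [0, 1].
            return random.random()

        def sensitivity(level, susceptibility=1.2):
            # Stand-in for the on-board processor's sensitivity to the measured condition.
            return level * susceptibility

        def choose_fault_tolerance():
            level = measure_environment()
            if sensitivity(level) > TMR_THRESHOLD:
                return "TMR"                      # run three redundant copies and vote
            return "simplex"                      # single-string execution to save resources

        print(choose_fault_tolerance())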

  16. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  17. Computational methods for fluid dynamics

    CERN Document Server

    Ferziger, Joel H

    2002-01-01

    In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, like direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. The book also contains a great deal of practical advice for code developers and users, it is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature user-friendly demo-version of a commercial CFD software has been added, which ca...

  18. A ROC-based feature selection method for computer-aided detection and diagnosis

    Science.gov (United States)

    Wang, Songyuan; Zhang, Guopeng; Liao, Qimei; Zhang, Junying; Jiao, Chun; Lu, Hongbing

    2014-03-01

    Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples in one class (having a disease) than in the other, resulting in suboptimal classifier performance. Identifying the most characterizing features of the observed data for lesion detection is critical for improving the sensitivity and minimizing the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with the selection metric FAST (feature assessment by sliding thresholds), based on the area under the ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the space of candidate features selected by mR-FAST is more characterizing for lesion detection, with higher AUC, enabling a compact subset of superior features to be found at low cost.
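
    A simplified sketch of the idea: rank each feature by the AUC it achieves on its own (a FAST-style relevance score), then greedily penalise correlation with the features already selected (the minimal-redundancy part). The scoring combination and the synthetic data are illustrative, not the paper's mR-FAST definition.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def auc_relevance(X, y):
            # Per-feature relevance: AUC of each single feature used directly as a score.
            return np.array([max(roc_auc_score(y, X[:, j]), roc_auc_score(y, -X[:, j]))
                             for j in range(X.shape[1])])

        def select_features(X, y, k=5):
            # Greedy mRMR-style selection: maximise AUC relevance, penalise redundancy.
            rel = auc_relevance(X, y)
            corr = np.abs(np.corrcoef(X, rowvar=False))
            selected = [int(np.argmax(rel))]
            while len(selected) < k:
                redundancy = corr[:, selected].mean(axis=1)
                score = rel - redundancy
                score[selected] = -np.inf
                selected.append(int(np.argmax(score)))
            return selected

        rng = np.random.default_rng(0)
        X = rng.standard_normal((100, 20))        # hypothetical 100 lesions x 20 texture features
        y = rng.integers(0, 2, 100)               # hypothetical benign/malignant labels
        print(select_features(X, y, k=5))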

  19. Indirect versus direct feedback in computer-based Prism Adaptation Therapy

    DEFF Research Database (Denmark)

    Wilms, Inge Linda; Rytter, Hana Malá

    2010-01-01

    Prism Adaptation Therapy (PAT) is an intervention method in the treatment of the attention disorder neglect (Frassinetti, Angeli, Meneghello, Avanzi, & Ladavas, 2002; Rossetti, et al., 1998). The aim of this study was to investigate whether one session of PAT using a computer-attached touchscreen...... in the aftereffect. The findings have direct implications for future implementations of computer-based methods of treatment of visuospatial disorders and computer-assisted rehabilitation in general....

  20. Computer Based Test Untuk Seleksi Masuk Politeknik Negeri Bengkalis

    Directory of Open Access Journals (Sweden)

    Agus Tedyyana

    2017-11-01

    Selection of new student candidates can be carried out with a Computer Based Test (CBT) application. The methods used include data collection techniques, system analysis, design modelling, implementation and testing. This study produced a CBT application in which the questions presented from the question bank are randomized so that the same question does not reappear, using the Fisher-Yates Shuffle method. To secure the question data while connected to the network, a message-encoding technique is required so that each question passes through data encryption and decryption before it is displayed; the RSA cryptographic algorithm is used for this purpose. The software was designed using the waterfall model, the database using an entity relationship diagram, and the interface using Hypertext Markup Language (HTML), Cascading Style Sheets (CSS) and jQuery; the system was implemented as a web application using the PHP programming language and a MySQL database. The network architecture used by the Computer Based Test application is a client-server model on a Local Area Network (LAN). Keywords: Computer Based Test, Fisher-Yates Shuffle, Cryptography, Local Area Network
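
    The question-randomisation step relies on the Fisher-Yates shuffle, which produces an unbiased random permutation of the question bank so that no question repeats within one drawn sequence. A minimal sketch (in Python rather than the application's PHP, with hypothetical question IDs):

        import random

        def fisher_yates_shuffle(items):
            # In-place Fisher-Yates shuffle: every permutation is equally likely.
            a = list(items)
            for i in range(len(a) - 1, 0, -1):
                j = random.randint(0, i)      # pick from the not-yet-fixed prefix
                a[i], a[j] = a[j], a[i]
            return a

        question_bank = [f"Q{n}" for n in range(1, 11)]   # hypothetical question IDs
        print(fisher_yates_shuffle(question_bank))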

  1. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive to traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  2. VELOCITY FIELD COMPUTATION IN VIBRATED GRANULAR MEDIA USING AN OPTICAL FLOW BASED MULTISCALE IMAGE ANALYSIS METHOD

    Directory of Open Access Journals (Sweden)

    Johan Debayle

    2011-05-01

    An image analysis method has been developed in order to compute the velocity field of a granular medium (sand grains, mean diameter 600 μm) submitted to different kinds of mechanical stresses. The differential method, based on optical flow conservation, consists in describing a dense motion field with vectors associated with each pixel. A multiscale, coarse-to-fine analytical approach through tailor-sized windows yields the best compromise between accuracy and robustness of the results, while enabling an acceptable computation time. The corresponding algorithm is presented and its validation discussed through different tests. The validation tests show that the method is satisfactory when specific values are attributed to parameters in association with the size of the image analysis window. An application to vibrated sand has been studied. An instrumented laboratory device provides sinusoidal vibrations and enables external optical observations of sand motion in 3D transparent boxes. At 50 Hz, by increasing the relative acceleration G, the onset and development of two convective rolls can be observed. An ultra-fast camera records the grain avalanches, and several pairs of images are analysed by the proposed method. The vertical velocity profiles are deduced and allow the dimensions of the fluidized region to be precisely quantified as a function of G.
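
    A dense, multiscale coarse-to-fine optical flow field can be obtained along these lines with OpenCV's Farneback estimator, which stands in here for the tailored algorithm of the paper; the synthetic frames, frame rate and parameter values are illustrative assumptions.

        import numpy as np
        import cv2   # OpenCV's dense Farneback flow stands in for the paper's own estimator

        def velocity_field(img1, img2, dt=1.0, pixel_size=1.0):
            # Dense per-pixel velocity (in pixel_size units per dt) between two grey frames.
            flow = cv2.calcOpticalFlowFarneback(img1, img2, None,
                                                pyr_scale=0.5, levels=4, winsize=21,
                                                iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
            return flow * pixel_size / dt                   # shape (H, W, 2): (vx, vy)

        rng = np.random.default_rng(0)
        frame1 = cv2.GaussianBlur((rng.random((128, 128)) * 255).astype(np.uint8), (5, 5), 0)
        frame2 = np.roll(frame1, 2, axis=0)                 # rigid 2-pixel downward motion
        v = velocity_field(frame1, frame2, dt=1.0 / 1000.0) # e.g. an ultra-fast camera at 1000 fps
        print(v[..., 1].mean())                             # roughly 2000 pixel/s (2 pixels per frame)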

  3. Computer-Based Molecular Modelling: Finnish School Teachers' Experiences and Views

    Science.gov (United States)

    Aksela, Maija; Lundell, Jan

    2008-01-01

    Modern computer-based molecular modelling opens up new possibilities for chemistry teaching at different levels. This article presents a case study seeking insight into Finnish school teachers' use of computer-based molecular modelling in teaching chemistry, into the different working and teaching methods used, and their opinions about necessary…

  4. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems

  5. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems
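
    As a small illustration of the Rayleigh-Ritz idea in a truncated basis (a quantum-mechanical toy problem rather than the lattice field theory of the paper), the sketch below estimates the ground-state energy of an anharmonic oscillator in a harmonic-oscillator basis and shows how the estimate settles as the basis grows; the coupling and basis sizes are arbitrary.

        import numpy as np

        # H = p^2/2 + x^2/2 + lam*x^4, expanded in a truncated harmonic-oscillator basis.
        def ground_state_energy(lam=0.1, n_basis=30):
            n = np.arange(n_basis)
            a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator in this basis
            x = (a + a.T) / np.sqrt(2.0)
            p2 = -((a - a.T) @ (a - a.T)) / 2.0       # p^2, since p = i(a_dag - a)/sqrt(2)
            H = p2 / 2.0 + x @ x / 2.0 + lam * np.linalg.matrix_power(x, 4)
            return np.linalg.eigvalsh(H)[0]           # lowest Rayleigh-Ritz eigenvalue

        for nb in (5, 10, 20, 30):                    # watch the estimate converge as the basis grows
            print(nb, ground_state_energy(0.1, nb))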

  6. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    Computer-generated hologram (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To solve these problems, a depth compensating CGH calculation method based on symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth compensating factor is defined and used for calculating the holograms of points with different depth positions instead of layer-based methods. The proposed method is suitable for arbitrary sampling objects, with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.

  7. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
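
    The retrieval loop described above (minimise a fitness function measuring the dissimilarity between observed and synthetic spectra) can be sketched with SciPy's dual-annealing optimiser standing in for the article's genetic algorithm and simulated annealing; the two-parameter forward model, the wavelength grid and the noise level are invented for the example.

        import numpy as np
        from scipy.optimize import dual_annealing

        wav = np.linspace(1.0, 2.0, 200)                       # hypothetical wavelength grid (microns)

        def synthetic_spectrum(abundance, t_scale):
            # Hypothetical forward model: a Gaussian absorption band on a flat continuum.
            return t_scale * (1.0 - abundance * np.exp(-((wav - 1.4) / 0.05) ** 2))

        rng = np.random.default_rng(0)
        observed = synthetic_spectrum(0.3, 0.95) + 0.005 * rng.standard_normal(wav.size)

        def fitness(params):
            # Dissimilarity between observed and synthetic data (the function the ECM minimises).
            return float(np.mean((observed - synthetic_spectrum(*params)) ** 2))

        result = dual_annealing(fitness, bounds=[(0.0, 1.0), (0.5, 1.5)], seed=1)
        print(result.x)    # recovered parameters, close to (0.3, 0.95)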

  8. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  9. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  10. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  11. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen a very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie...

  12. Numerical computer methods part D

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The aim of this volume is to brief researchers of the importance of data analysis in enzymology, and of the modern methods that have developed concomitantly with computer hardware. It is also to validate researchers' computer programs with real and synthetic data to ascertain that the results produced are what they expected. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.

  13. Advanced computational electromagnetic methods and applications

    CERN Document Server

    Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya

    2015-01-01

    This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.

  14. Advanced soft computing diagnosis method for tumour grading.

    Science.gov (United States)

    Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N

    2006-01-01

    To develop an advanced diagnostic method for urinary bladder tumour grading. A novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts. Eight concepts represented the main histopathological features for tumour grading. The ninth concept represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved a classification accuracy of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support tumour grade diagnosis decisions was proposed and developed. The novelty of the method is based on employing the soft computing method of FCMs to represent specialized knowledge on histopathology and on augmenting the FCMs' ability using an unsupervised learning algorithm, the AHL. The proposed method performs with reasonably high accuracy compared to other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
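
    The inference step of a fuzzy cognitive map (before any AHL weight adaptation) can be sketched as follows; the four-concept map, the weights and the activation values are invented for illustration and are much smaller than the nine-concept model of the paper.

        import numpy as np

        def run_fcm(weights, activation, n_iter=20, slope=1.0):
            # Iterate a fuzzy cognitive map: each concept is updated from its own value
            # plus the weighted influence of the other concepts, squashed by a sigmoid.
            A = activation.astype(float)
            for _ in range(n_iter):
                A = 1.0 / (1.0 + np.exp(-slope * (A + A @ weights)))
            return A

        # Hypothetical 4-concept map: three histopathological features driving one
        # output concept (tumour grade); W[j, i] is the influence of concept j on concept i.
        W = np.array([[0.0, 0.0, 0.0, 0.6],
                      [0.0, 0.0, 0.0, 0.4],
                      [0.0, 0.0, 0.0, 0.8],
                      [0.0, 0.0, 0.0, 0.0]])
        case = np.array([0.9, 0.3, 0.7, 0.0])    # observed feature activations for one case
        print(run_fcm(W, case)[-1])               # converged activation of the grade concept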

  15. On Spoken English Phoneme Evaluation Method Based on Sphinx-4 Computer System

    Directory of Open Access Journals (Sweden)

    Li Qin

    2017-12-01

    Full Text Available In oral English learning, HDPs (phonemes that are hard to distinguish) are areas where Chinese students frequently make pronunciation mistakes. This paper studies a speech phoneme evaluation method for HDPs, hoping to improve the ability of individualized evaluation of HDPs and help provide a personalized learning platform for English learners. First of all, this paper briefly introduces the relevant speech recognition technologies and pronunciation evaluation algorithms, and also describes the phonetic retrieval, phonetic decoding and phonetic knowledge base in the Sphinx-4 computer system, which constitute the technological foundation for phoneme evaluation. Then it proposes an HDP evaluation model, which integrates the reliability of the speech processing system and the individualization of spoken English learners into the evaluation system. After collecting HDPs of spoken English learners and sorting them into different sets, it uses the evaluation system to recognize these HDP sets and finally analyzes the experimental results of HDP evaluation, which prove the effectiveness of the HDP evaluation model.

  16. Applications of meshless methods for damage computations with finite strains

    International Nuclear Information System (INIS)

    Pan Xiaofei; Yuan Huang

    2009-01-01

    Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied to the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified with experimental data. Computational results reveal that the damage which takes place in the interior of specimens extends to the exterior and causes fracture of the specimens; the damage evolution is fast relative to the whole tensile loading process. The EFG method provides a more stable and robust numerical solution compared with the FEM analysis.

  17. Comparability of Computer-based and Paper-based Versions of Writing Section of PET in Iranian EFL Context

    OpenAIRE

    Mohammad Mohammadi; Masoud Barzgaran

    2010-01-01

    Computer technology has provided language testing experts with the opportunity to develop computerized versions of traditional paper-based language tests. New generations of TOEFL and Cambridge IELTS, BULATS, KET, PET are good examples of computer-based language tests. Since this new method of testing introduces new factors into the realm of language assessment (e.g., mode of test delivery, familiarity with computers, etc.), the question may be whether the two modes of computer- and paper-based te...

  18. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    Science.gov (United States)

    Jia, Meng; Fan, Yang-Yu; Tian, Wei-Jian

    2011-03-01

    Attempting to find a fast computing method for the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when they are computed as the trajectories extend. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory and combined with the improved DHT computing method, this paper reports a new fast DHT computing method, which increases the DHT computing speed without decreasing its accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 60872159).

  19. A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system

    International Nuclear Information System (INIS)

    Jia Meng; Fan Yang-Yu; Tian Wei-Jian

    2011-01-01

    Attempting to find a fast computing method for the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction when they are computed as the trajectories extend. This conclusion means that the stable flow with perturbation will approach the real trajectory as it extends over time. Based on this theory and combined with the improved DHT computing method, this paper reports a new fast DHT computing method, which increases the DHT computing speed without decreasing its accuracy. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  20. Thermoelectricity analogy method for computing the periodic heat transfer in external building envelopes

    International Nuclear Information System (INIS)

    Peng Changhai; Wu Zhishen

    2008-01-01

    Simple and effective computation methods are needed to calculate energy efficiency in buildings for building thermal comfort and HVAC system simulations. This paper, which is based upon the theory of thermoelectricity analogy, develops a new harmonic method, the thermoelectricity analogy method (TEAM), to compute the periodic heat transfer in external building envelopes (EBE). It presents, in detail, the principles and specific techniques of TEAM to calculate both the decay rates and time lags of EBE. First, a set of linear equations is established using the theory of thermoelectricity analogy. Second, the temperature of each node is calculated by solving the linear equations set. Finally, decay rates and time lags are found by solving simple mathematical expressions. Comparisons show that this method is highly accurate and efficient. Moreover, relative to the existing harmonic methods, which are based on the classical control theory and the method of separation of variables, TEAM does not require complicated derivation and is amenable to hand computation and programming
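
    As an illustration of the thermoelectricity (thermal-electrical) analogy described above, the following minimal sketch treats a wall as an RC network driven by a sinusoidal outdoor temperature, solves the complex nodal equations, and derives a decay rate and time lag for the innermost node. The layer resistances, capacitances and the one-node-per-layer discretization are illustrative assumptions, not the paper's TEAM formulation.

```python
import numpy as np

# Hypothetical three-layer wall, one node per layer (illustrative values only).
R = np.array([0.10, 0.80, 0.05])      # layer thermal resistances [m^2 K/W]
C = np.array([4.0e4, 2.0e4, 6.0e4])   # layer thermal capacitances [J/(m^2 K)]
omega = 2 * np.pi / 86400.0           # daily cycle [rad/s]

n = len(R)
G = 1.0 / R                            # conductances, as in an electrical RC network
A = np.zeros((n, n), dtype=complex)
for i in range(n):
    A[i, i] += 1j * omega * C[i] + G[i]       # storage term + link to outer neighbour
    if i + 1 < n:                             # link to inner neighbour
        A[i, i] += G[i + 1]
        A[i, i + 1] -= G[i + 1]
        A[i + 1, i] -= G[i + 1]

# Unit sinusoidal outdoor temperature applied through the outermost conductance.
b = np.zeros(n, dtype=complex)
b[0] = G[0] * 1.0

T = np.linalg.solve(A, b)              # complex node temperatures (phasors)
decay_rate = 1.0 / abs(T[-1])          # attenuation of the innermost node vs unit swing
time_lag_h = -np.angle(T[-1]) / omega / 3600.0
print(f"decay rate ~ {decay_rate:.2f}, time lag ~ {time_lag_h:.1f} h")
```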

  1. Randomized controlled trial of a computer-based module to improve contraceptive method choice.

    Science.gov (United States)

    Garbers, Samantha; Meserve, Allison; Kottke, Melissa; Hatcher, Robert; Ventura, Alicia; Chiasson, Mary Ann

    2012-10-01

    Unintended pregnancy is common in the United States, and interventions are needed to improve contraceptive use among women at higher risk of unintended pregnancy, including Latinas and women with low educational attainment. A three-arm randomized controlled trial was conducted at two family planning sites serving low-income, predominantly Latina populations. The trial tested the efficacy of a computer-based contraceptive assessment module in increasing the proportion of patients choosing an effective method of contraception (<10 pregnancies per 100 women per year, typical use). Participants were randomized to complete the module and receive tailored health materials, to complete the module and receive generic health materials, or to a control condition. In intent-to-treat analyses adjusted for recruitment site (n=2231), family planning patients who used the module were significantly more likely to choose an effective contraceptive method: 75% among those who received tailored materials [odds ratio (OR)=1.56; 95% confidence interval (CI): 1.23-1.98] and 78% among those who received generic materials (OR=1.74; 95% CI: 1.35-2.25), compared to 65% among control arm participants. The findings support prior research suggesting that patient-centered interventions can positively influence contraceptive method choice. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Computer-based literature search in medical institutions in India

    Directory of Open Access Journals (Sweden)

    Kalita Jayantee

    2007-01-01

    Full Text Available Aim: To study the use of computer-based literature search and its application in clinical training and patient care as a surrogate marker of evidence-based medicine. Materials and Methods: A questionnaire comprising questions on purpose (presentation, patient management, research), realm (site accessed), nature and frequency of search, effect, infrastructure, formal training in computer-based literature search, and suggestions for further improvement was sent to residents and faculty of a Postgraduate Medical Institute (PGI) and a Medical College. The responses were compared amongst different subgroups of respondents. Results: Out of 300 subjects approached, 194 responded, of whom 103 were from the PGI and 91 from the Medical College. There were 97 specialty residents, 58 super-specialty residents and 39 faculty members. Computer-based literature search was done at least once a month by 89%, though there was marked variability in frequency and extent. The motivation for computer-based literature search was presentation in 90%, research in 65% and patient management in 60.3%. The benefit of the search was acknowledged in learning and teaching by 80%, research by 65% and patient care by 64.4% of respondents. Formal training in computer-based literature search was received by 41%, of whom 80% were residents. Residents from the PGI did more frequent and more extensive computer-based literature search, which was attributed to better infrastructure and training. Conclusion: Training and infrastructure are both crucial for computer-based literature search, which may translate into evidence-based medicine.

  3. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

  4. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
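
    The core idea of evaluating local, derivative-based sensitivity indices at many points distributed across the parameter space can be sketched as below. The two-parameter test model, the prior variances and the sampling ranges are placeholders; the exact DELSA index definition in the paper may differ.

```python
import numpy as np

def model(theta):
    # Hypothetical nonlinear test model with two parameters.
    k, s = theta
    return k * np.exp(-s) + s ** 2

def local_first_order_indices(f, theta, prior_var, h=1e-6):
    """Derivative-based local sensitivity indices at a single point, normalised
    by a first-order estimate of the output variance (local SA evaluated at one
    location; repeating this over many samples gives the distributed picture)."""
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    for j in range(theta.size):
        d = np.zeros_like(theta)
        d[j] = h * max(1.0, abs(theta[j]))
        grad[j] = (f(theta + d) - f(theta - d)) / (2 * d[j])   # central difference
    contrib = grad ** 2 * prior_var          # per-parameter variance contribution
    return contrib / contrib.sum()

# Evaluate the local indices at many points sampled across the parameter space.
rng = np.random.default_rng(0)
samples = rng.uniform([0.5, 0.1], [2.0, 1.5], size=(1000, 2))
prior_var = np.array([0.2, 0.1])             # assumed prior parameter variances
indices = np.array([local_first_order_indices(model, t, prior_var) for t in samples])
print("median index per parameter:", np.median(indices, axis=0))
```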

  5. Primary care physicians’ perspectives on computer-based health risk assessment tools for chronic diseases: a mixed methods study

    Directory of Open Access Journals (Sweden)

    Teja Voruganti

    2015-09-01

    Full Text Available Background: Health risk assessment tools compute an individual's risk of developing a disease. Routine use of such tools by primary care physicians (PCPs) is potentially useful in chronic disease prevention. We sought physicians' awareness and perceptions of the usefulness, usability and feasibility of performing assessments with computer-based risk assessment tools in primary care settings. Methods: Focus groups and usability testing with a computer-based risk assessment tool were conducted with PCPs from both university-affiliated and community-based practices. Analysis was derived from grounded theory methodology. Results: PCPs (n = 30) were aware of several risk assessment tools although only select tools were used routinely. The decision to use a tool depended on how use impacted practice workflow and whether the tool had credibility. Participants felt that embedding tools in the electronic medical record (EMR) system might allow health information from the medical record to auto-populate into the tool. User comprehension of risk could also be improved with computer-based interfaces that present risk in different formats. Conclusions: In this study, PCPs chose to use certain tools more regularly because of usability and credibility. Despite there being differences in the particular tools a clinical practice used, there was general appreciation for the usefulness of tools for different clinical situations. Participants characterised particular features of an ideal tool, feeling strongly that embedding risk assessment tools in the EMR would maximise accessibility and use of the tool for chronic disease management. However, appropriate practice workflow integration and features that facilitate patient understanding at point-of-care are also essential.

  6. ZIVIS: A City Computing Platform Based on Volunteer Computing

    International Nuclear Information System (INIS)

    Antoli, B.; Castejon, F.; Giner, A.; Losilla, G.; Reynolds, J. M.; Rivero, A.; Sangiao, S.; Serrano, F.; Tarancon, A.; Valles, R.; Velasco, J. L.

    2007-01-01

    Volunteer computing has come up as a new form of distributed computing. Unlike other computing paradigms like Grids, which tend to be based on complex architectures, volunteer computing has demonstrated a great ability to integrate dispersed, heterogeneous computing resources with ease. This article presents ZIVIS, a project which aims to deploy a city-wide computing platform in Zaragoza (Spain). ZIVIS is based on BOINC (Berkeley Open Infrastructure for Network Computing), a popular open source framework to deploy volunteer and desktop grid computing systems. A scientific code which simulates the trajectories of particles moving inside a stellarator fusion device has been chosen as the pilot application of the project. In this paper we describe the approach followed to port the code to the BOINC framework as well as some novel techniques, based on standard Grid protocols, that we have used to access the output data present in the BOINC server from a remote visualizer. (Author)

  7.   Indirect versus direct feedback in computer-based Prism Adaptation Therapy

    DEFF Research Database (Denmark)

    Wilms, Inge Linda; Rytter, Hana Malá

    2010-01-01

      Prism Adaptation Therapy (PAT) is an intervention method in the treatment of the attention disorder neglect (Frassinetti, Angeli, Meneghello, Avanzi, & Ladavas, 2002; Rossetti, et al., 1998). The aim of this study was to investigate whether one session of PAT using a computer-attached touchscreen...... have direct implications for future implementations of computer-based methods of treatment of visuospatial disorders and computer-assisted rehabilitation in general....

  8. A New Energy-Based Method for 3-D Finite-Element Nonlinear Flux Linkage computation of Electrical Machines

    DEFF Research Database (Denmark)

    Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen

    2011-01-01

    This paper presents a new method for computation of the nonlinear flux linkage in 3-D finite-element models (FEMs) of electrical machines. Accurate computation of the nonlinear flux linkage in 3-D FEM is not an easy task. Compared to the existing energy-perturbation method, the new technique......-perturbation method. The new method proposed is validated using experimental results on two different permanent magnet machines....

  9. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and that were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.

  10. Choosing Learning Methods Suitable for Teaching and Learning in Computer Science

    Science.gov (United States)

    Taylor, Estelle; Breed, Marnus; Hauman, Ilette; Homann, Armando

    2013-01-01

    Our aim is to determine which teaching methods students in Computer Science and Information Systems prefer. There are in total 5 different paradigms (behaviorism, cognitivism, constructivism, design-based and humanism) with 32 models between them. Each model is unique and states different learning methods. Recommendations are made on methods that…

  11. RPM-WEBBSYS: A web-based computer system to apply the rational polynomial method for estimating static formation temperatures of petroleum and geothermal wells

    Science.gov (United States)

    Wong-Loya, J. A.; Santoyo, E.; Andaverde, J. A.; Quiroz-Ruiz, A.

    2015-12-01

    A Web-Based Computer System (RPM-WEBBSYS) has been developed for the application of the Rational Polynomial Method (RPM) to estimate static formation temperatures (SFT) of geothermal and petroleum wells. The system is also capable of reproducing the full thermal recovery processes that occur during well completion. RPM-WEBBSYS has been programmed using advances in information technology to perform SFT computations more efficiently. RPM-WEBBSYS is user-friendly and may be rapidly executed by using any computing device (e.g., personal computers and portable computing devices such as tablets or smartphones) with Internet access and a web browser. The computer system was validated using bottomhole temperature (BHT) measurements logged in a synthetic heat transfer experiment, where a good match between predicted and true SFT was achieved. RPM-WEBBSYS was finally applied to BHT logs collected from well drilling and shut-in operations, where the typical problems of under- and over-estimation of the SFT (exhibited by most of the existing analytical methods) were effectively corrected.

  12. Evaluating the efficiency of divestiture policy in promoting competitiveness using an analytical method and agent-based computational economics

    International Nuclear Information System (INIS)

    Rahimiyan, Morteza; Rajabi Mashhadi, Habib

    2010-01-01

    Choosing a desired policy for the divestiture of dominant firms' generation assets has been a challenging task and an open question for the regulatory authority. To deal with this problem, in this paper, an analytical method and an agent-based computational economics (ACE) approach are used for ex-ante analysis of divestiture policy in reducing market power. The analytical method is applied to solve a designed concentration boundary problem, even for situations where the cost data of generators are unknown. The concentration boundary problem is the problem of minimizing or maximizing market concentration subject to the operational constraints of the electricity market. It is proved here that the market concentration corresponding to the operating condition always lies within an interval calculated by the analytical method. For situations where the cost function of generators is available, the ACE is used to model the electricity market. In ACE, each power producer's profit-maximization problem is solved by the computational approach of Q-learning. The power producer using the Q-learning method learns from past experiences to implicitly identify market power and find the desired response in competing with its rivals. Both methods are applied to a multi-area power system and the effects of different divestiture policies on market behavior are analyzed.

  13. Evaluating the efficiency of divestiture policy in promoting competitiveness using an analytical method and agent-based computational economics

    Energy Technology Data Exchange (ETDEWEB)

    Rahimiyan, Morteza; Rajabi Mashhadi, Habib [Department of Electrical Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad (Iran)

    2010-03-15

    Choosing a desired policy for the divestiture of dominant firms' generation assets has been a challenging task and an open question for the regulatory authority. To deal with this problem, in this paper, an analytical method and an agent-based computational economics (ACE) approach are used for ex-ante analysis of divestiture policy in reducing market power. The analytical method is applied to solve a designed concentration boundary problem, even for situations where the cost data of generators are unknown. The concentration boundary problem is the problem of minimizing or maximizing market concentration subject to the operational constraints of the electricity market. It is proved here that the market concentration corresponding to the operating condition always lies within an interval calculated by the analytical method. For situations where the cost function of generators is available, the ACE is used to model the electricity market. In ACE, each power producer's profit-maximization problem is solved by the computational approach of Q-learning. The power producer using the Q-learning method learns from past experiences to implicitly identify market power and find the desired response in competing with its rivals. Both methods are applied to a multi-area power system and the effects of different divestiture policies on market behavior are analyzed. (author)
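
    The Q-learning component mentioned in the two records above can be illustrated with a minimal tabular sketch in which a single producer agent learns a bidding action per market state. The state/action discretization and the market-clearing reward below are stand-ins, not the paper's market model.

```python
import numpy as np

# Minimal tabular Q-learning sketch for one producer agent choosing a bid markup.
n_states, n_actions = 5, 3           # e.g. discretised demand levels x bid markups
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(1)

def market_step(state, action):
    """Placeholder market clearing: returns (profit, next_state)."""
    profit = rng.normal(loc=action * (state + 1), scale=1.0)
    return profit, rng.integers(n_states)

state = 0
for _ in range(10_000):
    # epsilon-greedy exploration of bid actions
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    reward, next_state = market_step(state, action)
    # standard Q-learning update toward reward plus discounted best future value
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("learned greedy bid per state:", Q.argmax(axis=1))
```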

  14. A new fault detection method for computer networks

    International Nuclear Information System (INIS)

    Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian

    2013-01-01

    Over the past few years, fault detection for computer networks has attracted extensive attention for its importance in network management. Most existing fault detection methods are based on active probing techniques, which can detect the occurrence of faults quickly and precisely. But these methods suffer from the limitation of traffic overhead, especially in large-scale networks. To relieve the traffic overhead induced by active probing based methods, a new fault detection method, whose key is to divide the detection process into multiple stages, is proposed in this paper. During each stage, only a small region of the network is detected by using a small set of probes. Meanwhile, it also ensures that the entire network can be covered after multiple detection stages. This method can guarantee that the traffic used by probes during each detection stage is sufficiently small so that the network can operate without severe disturbance from the probes. Several simulation results verify the effectiveness of the proposed method.

  15. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.

  16. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described

  17. Computer Based Road Accident Reconstruction Experiences

    Directory of Open Access Journals (Sweden)

    Milan Batista

    2005-03-01

    Full Text Available Since road accident analyses and reconstructions are increasingly based on specific computer software for simulation of vehicle driving dynamics and collision dynamics, and for simulation of a set of trial runs from which the model that best describes a real event can be selected, the paper presents an overview of some computer software and methods available to accident reconstruction experts. Besides being time-saving, when properly used such computer software can provide more authentic and more trustworthy accident reconstruction; therefore, practical experiences while using computer software tools for road accident reconstruction obtained in the Transport Safety Laboratory at the Faculty for Maritime Studies and Transport of the University of Ljubljana are presented and discussed. This paper also addresses software technology for extracting maximum information from the accident photo-documentation to support accident reconstruction based on the simulation software, as well as the field work of reconstruction experts or police on the road accident scene defined by this technology.

  18. Numerical methods for characterization of synchrotron radiation based on the Wigner function method

    Directory of Open Access Journals (Sweden)

    Takashi Tanaka

    2014-06-01

    Full Text Available Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of betatron functions to maximize the brilliance of undulator radiation is discussed.

  19. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  20. Zonal methods and computational fluid dynamics

    International Nuclear Information System (INIS)

    Atta, E.H.

    1985-01-01

    Recent advances in developing numerical algorithms for solving fluid flow problems, and the continuing improvement in the speed and storage of large scale computers, have made it feasible to compute the flow field about complex and realistic configurations. Current solution methods involve the use of a hierarchy of mathematical models ranging from the linearized potential equation to the Navier–Stokes equations. Because of the increasing complexity of both the geometries and flowfields encountered in practical fluid flow simulation, there is a growing emphasis in computational fluid dynamics on the use of zonal methods. A zonal method is one that subdivides the total flow region into interconnected smaller regions or zones. The flow solutions in these zones are then patched together to establish the global flow field solution. Zonal methods are primarily used either to limit the complexity of the governing flow equations to a localized region or to alleviate the grid generation problems about geometrically complex and multicomponent configurations. This paper surveys the application of zonal methods for solving the flow field about two- and three-dimensional configurations. Various factors affecting their accuracy and ease of implementation are also discussed. From the presented review it is concluded that zonal methods promise to be very effective for computing complex flowfields and configurations. Currently there are increasing efforts to improve their efficiency, versatility, and accuracy.

  1. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  2. Empirical evaluation methods in computer vision

    CERN Document Server

    Christensen, Henrik I

    2002-01-01

    This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms.

  3. Advanced computer-based training

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, H D; Martin, H D

    1987-05-01

    The paper presents new techniques of computer-based training for personnel of nuclear power plants. Training on full-scope simulators is further increased by use of dedicated computer-based equipment. An interactive communication system runs on a personal computer linked to a video disc; a part-task simulator runs on 32 bit process computers and shows two versions: as functional trainer or as on-line predictor with an interactive learning system (OPAL), which may be well-tailored to a specific nuclear power plant. The common goal of both developments is the optimization of the cost-benefit ratio for training and equipment.

  4. Advanced computer-based training

    International Nuclear Information System (INIS)

    Fischer, H.D.; Martin, H.D.

    1987-01-01

    The paper presents new techniques of computer-based training for personnel of nuclear power plants. Training on full-scope simulators is further increased by use of dedicated computer-based equipment. An interactive communication system runs on a personal computer linked to a video disc; a part-task simulator runs on 32 bit process computers and shows two versions: as functional trainer or as on-line predictor with an interactive learning system (OPAL), which may be well-tailored to a specific nuclear power plant. The common goal of both developments is the optimization of the cost-benefit ratio for training and equipment. (orig.) [de

  5. Organization of the secure distributed computing based on multi-agent system

    Science.gov (United States)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, developing methods for distributed computing receives much attention. One of the methods of distributed computing is the use of multi-agent systems. The organization of distributed computing based on conventional networked computers can experience security threats performed by computational processes. The authors have developed a unified agent algorithm for the control system of computing network node operation. Networked PCs are used as computing nodes. The proposed multi-agent control system for the implementation of distributed computing allows the processing power of the computers of any existing network to be organized in a short time to solve large tasks by creating a distributed computing system. Agents based on a computer network can: configure a distributed computing system; distribute the computational load among the computers operated by agents; and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting new computers to the computing system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of distributed computing. This organization of the distributed computing system reduces the problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed system, which could lead to wrong decisions. In addition, the system checks and corrects wrong results.

  6. Algebraic Verification Method for SEREs Properties via Groebner Bases Approaches

    Directory of Open Access Journals (Sweden)

    Ning Zhou

    2013-01-01

    Full Text Available This work presents an efficient solution using a computer algebra system to perform linear temporal property verification for synchronous digital systems. The method is essentially based on both Groebner bases approaches and symbolic simulation. A mechanism for constructing canonical polynomial-set-based symbolic representations for both circuit descriptions and assertions is studied. We then present a complete checking algorithm framework based on these algebraic representations by using Groebner bases. The computational experience reported in this work shows that the algebraic approach is a quite competitive checking method and will be a useful supplement to the existing verification methods based on simulation.
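
    A toy flavour of the Groebner-basis checking idea, using SymPy: circuit constraints are encoded as polynomials, and a property holds if its polynomial reduces to zero modulo the generated ideal. The boolean AND-gate encoding and the property below are illustrative assumptions, not the paper's SERE construction.

```python
from sympy import symbols, groebner, reduced

a, b, c = symbols('a b c')

# Toy "circuit" constraints encoded as polynomials over boolean variables:
# x*(x-1) = 0 forces x in {0, 1}; c = a AND b is encoded as c - a*b = 0.
constraints = [a*(a - 1), b*(b - 1), c*(c - 1), c - a*b]

G = groebner(constraints, a, b, c, order='lex')

# Property to verify: "c implies a", i.e. c*(1 - a) should vanish on every
# assignment that satisfies the constraints (ideal membership check).
prop = c * (1 - a)
_, remainder = reduced(prop, G.exprs, a, b, c, order='lex')
print("property holds on all satisfying assignments:", remainder == 0)
```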

  7. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measureme...... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods....
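
    A minimal interval-arithmetic sketch of the set-membership idea: propagate an interval outer-approximation of the state of a linear system with bounded noise, and flag a fault when a measurement falls outside the predicted output interval. The system matrices and bounds are assumed values, and the zonotope-based machinery compared in the paper is not reproduced here.

```python
import numpy as np

# Interval state [x_lo, x_hi] propagated through x_{k+1} = A x_k + B u_k + w_k,
# measurement y_k = C x_k + v_k, with |w| <= w_max, |v| <= v_max element-wise.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
w_max, v_max = 0.01, 0.05

def interval_matvec(M, lo, hi):
    """Exact interval image of an interval box under a linear map."""
    Mp, Mn = np.maximum(M, 0.0), np.minimum(M, 0.0)
    return Mp @ lo + Mn @ hi, Mp @ hi + Mn @ lo

def predict(lo, hi, u):
    lo_n, hi_n = interval_matvec(A, lo, hi)
    return lo_n + B @ u - w_max, hi_n + B @ u + w_max

def consistent(lo, hi, y):
    """Fault check: is measurement y inside the predicted output interval?"""
    y_lo, y_hi = interval_matvec(C, lo, hi)
    return bool(np.all(y >= y_lo - v_max) and np.all(y <= y_hi + v_max))

# One prediction/check step with an assumed initial box and input.
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = predict(lo, hi, np.array([1.0]))
print("fault detected:", not consistent(lo, hi, np.array([0.4])))
```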

  8. The Effect of Instructional Method on Cardiopulmonary Resuscitation Skill Performance: A Comparison Between Instructor-Led Basic Life Support and Computer-Based Basic Life Support With Voice-Activated Manikin.

    Science.gov (United States)

    Wilson-Sands, Cathy; Brahn, Pamela; Graves, Kristal

    2015-01-01

    Validating participants' ability to correctly perform cardiopulmonary resuscitation (CPR) skills during basic life support courses can be a challenge for nursing professional development specialists. This study compares two methods of basic life support training, instructor-led and computer-based learning with voice-activated manikins, to identify if one method is more effective for performance of CPR skills. The findings suggest that a computer-based learning course with voice-activated manikins is a more effective method of training for improved CPR performance.

  9. A computational method based on the integration of heterogeneous networks for predicting disease-gene associations.

    Directory of Open Access Journals (Sweden)

    Xingli Guo

    Full Text Available The identification of disease-causing genes is a fundamental challenge in human health and of great importance in improving medical care, and provides a better understanding of gene functions. Recent computational approaches based on the interactions among human proteins and disease similarities have shown their power in tackling the issue. In this paper, a novel systematic and global method that integrates two heterogeneous networks for prioritizing candidate disease-causing genes is provided, based on the observation that genes causing the same or similar diseases tend to lie close to one another in a network of protein-protein interactions. In this method, the association score function between a query disease and a candidate gene is defined as the weighted sum of all the association scores between similar diseases and neighbouring genes. Moreover, the topological correlation of these two heterogeneous networks can be incorporated into the definition of the score function, and finally an iterative algorithm is designed for this issue. This method was tested with 10-fold cross-validation on all 1,126 diseases that have at least a known causal gene, and it ranked the correct gene as one of the top ten in 622 of all the 1,428 cases, significantly outperforming a state-of-the-art method called PRINCE. The results brought about by this method were applied to study three multi-factorial disorders: breast cancer, Alzheimer disease and diabetes mellitus type 2, and some suggestions of novel causal genes and candidate disease-causing subnetworks were provided for further investigation.
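
    The score-propagation idea, in which the association score of a disease-gene pair is iteratively updated as a weighted sum of scores over similar diseases and neighbouring genes, can be sketched on tiny random stand-in networks as below; the update rule and parameters are illustrative rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dis, n_gene = 4, 6

def row_norm(M):
    """Row-normalise a non-negative similarity/adjacency matrix."""
    return M / M.sum(axis=1, keepdims=True)

D = row_norm(rng.random((n_dis, n_dis)))      # disease-disease similarity (stand-in)
G = row_norm(rng.random((n_gene, n_gene)))    # gene-gene (PPI) network (stand-in)
Y = np.zeros((n_dis, n_gene))                 # known disease-gene associations
Y[0, 1] = Y[2, 4] = 1.0

# Iterative propagation: the score of (d, g) becomes a weighted sum of the
# scores of similar diseases and neighbouring genes, anchored to the knowns.
alpha, F = 0.8, Y.copy()
for _ in range(50):
    F = alpha * D @ F @ G.T + (1 - alpha) * Y

query_disease = 0
ranking = np.argsort(-F[query_disease])
print("candidate genes for disease 0, best first:", ranking)
```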

  10. Computer-based systems for nuclear power stations

    International Nuclear Information System (INIS)

    Humble, P.J.; Welbourne, D.; Belcher, G.

    1995-01-01

    The published intentions of vendors are for extensive touch-screen control and computer-based protection. The software features needed for acceptance in the UK are indicated. The defence in depth needed is analyzed. Current practice in aircraft flight control systems and the software methods available are discussed. Software partitioning and mathematically formal methods are appropriate for the structures and simple logic needed for nuclear power applications. The potential for claims of diversity and independence between two computer-based subsystems of a protection system is discussed. Features needed to meet a single failure criterion applied to software are discussed. Conclusions are given on the main factors which a design should allow for. The work reported was done for the Health and Safety Executive of the UK (HSE), and acknowledgement is given to them, to NNC Ltd and to GEC-Marconi Avionics Ltd for permission to publish. The opinions and recommendations expressed are those of the authors and do not necessarily reflect those of HSE. (Author)

  11. Computer holography: 3D digital art based on high-definition CGH

    International Nuclear Information System (INIS)

    Matsushima, K; Arima, Y; Nishi, H; Yamashita, H; Yoshizaki, Y; Ogawa, K; Nakahara, S

    2013-01-01

    Our recent works on high-definition computer-generated holograms (CGH) and the techniques used for their creation, such as the polygon-based method, the silhouette method and digitized holography, are summarized and reviewed in this paper. The concept of computer holography is proposed in terms of integrating and crystallizing these techniques into novel digital art.

  12. Projection computation based on pixel in simultaneous algebraic reconstruction technique

    International Nuclear Information System (INIS)

    Wang Xu; Chen Zhiqiang; Xiong Hua; Zhang Li

    2005-01-01

    SART is an important algorithm for image reconstruction, in which the projection computation takes over half of the reconstruction time. An efficient way to compute the projection coefficient matrix, together with memory optimization, is presented in this paper. Different from the usual method, projection lines are located based on each pixel, and the subsequent projection coefficient computation can make use of these results. The correlation of projection lines and pixels can be used to optimize the computation. (authors)
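
    A pixel-driven forward projection for one angle, in which each pixel centre is projected onto the detector and its contribution split between the two nearest bins, gives a rough feel for projection coefficients organised per pixel. The geometry and interpolation scheme below are simplifying assumptions, not the paper's algorithm or memory layout.

```python
import numpy as np

def pixel_driven_projection(image, angle, n_det):
    """Forward projection for one angle, computed pixel by pixel.
    Each pixel centre is projected onto the detector axis and its value is
    shared between the two nearest bins by linear interpolation."""
    n = image.shape[0]
    proj = np.zeros(n_det)
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    xs = np.arange(n) - (n - 1) / 2.0          # pixel centres, origin at image centre
    for iy, y in enumerate(xs):
        for ix, x in enumerate(xs):
            t = x * cos_a + y * sin_a          # detector coordinate of this pixel
            u = t + (n_det - 1) / 2.0          # shift to detector index space
            i0 = int(np.floor(u))
            w1 = u - i0                        # interpolation weights = coefficients
            if 0 <= i0 < n_det:
                proj[i0] += (1 - w1) * image[iy, ix]
            if 0 <= i0 + 1 < n_det:
                proj[i0 + 1] += w1 * image[iy, ix]
    return proj

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
proj = pixel_driven_projection(phantom, np.pi / 4, 96)
print("mass preserved:", np.isclose(proj.sum(), phantom.sum()))
```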

  13. Computational methods in earthquake engineering

    CERN Document Server

    Plevris, Vagelis; Lagaros, Nikos

    2017-01-01

    This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.

  14. An efficient method for computing the absorption of solar radiation by water vapor

    Science.gov (United States)

    Chou, M.-D.; Arking, A.

    1981-01-01

    Chou and Arking (1980) have developed a fast but accurate method for computing the IR cooling rate due to water vapor. Using a similar approach, the present investigation develops a method for computing the heating rates due to the absorption of solar radiation by water vapor in the wavelength range from 4 to 8.3 micrometers. The validity of the method is verified by comparison with line-by-line calculations. An outline is provided of an efficient method for transmittance and flux computations based upon actual line parameters. High speed is achieved by employing a one-parameter scaling approximation to convert an inhomogeneous path into an equivalent homogeneous path at suitably chosen reference conditions.
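
    The one-parameter scaling idea can be sketched as follows: the inhomogeneous water-vapour path is mapped to an equivalent amount at a reference pressure, and the transmittance is then evaluated for that homogeneous path. The profile, scaling exponent, reference pressure and absorption coefficient below are illustrative assumptions only, not the paper's fitted values.

```python
import numpy as np

# Model atmosphere: pressure levels [Pa] and specific humidity [kg/kg] (illustrative).
p = np.linspace(20_000.0, 101_325.0, 40)
q = 0.02 * (p / p[-1]) ** 3

g = 9.81          # gravity [m/s^2]
p_ref = 50_000.0  # reference pressure for the scaling [Pa]
m = 0.8           # scaling exponent (assumed)
k_abs = 0.02      # effective absorption coefficient [m^2/kg] (assumed)

# One-parameter scaling: replace the inhomogeneous path by an equivalent
# water-vapour amount at the reference pressure (trapezoidal integration).
integrand = q * (p / p_ref) ** m
w_scaled = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p)) / g   # [kg/m^2]

# Transmittance of the equivalent homogeneous path (simple exponential band model).
transmittance = np.exp(-k_abs * w_scaled)
print(f"scaled water vapour amount: {w_scaled:.2f} kg/m^2, "
      f"band transmittance: {transmittance:.3f}")
```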

  15. Computational method for free surface hydrodynamics

    International Nuclear Information System (INIS)

    Hirt, C.W.; Nichols, B.D.

    1980-01-01

    There are numerous flow phenomena in pressure vessel and piping systems that involve the dynamics of free fluid surfaces. For example, fluid interfaces must be considered during the draining or filling of tanks, in the formation and collapse of vapor bubbles, and in seismically shaken vessels that are partially filled. To aid in the analysis of these types of flow phenomena, a new technique has been developed for the computation of complicated free-surface motions. This technique is based on the concept of a local average volume of fluid (VOF) and is embodied in a computer program for two-dimensional, transient fluid flow called SOLA-VOF. The basic approach used in the VOF technique is briefly described, and compared to other free-surface methods. Specific capabilities of the SOLA-VOF program are illustrated by generic examples of bubble growth and collapse, flows of immiscible fluid mixtures, and the confinement of spilled liquids

  16. A numerical method to compute interior transmission eigenvalues

    International Nuclear Information System (INIS)

    Kleefeld, Andreas

    2013-01-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view existing methods are very expensive, and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated closely together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than those of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)
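
    The contour-integral ingredient can be illustrated with a generic Beyn-style solver for a small nonlinear eigenvalue problem T(z)v = 0: two contour moments reduce the problem to a small linear eigenproblem whose eigenvalues are those inside the contour. This is only a sketch of the general technique on a toy T(z), not the paper's boundary-integral discretization of the interior transmission problem.

```python
import numpy as np

def contour_eigenvalues(T, center, radius, m=2, n_quad=64):
    """Beyn-style contour-integral method: eigenvalues of T(z)v = 0 inside a
    circle, obtained from the zeroth and first contour moments of T(z)^{-1}."""
    n = T(center).shape[0]
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n, m))                    # random probing vectors
    A0 = np.zeros((n, m), dtype=complex)
    A1 = np.zeros((n, m), dtype=complex)
    for k in range(n_quad):                            # trapezoidal rule on the circle
        phi = 2 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * phi)
        dz = 1j * radius * np.exp(1j * phi) * (2 * np.pi / n_quad)
        X = np.linalg.solve(T(z), V)
        A0 += X * dz / (2j * np.pi)
        A1 += z * X * dz / (2j * np.pi)
    # Rank-revealing step and reduction to a small linear eigenproblem.
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    keep = s > 1e-10 * s[0]
    U, s, Wh = U[:, keep], s[keep], Wh[keep]
    B = U.conj().T @ A1 @ Wh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(B)

# Toy nonlinear eigenproblem with known eigenvalues at z = 1 and z = 2.
T = lambda z: np.array([[z - 1.0, 0.0], [0.0, z - 2.0]], dtype=complex)
print(np.sort_complex(contour_eigenvalues(T, center=1.5, radius=1.0)))
```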

  17. Computing discharge using the index velocity method

    Science.gov (United States)

    Levesque, Victor A.; Oberg, Kevin A.

    2012-01-01

    Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
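
    The two-rating structure of the index velocity method reduces to a very small computation once the ratings exist: mean velocity from the index-velocity rating times area from the stage-area rating gives discharge. The rating coefficients below are placeholders; in practice both ratings are fitted to the surveyed standard cross section and to periodic discharge measurements.

```python
import numpy as np

def stage_area(stage_m):
    """Stage-area rating from the surveyed standard cross section [m^2] (placeholder)."""
    return 15.0 * stage_m + 2.0 * stage_m ** 2

def index_velocity_rating(v_index):
    """Index-velocity rating: mean channel velocity from the ADVM index velocity [m/s]."""
    return 0.05 + 0.92 * v_index          # simple linear rating (placeholder)

# 15-minute unit values from the stage sensor and ADVM (illustrative numbers).
stage = np.array([1.8, 1.9, 2.1, 2.4])          # [m]
v_index = np.array([0.45, 0.50, 0.61, 0.72])    # [m/s]

discharge = index_velocity_rating(v_index) * stage_area(stage)   # Q = V * A [m^3/s]
print(np.round(discharge, 1))
```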

  18. Computer-based multi-channel analyzer based on internet

    International Nuclear Information System (INIS)

    Zhou Xinzhi; Ning Jiaoxian

    2001-01-01

    Combining Internet technology with the computer-based multi-channel analyzer, a new kind of browser-based multi-channel analyzer system is presented. Its framework and principle, as well as its implementation, are discussed.

  19. Optimizing event-related potential based brain-computer interfaces: a systematic evaluation of dynamic stopping methods

    Science.gov (United States)

    Schreuder, Martijn; Höhne, Johannes; Blankertz, Benjamin; Haufe, Stefan; Dickhaus, Thorsten; Tangermann, Michael

    2013-06-01

    Objective. In brain-computer interface (BCI) research, systems based on event-related potentials (ERP) are considered particularly successful and robust. This stems in part from the repeated stimulation which counteracts the low signal-to-noise ratio in electroencephalograms. Repeated stimulation leads to an optimization problem, as more repetitions also cost more time. The optimal number of repetitions thus represents a data-dependent trade-off between the stimulation time and the obtained accuracy. Several methods for dealing with this have been proposed as ‘early stopping’, ‘dynamic stopping’ or ‘adaptive stimulation’. Despite their high potential for BCI systems at the patient's bedside, those methods are typically ignored in current BCI literature. The goal of the current study is to assess the benefit of these methods. Approach. This study assesses for the first time the existing methods on a common benchmark of both artificially generated data and real BCI data of 83 BCI sessions, allowing for a direct comparison between these methods in the context of text entry. Main results. The results clearly show the beneficial effect on the online performance of a BCI system if the trade-off between the number of stimulus repetitions and accuracy is optimized. All assessed methods work very well for data of good subjects, and worse for data of low-performing subjects. Most methods, however, are robust in the sense that they do not reduce the performance below the baseline of a simple no stopping strategy. Significance. Since all methods can be realized as a module between the BCI and an application, minimal changes are needed to include these methods into existing BCI software architectures. Furthermore, the hyperparameters of most methods depend to a large extent on only a single variable, the discriminability of the training data. For the convenience of BCI practitioners, the present study proposes linear regression coefficients for directly estimating
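
    One simple family of dynamic stopping rules accumulates classifier evidence over stimulus repetitions and stops as soon as the leading candidate's margin over the runner-up exceeds a threshold. The sketch below simulates the classifier scores and uses assumed parameter values; it illustrates only one of the strategy types compared in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_symbols, max_repetitions, margin_threshold = 6, 10, 4.0
target = 2                                    # symbol the (simulated) user attends to

evidence = np.zeros(n_symbols)
for rep in range(1, max_repetitions + 1):
    # Simulated per-repetition classifier scores: the target gets a higher mean.
    scores = rng.normal(0.0, 1.0, n_symbols)
    scores[target] += 1.5
    evidence += scores

    # Dynamic stopping: stop as soon as the best symbol clearly beats the runner-up.
    top = np.sort(evidence)
    if top[-1] - top[-2] > margin_threshold:
        break

print(f"stopped after {rep} repetitions, selected symbol {int(evidence.argmax())}")
```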

  20. Soil Erosion Estimation Using Grid-based Computation

    Directory of Open Access Journals (Sweden)

    Josef Vlasák

    2005-06-01

    Full Text Available Soil erosion estimation is an important part of a land consolidation process. The universal soil loss equation (USLE) was presented by Wischmeier and Smith. USLE computation uses several factors, namely R – rainfall factor, K – soil erodibility, L – slope length factor, S – slope gradient factor, C – cropping management factor, and P – erosion control management factor. L and S factors are usually combined into one LS factor – the topographic factor. The single factors are determined from several sources, such as the DTM (Digital Terrain Model), the BPEJ soil type map, aerial and satellite images, etc. A conventional approach to the USLE computation, which is widely used in the Czech Republic, is based on the selection of characteristic profiles for which all above-mentioned factors must be determined. The result (G – annual soil loss) of such a computation is then applied to a whole area (slope) of interest. Another approach to the USLE computation uses grids as the main data structure. A prerequisite for a grid-based USLE computation is that each of the above-mentioned factors exists as a separate grid layer. The crucial step in this computation is the selection of an appropriate grid resolution (grid cell size). A large cell size can cause an undesirable precision degradation. Too small a cell size can noticeably slow down the whole computation. Provided that the cell size is derived from the source’s precision, the appropriate cell size for the Czech Republic varies from 30 m to 50 m. In some cases, especially when new surveying was done, grid computations can be performed with higher accuracy, i.e. with a smaller grid cell size. In such a case, we have proposed a new method using a two-step computation. The first step uses a bigger cell size and is designed to identify higher erosion spots. The second step then uses a smaller cell size but performs the computation only over the area identified in the previous step. This decomposition allows a
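
    In grid form the USLE is a cell-wise product of factor rasters, G = R·K·LS·C·P, which the following sketch computes on synthetic layers; the hotspot mask hints at the two-step coarse-to-fine refinement described above. All factor values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (200, 200)                      # e.g. 50 m cells

# Synthetic factor grids (in practice derived from the DTM, soil maps, imagery, ...).
R  = np.full(shape, 40.0)               # rainfall erosivity
K  = rng.uniform(0.2, 0.5, shape)       # soil erodibility
LS = rng.uniform(0.1, 4.0, shape)       # combined topographic factor
C  = rng.uniform(0.01, 0.3, shape)      # cropping management
P  = np.full(shape, 1.0)                # erosion control practice

G = R * K * LS * C * P                  # annual soil loss per cell (USLE)

# Two-step idea from the abstract: flag high-erosion cells on the coarse grid,
# then recompute only those areas with a finer cell size.
hotspots = G > 10.0
print(f"mean soil loss: {G.mean():.2f}, cells needing fine-grid recomputation: {hotspots.sum()}")
```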

  1. A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography

    International Nuclear Information System (INIS)

    Timp, Sheila; Karssemeijer, Nico

    2004-01-01

    Mass segmentation plays a crucial role in computer-aided diagnosis (CAD) systems for classification of suspicious regions as normal, benign, or malignant. In this article we present a robust and automated segmentation technique, based on dynamic programming, to segment mass lesions from surrounding tissue. In addition, we propose an efficient algorithm to guarantee that the resulting contours are closed. The segmentation method based on dynamic programming was quantitatively compared with two other automated segmentation methods (region growing and the discrete contour model) on a dataset of 1210 masses. For each mass an overlap criterion was calculated to determine the similarity with manual segmentation. The mean overlap percentage for dynamic programming was 0.69, for the other two methods 0.60 and 0.59, respectively. The difference in overlap percentage was statistically significant. To study the influence of the segmentation method on the performance of a CAD system two additional experiments were carried out. The first experiment studied the detection performance of the CAD system for the different segmentation methods. Free-response receiver operating characteristics analysis showed that the detection performance was nearly identical for the three segmentation methods. In the second experiment the ability of the classifier to discriminate between malignant and benign lesions was studied. For region-based evaluation the area Az under the receiver operating characteristics curve was 0.74 for dynamic programming, 0.72 for the discrete contour model, and 0.67 for region growing. The difference in Az values obtained by the dynamic programming method and region growing was statistically significant. The differences between other methods were not significant
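
    An overlap criterion between an automatic and a manual segmentation can be computed as the intersection-over-union of the two binary masks, as in the sketch below with synthetic disc-shaped masks; the paper's precise overlap definition may differ.

```python
import numpy as np

def overlap(mask_a, mask_b):
    """Intersection-over-union overlap between two binary segmentations."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

# Synthetic "manual" and "automatic" segmentations: two slightly offset discs.
yy, xx = np.mgrid[0:128, 0:128]
manual    = (yy - 64) ** 2 + (xx - 64) ** 2 <= 30 ** 2
automatic = (yy - 60) ** 2 + (xx - 68) ** 2 <= 28 ** 2

print(f"overlap with manual segmentation: {overlap(automatic, manual):.2f}")
```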

  2. Fast Reduction Method in Dominance-Based Information Systems

    Science.gov (United States)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real-world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation-based approach to better extract decision rules. However, the computational cost of the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes, and further compares it with the traditional method for increasing numbers of attributes and samples. Experiments on UCI data sets show that the proposed algorithm significantly improves the efficiency of the traditional method, especially for large-scale data.
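
    The baseline computation that such methods accelerate is the dominance class itself: the set of objects that are at least as good as a given object on every preference-ordered attribute. A small vectorised sketch (not the paper's fast algorithm) is given below on an invented decision table.

```python
import numpy as np

# Rows = objects, columns = preference-ordered condition attributes (larger is better).
X = np.array([
    [3, 2, 5],
    [2, 2, 4],
    [3, 3, 5],
    [1, 1, 2],
])

# dominating[i] = set of objects that dominate object i
# (at least as good on every attribute), via one broadcasted comparison.
ge = (X[:, None, :] >= X[None, :, :]).all(axis=2)   # ge[j, i]: object j dominates object i
dominating = [set(np.flatnonzero(ge[:, i])) for i in range(len(X))]

for i, d in enumerate(dominating):
    print(f"D+({i}) = {sorted(d)}")
```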

  3. Fibonacci’s Computation Methods vs Modern Algorithms

    Directory of Open Access Journals (Sweden)

    Ernesto Burattini

    2013-12-01

    Full Text Available In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci book, and we propose their translation into a modern language for computers (C++). Among others, we describe the method of “cross” multiplication, we evaluate its computational complexity in algorithmic terms, and we show the output of a C++ code that describes the development of the method applied to the product of two integers. In a similar way we show the operations performed on fractions introduced by Fibonacci. Thanks to the possibility of reproducing Fibonacci’s different computational procedures on a computer, it was possible to identify some calculation errors present in the different versions of the original text.
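
    The cross-multiplication procedure can be sketched by accumulating digit products by position and then propagating carries; the version below is in Python rather than the C++ used in the paper, and is a generic digit-wise multiplication, not a line-by-line transcription of Fibonacci's algorithm.

```python
def cross_multiply(a_digits, b_digits):
    """Multiply two numbers given as digit lists (least-significant first),
    accumulating the cross products digit by digit and then carrying."""
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):
        for j, b in enumerate(b_digits):
            result[i + j] += a * b
    # propagate carries
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return result

# 347 * 86 = 29842; digits are stored least-significant first.
print(cross_multiply([7, 4, 3], [6, 8]))   # -> [2, 4, 8, 9, 2]
```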

  4. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since the states of failure occurrence are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Due to the drawbacks of the Markovian model for steady-state reliability computations and of the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implication purposes, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks.
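
    As a toy illustration of the Markovian part (a two-state availability model with assumed failure and repair rates, not the paper's AGV network), the state probabilities follow from the matrix exponential of the generator:

      import numpy as np
      from scipy.linalg import expm

      lam, mu = 0.01, 0.1           # assumed failure and repair rates (per hour)
      Q = np.array([[-lam,  lam],   # generator matrix of the up/down Markov chain
                    [  mu,  -mu]])

      p0 = np.array([1.0, 0.0])     # start in the 'up' state
      for t in (10, 100, 1000):
          pt = p0 @ expm(Q * t)     # state probabilities p(t) = p(0) exp(Q t)
          print(f"t = {t:5d} h  availability = {pt[0]:.4f}")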

  5. Morphing-Based Shape Optimization in Computational Fluid Dynamics

    Science.gov (United States)

    Rousseau, Yannick; Men'Shov, Igor; Nakamura, Yoshiaki

    In this paper, a Morphing-based Shape Optimization (MbSO) technique is presented for solving Optimum-Shape Design (OSD) problems in Computational Fluid Dynamics (CFD). The proposed method couples Free-Form Deformation (FFD) and Evolutionary Computation, and, as its name suggests, relies on the morphing of shape and computational domain, rather than direct shape parameterization. Advantages of the FFD approach compared to traditional parameterization are first discussed. Then, examples of shape and grid deformations by FFD are presented. Finally, the MbSO approach is illustrated and applied through an example: the design of an airfoil for a future Mars exploration airplane.
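
    A minimal two-dimensional FFD sketch illustrates the morphing idea: points embedded in a control lattice are displaced through Bernstein polynomials when a control point moves. The lattice, the displacement and the sample points below are made up, and the coupling with evolutionary optimization is not shown:

      import numpy as np
      from math import comb

      def bernstein(n, i, t):
          return comb(n, i) * t**i * (1 - t)**(n - i)

      def ffd_2d(points, control, box_min, box_max):
          """Deform 2D points embedded in a lattice of control points (free-form deformation).
          control has shape (l+1, m+1, 2) and holds the displaced control-point positions."""
          l, m = control.shape[0] - 1, control.shape[1] - 1
          st = (points - box_min) / (box_max - box_min)   # local (s, t) coordinates in the box
          out = np.zeros_like(points, dtype=float)
          for i in range(l + 1):
              for j in range(m + 1):
                  w = bernstein(l, i, st[:, 0]) * bernstein(m, j, st[:, 1])
                  out += w[:, None] * control[i, j]
          return out

      box_min, box_max = np.array([0.0, 0.0]), np.array([1.0, 1.0])
      # undeformed lattice: 3x3 control points on a regular grid (identity mapping)
      ctrl = np.stack(np.meshgrid(np.linspace(0, 1, 3), np.linspace(0, 1, 3), indexing="ij"), axis=-1)
      ctrl[1, 2] += [0.0, 0.2]                       # lift one control point: a smooth bump
      pts = np.array([[0.5, 1.0], [0.25, 0.5]])      # e.g. points sampled on a shape boundary
      print(ffd_2d(pts, ctrl, box_min, box_max))     # the first point is lifted to y = 1.1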

  6. A control method for agricultural greenhouses heating based on computational fluid dynamics and energy prediction model

    International Nuclear Information System (INIS)

    Chen, Jiaoliao; Xu, Fang; Tan, Dapeng; Shen, Zheng; Zhang, Libin; Ai, Qinglin

    2015-01-01

    Highlights: • A novel control method for greenhouse heating with SWSHPS is proposed. • CFD is employed to predict the priorities of FCU loops for thermal performance. • EPM acts as an on-line tool to predict the total energy demand of the greenhouse. • The CFD–EPM-based method can save energy and improve control accuracy. • The energy savings potential is between 8.7% and 15.1%. - Abstract: As heating energy is one of the main production costs, many efforts have been made to reduce the energy consumption of agricultural greenhouses. Herein, a novel control method for greenhouse heating using computational fluid dynamics (CFD) and an energy prediction model (EPM) is proposed to save energy and improve system performance. Based on the low-Reynolds-number k–ε turbulence principle, a CFD model of the heated greenhouse is developed, applying the discrete ordinates model for radiative heat transfer and a porous-medium approach for the plants that accounts for their sensible and latent heat exchanges. The CFD simulations have been validated and used to analyze the greenhouse thermal performance and the priority of fan coil unit (FCU) loops under various heating conditions. According to heating efficiency and temperature uniformity, the priority of each FCU loop can be predicted to generate a priority database for the control system. The EPM is built on the thermal balance and is used to predict and optimize the energy demand of the greenhouse online. Combined with the priorities of the FCU loops from the offline CFD simulations, we have developed the CFD–EPM-based heating control system for a greenhouse with a surface water source heat pump system (SWSHPS). Compared with the conventional multi-zone independent control (CMIC) method, the energy savings potential is between 8.7% and 15.1%, and the control temperature deviation is decreased to between 0.1 °C and 0.6 °C in the investigated greenhouse. These results show the CFD–EPM-based method can improve system

  7. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, and that the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)

  8. New or improved computational methods and advanced reactor design

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theories, and on calculation methods once thought too difficult to put into practice, continues actively, finding new openings thanks to the remarkable improvement in computer performance. In Japan, many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and much effort is concentrated on further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor design are described to introduce recent trends in the nuclear design of reactors. 1) Advancement of the computational method, 2) Reactor core design and management of the light water reactor, and 3) Nuclear design of the fast reactor. (G.K.)

  9. Multiscale methods in turbulent combustion: strategies and computational challenges

    International Nuclear Information System (INIS)

    Echekki, Tarek

    2009-01-01

    A principal challenge in modeling turbulent combustion flows is associated with their complex, multiscale nature. Traditional paradigms in the modeling of these flows have attempted to address this nature through different strategies, including exploiting the separation of turbulence and combustion scales and a reduced description of the composition space. The resulting moment-based methods often yield reasonable predictions of flow and reactive scalars' statistics under certain conditions. However, these methods must constantly evolve to address combustion at different regimes, modes or with dominant chemistries. In recent years, alternative multiscale strategies have emerged, which although in part inspired by the traditional approaches, also draw upon basic tools from computational science, applied mathematics and the increasing availability of powerful computational resources. This review presents a general overview of different strategies adopted for multiscale solutions of turbulent combustion flows. Within these strategies, some specific models are discussed or outlined to illustrate their capabilities and underlying assumptions. These strategies may be classified under four different classes, including (i) closure models for atomistic processes, (ii) multigrid and multiresolution strategies, (iii) flame-embedding strategies and (iv) hybrid large-eddy simulation-low-dimensional strategies. A combination of these strategies and models can potentially represent a robust alternative strategy to moment-based models; but a significant challenge remains in the development of computational frameworks for these approaches as well as their underlying theories. (topical review)

  10. Soft Computing Methods for Disulfide Connectivity Prediction.

    Science.gov (United States)

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  11. A method for improved 4D-computed tomography data acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Kupper, Martin; Sprengel, Wolfgang [Technische Univ. Graz (Austria). Inst. fuer Materialphysik; Winkler, Peter; Zurl, Brigitte [Medizinische Univ. Graz (Austria). Comprehensive Cancer Center

    2017-05-01

    In four-dimensional time-dependent computed tomography (4D-CT) of the lungs, irregularities in breathing movements can cause errors in data acquisition, or even data loss. We present a method based on sending a synthetic, regular breathing signal to the CT instead of the real signal, which ensures 4D-CT data sets without data loss. Subsequent correction of the signal based on the real breathing curve enables an accurate reconstruction of the size and movement of the target volume. This makes it possible to plan radiation treatment based on the obtained data. The method was tested with dynamic thorax phantom measurements using synthetic and real breathing patterns.
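
    A sketch of the substitute-signal idea, with an assumed cos^4 waveform standing in for the regular surrogate trace (the record does not give the exact signal shape):

      import numpy as np

      # The CT receives a perfectly regular surrogate trace while the real, irregular
      # breathing curve is recorded in parallel for the later amplitude correction.
      def synthetic_breathing(duration_s, period_s=4.0, fs=25.0):
          t = np.arange(0.0, duration_s, 1.0 / fs)
          return t, np.cos(np.pi * t / period_s) ** 4   # one regular cycle every period_s seconds

      t, surrogate = synthetic_breathing(60.0)
      real_trace = surrogate + 0.05 * np.random.default_rng(0).normal(size=t.size)  # placeholder measurement
      scale = np.ptp(real_trace) / np.ptp(surrogate)    # amplitude ratio used in the correction step
      print(f"{t.size} samples, amplitude correction factor {scale:.2f}")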

  12. Internet messenger based smart virtual class learning using ubiquitous computing

    Science.gov (United States)

    Umam, K.; Mardi, S. N. S.; Hariadi, M.

    2017-06-01

    Internet messenger (IM) has become an important educational technology component in college education; IM makes it possible for students to engage in learning and collaborating in smart virtual class learning (SVCL) using ubiquitous computing. However, the model of IM-based smart virtual class learning using ubiquitous computing, and empirical evidence that would favor a broad application to improve engagement and behavior, are still limited. In addition, the expectation that IM-based SVCL using ubiquitous computing could improve engagement and behavior in smart classes cannot be confirmed, because the majority of the reviewed studies followed instructional paradigms. This article aims to present the model of IM-based SVCL using ubiquitous computing and to show learners’ experiences of improved engagement and behavior in learner-learner and learner-lecturer interactions. The method applied in this paper includes a design process and quantitative analysis techniques, with the purpose of identifying ubiquitous computing scenarios and capturing the impressions of learners and lecturers about the engagement and behavior aspects and their contribution to learning.

  13. Comparative assessment of computational methods for the determination of solvation free energies in alcohol-based molecules.

    Science.gov (United States)

    Martins, Silvia A; Sousa, Sergio F

    2013-06-05

    The determination of differences in solvation free energies between related drug molecules remains an important challenge in computational drug optimization, when fast and accurate calculations of differences in binding free energy are required. In this study, we have evaluated the performance of five commonly used polarized continuum model (PCM) methodologies in the determination of solvation free energies for 53 typical alcohol and alkane small molecules. In addition, the performance of these PCM methods, of a thermodynamic integration (TI) protocol and of the Poisson-Boltzmann (PB) and generalized Born (GB) methods, was tested in the determination of solvation free energy changes for 28 common alkane-alcohol transformations, by the substitution of a hydrogen atom with a hydroxyl substituent. The results show that the solvation model D (SMD) performs better among the PCM-based approaches in estimating solvation free energies for alcohol molecules, and solvation free energy changes for alkane-alcohol transformations, with an average error below 1 kcal/mol for both quantities. However, for the determination of solvation free energy changes on alkane-alcohol transformation, PB and TI yielded better results. TI was particularly accurate in the treatment of hydroxyl group additions to aromatic rings (0.53 kcal/mol), a common transformation when optimizing drug-binding in computer-aided drug design. Copyright © 2013 Wiley Periodicals, Inc.

  14. An analytical method for computing atomic contact areas in biomolecules.

    Science.gov (United States)

    Mach, Paul; Koehl, Patrice

    2013-01-15

    We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. Copyright © 2012 Wiley Periodicals, Inc.
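
    A simplified proxy for the contact definition can be obtained from an unweighted Delaunay triangulation in SciPy; the paper uses the weighted triangulation and its dual complex, so the sketch below only illustrates the edge-based notion of contact:

      import numpy as np
      from scipy.spatial import Delaunay

      def delaunay_contacts(centers):
          """Return pairs of point indices connected by an edge of the Delaunay triangulation."""
          tri = Delaunay(centers)
          edges = set()
          for simplex in tri.simplices:
              for i in range(len(simplex)):
                  for j in range(i + 1, len(simplex)):
                      edges.add(tuple(sorted((int(simplex[i]), int(simplex[j])))))
          return sorted(edges)

      rng = np.random.default_rng(0)
      atoms = rng.random((10, 3)) * 10.0        # ten hypothetical atom centers
      print(delaunay_contacts(atoms)[:5])       # first few 'contact' pairs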

  15. Computer- and Suggestion-based Cognitive Rehabilitation following Acquired Brain Injury

    DEFF Research Database (Denmark)

    Lindeløv, Jonas Kristoffer

    That is, training does not cause cognitive transfer and thus does not constitute “brain training” or “brain exercise” of any clinical relevance. A larger study found more promising results for a suggestion-based treatment in a hypnotic procedure. Patients improved to above population average in a matter...... of 4-8 hours, making this by far the most effective treatment compared to computer-based training, physical exercise, pharmaceuticals, meditation, and attention process training. The contrast between computer-based methods and the hypnotic suggestion treatment may reflect a more general discrepancy......

  16. Delamination detection using methods of computational intelligence

    Science.gov (United States)

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.

  17. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the first issue, giving an overview of the methods and an introduction to continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  18. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG) algorithm. The new method replaces the arctangent with a slope computation, and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs), by considerably reducing the area (thus increasing the level of parallelism), while maintaining classification accuracy very close to that of the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
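
    An arctangent-free bin assignment in the spirit of the abstract (an illustrative variant, not the authors' exact rule) replaces the per-pixel arctangent with multiply-and-compare tests against precomputed sines and cosines of the bin edges:

      import numpy as np

      N_BINS = 9                                        # unsigned gradients, bins over [0, 180) degrees
      theta = np.deg2rad(np.arange(N_BINS) * 180.0 / N_BINS)
      COS, SIN = np.cos(theta), np.sin(theta)           # precomputed once, reused for every pixel

      def orientation_bin(gx, gy):
          """Largest bin edge theta_k with gy*cos(theta_k) >= gx*sin(theta_k), i.e. angle >= theta_k."""
          if gy < 0 or (gy == 0 and gx < 0):            # fold the gradient into the upper half-plane
              gx, gy = -gx, -gy
          k = 0
          for b in range(1, N_BINS):
              if gy * COS[b] >= gx * SIN[b]:
                  k = b
          return k

      # quick check against the classical arctangent formulation
      for gx, gy in [(1.0, 0.1), (1.0, 1.0), (-1.0, 0.3), (0.0, 1.0)]:
          ref = int(np.degrees(np.arctan2(gy, gx)) % 180.0 // 20)
          assert orientation_bin(gx, gy) == ref, (gx, gy)
      print("bin assignment matches the arctan-based reference")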

  19. A personal computer-based nuclear magnetic resonance spectrometer

    Science.gov (United States)

    Job, Constantin; Pearson, Robert M.; Brown, Michael F.

    1994-11-01

    Nuclear magnetic resonance (NMR) spectroscopy using personal computer-based hardware has the potential of enabling the application of NMR methods to fields where conventional state of the art equipment is either impractical or too costly. With such a strategy for data acquisition and processing, disciplines including civil engineering, agriculture, geology, archaeology, and others have the possibility of utilizing magnetic resonance techniques within the laboratory or conducting applications directly in the field. Another aspect is the possibility of utilizing existing NMR magnets which may be in good condition but unused because of outdated or nonrepairable electronics. Moreover, NMR applications based on personal computer technology may open up teaching possibilities at the college or even secondary school level. The goal of developing such a personal computer (PC)-based NMR standard is facilitated by existing technologies including logic cell arrays, direct digital frequency synthesis, use of PC-based electrical engineering software tools to fabricate electronic circuits, and the use of permanent magnets based on neodymium-iron-boron alloy. Utilizing such an approach, we have been able to place essentially an entire NMR spectrometer console on two printed circuit boards, with the exception of the receiver and radio frequency power amplifier. Future upgrades to include the deuterium lock and the decoupler unit are readily envisioned. The continued development of such PC-based NMR spectrometers is expected to benefit from the fast growing, practical, and low cost personal computer market.

  20. Comparability of Computer-based and Paper-based Versions of Writing Section of PET in Iranian EFL Context

    Directory of Open Access Journals (Sweden)

    Mohammad Mohammadi

    2010-11-01

    Full Text Available Computer technology has provided language testing experts with the opportunity to develop computerized versions of traditional paper-based language tests. New generations of TOEFL and Cambridge IELTS, BULATS, KET and PET are good examples of computer-based language tests. Since this new method of testing introduces new factors into the realm of language assessment (e.g. modes of test delivery, familiarity with computers, etc.), the question may be whether the two modes of computer- and paper-based tests comparably measure the same construct and, hence, whether the scores obtained from the two modes can be used interchangeably. Accordingly, the present study aimed to investigate the comparability of the paper- and computer-based versions of a writing test. The data for this study were collected by administering the writing section of a Cambridge Preliminary English Test (PET) to eighty Iranian intermediate EFL learners through the two modes of computer- and paper-based testing. Besides, a computer familiarity questionnaire was used to divide participants into two groups with high and low computer familiarity. The results of the independent samples t-test revealed that there was no statistically significant difference between the learners' computer- and paper-based writing scores. The results of the paired samples t-test showed no statistically significant difference between high- and low-computer-familiar groups on computer-based writing. The researchers concluded that the two modes comparably measured the same construct.
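
    The mode comparison reduces to an independent-samples t-test on the two sets of writing scores; the sketch below uses made-up scores, not the study's data:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      paper_scores = rng.normal(loc=3.4, scale=0.6, size=40)      # hypothetical PET writing scores
      computer_scores = rng.normal(loc=3.5, scale=0.6, size=40)

      t_stat, p_value = stats.ttest_ind(computer_scores, paper_scores)
      print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # a large p-value would mirror the 'no significant difference' finding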

  1. Reconstruction method for fluorescent X-ray computed tomography by least-squares method using singular value decomposition

    Science.gov (United States)

    Yuasa, T.; Akiba, M.; Takeda, T.; Kazama, M.; Hoshino, A.; Watanabe, Y.; Hyodo, K.; Dilmanian, F. A.; Akatsuka, T.; Itai, Y.

    1997-02-01

    We describe a new attenuation correction method for fluorescent X-ray computed tomography (FXCT) applied to image nonradioactive contrast materials in vivo. The principle of FXCT imaging is that of first-generation computed tomography. Using monochromatized synchrotron radiation from the BLNE-5A bending-magnet beam line of the Tristan Accumulation Ring at KEK, Japan, we studied phantoms with the FXCT method, and we succeeded in delineating a 4-mm-diameter channel filled with a 500 μg I/ml iodine solution in a 20-mm-diameter acrylic cylindrical phantom. However, to detect smaller iodine concentrations, attenuation correction is needed. We present a correction method based on the equation representing the measurement process. The discretized equation system is solved by the least-squares method using the singular value decomposition. The attenuation correction method is applied to projections from Monte Carlo simulation and from experiment to confirm its effectiveness.
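
    The correction step amounts to solving the discretized measurement equation in the least-squares sense through the singular value decomposition. The sketch below uses a toy system (A and b are placeholders, not the FXCT projection matrix and sinogram):

      import numpy as np

      def lstsq_svd(A, b, rcond=1e-6):
          """Least-squares solution of A x = b via SVD, truncating small singular values."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          keep = s > rcond * s[0]                      # discard near-zero singular values
          return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

      rng = np.random.default_rng(1)
      A = rng.random((50, 20))                         # toy discretized measurement operator
      x_true = rng.random(20)
      b = A @ x_true + 0.01 * rng.normal(size=50)      # noisy 'measured' projections
      x = lstsq_svd(A, b)
      print(np.max(np.abs(x - x_true)))                # small reconstruction error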

  2. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, it is discussed that, while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  3. Comparison of different methods for shielding design in computed tomography

    International Nuclear Information System (INIS)

    Ciraj-Bjelac, O.; Arandjic, D.; Kosutic, D.

    2011-01-01

    The purpose of this work is to compare different methods for shielding calculation in computed tomography (CT). The BIR-IPEM (British Institute of Radiology and Institute of Physics and Engineering in Medicine) and NCRP (National Council on Radiation Protection) methods were used for shielding thickness calculation. Scattered dose levels and calculated barrier thicknesses were also compared with those obtained by scatter dose measurements in the vicinity of a dedicated CT unit. The minimal requirement for protective barriers based on the BIR-IPEM method ranged between 1.1 and 1.4 mm of lead, demonstrating an underestimation of up to 20% and an overestimation of up to 30% when compared with thicknesses based on measured dose levels. For the NCRP method, calculated thicknesses were 33% higher (27-42%). BIR-IPEM methodology-based results were comparable with values based on scattered dose measurements, while results obtained using the NCRP methodology demonstrated an overestimation of the minimally required barrier thickness. (authors)

  4. A novel forward projection-based metal artifact reduction method for flat-detector computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A [Institute of Medical Physics, University of Erlangen-Nuernberg, Henkestrasse 91, 91052 Erlangen (Germany)], E-mail: daniel.prell@imp.uni-erlangen.de

    2009-11-07

    Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original rawdata using a three-step correction procedure and working directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the rawdata domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft-tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction compared to measurements without metallic inserts reduced typically to below 20 HU, differences in image noise to below 5 HU) caused by the implants and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction, 46.2 s for the final corrected image compared to 114.1 s and 355.1 s on central processing units (CPUs))

  5. A novel forward projection-based metal artifact reduction method for flat-detector computed tomography

    International Nuclear Information System (INIS)

    Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A

    2009-01-01

    Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original rawdata using a three-step correction procedure and working directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the rawdata domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft-tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction compared to measurements without metallic inserts reduced typically to below 20 HU, differences in image noise to below 5 HU) caused by the implants and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction, 46.2 s for the final corrected image compared to 114.1 s and 355.1 s on central processing units (CPUs)).

  6. Development of computational methods of design by analysis for pressure vessel components

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan; Wu Honglin

    2005-01-01

    Stress classification is not only one of the key steps when a pressure vessel component is designed by analysis, but also a difficulty that has long puzzled engineers and designers. At present, for calculating and categorizing the stress field of pressure vessel components, several computational methods of design by analysis, such as stress equivalent linearization, the two-step approach, the primary structure method, the elastic compensation method and the GLOSS R-node method, have been developed and applied. Moreover, the ASME code also gives an inelastic method of design by analysis for limiting gross plastic deformation only. When pressure vessel components are designed by analysis, there are sometimes large differences between the results calculated with the different analysis methods mentioned above. This is the main reason limiting wide application of the design-by-analysis approach. Recently, a new approach, presented in the proposal for a European standard, CEN's unfired pressure vessel standard EN 13445-3, tries to avoid the problems of stress classification by directly analyzing the various failure mechanisms of the pressure vessel structure based on elastic-plastic theory. In this paper, some of the stress classification methods mentioned above are described briefly. The computational methods cited in the European pressure vessel standard, such as the deviatoric map, and the nonlinear analysis methods (plastic analysis and limit analysis), are also described concisely. Furthermore, the characteristics of these computational methods of design by analysis are summarized to aid in selecting the proper computational method when designing a pressure vessel component by analysis. (authors)

  7. A computed torque method based attitude control with optimal force distribution for articulated body mobile robots

    International Nuclear Information System (INIS)

    Fukushima, Edwardo F.; Hirose, Shigeo

    2000-01-01

    This paper introduces an attitude control scheme based on optimal force distribution using quadratic programming which minimizes joint energy consumption. This method shares similarities with force distribution for multifingered hands, multiple coordinated manipulators and legged walking robots. In particular, an attitude control scheme was introduced inside the force distribution problem and successfully implemented for control of the articulated body mobile robot KR-II. This is an actual mobile robot composed of cylindrical segments linked in series by prismatic joints and has a long snake-like appearance. These prismatic joints are force controlled so that each segment's vertical motion can automatically follow terrain irregularities. Attitude control is necessary because this system acts like a system of wheeled inverted pendulum carts connected in series, which is unstable by nature. The validity and effectiveness of the proposed method are verified by computer simulation and experiments with the robot KR-II. (author)
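
    The core of the force-distribution idea can be illustrated on a toy model (not the KR-II dynamics): with no inequality constraints, the joint forces minimizing the quadratic energy cost subject to the required net wrench are given by the pseudoinverse solution; joint limits would require a full quadratic-programming solver:

      import numpy as np

      A = np.array([[1.0, 1.0, 1.0, 1.0],          # total vertical force from four segments
                    [-1.5, -0.5, 0.5, 1.5]])       # pitching moment about the body center (assumed lever arms)
      w = np.array([40.0, 0.0])                    # required wrench: support the weight, zero net moment

      f = np.linalg.pinv(A) @ w                    # minimum-norm (minimum-energy) force distribution
      print(f, A @ f)                              # per-joint forces; A @ f reproduces the required wrench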

  8. Computing derivative-based global sensitivity measures using polynomial chaos expansions

    International Nuclear Information System (INIS)

    Sudret, B.; Mai, C.V.

    2015-01-01

    In the field of computer experiments, sensitivity analysis aims at quantifying the relative importance of each input parameter (or combinations thereof) of a computational model with respect to the model output uncertainty. Variance decomposition methods leading to the well-known Sobol' indices are recognized as accurate techniques, albeit at a rather high computational cost. The use of polynomial chaos expansions (PCE) to compute Sobol' indices has made it possible to alleviate this computational burden. However, when dealing with large dimensional input vectors, it is good practice to first use screening methods in order to discard unimportant variables. The derivative-based global sensitivity measures (DGSMs) have been developed recently in this respect. In this paper we show how polynomial chaos expansions may be used to compute DGSMs analytically as a mere post-processing step. This requires the analytical derivation of the derivatives of the orthonormal polynomials which enter PC expansions. Closed-form expressions for Hermite, Legendre and Laguerre polynomial expansions are given. The efficiency of the approach is illustrated on two well-known benchmark problems in sensitivity analysis. - Highlights: • Derivative-based global sensitivity measures (DGSM) have been developed for screening purposes. • Polynomial chaos expansions (PC) are used as a surrogate model of the original computational model. • From a PC expansion the DGSM can be computed analytically. • The paper provides the derivatives of Hermite, Legendre and Laguerre polynomials for this purpose.
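
    As an illustration consistent with, but not copied from, the paper: for orthonormal Hermite polynomials of standard Gaussian inputs, the derivative relation d/dx psi_n(x) = sqrt(n) psi_(n-1)(x) keeps the derivative expansion orthonormal, so the DGSM of input i collapses to the sum of alpha_i times the squared PCE coefficients:

      import numpy as np

      def dgsm_hermite(multi_indices, coeffs):
          """DGSM_i = E[(dM/dX_i)^2] = sum_alpha alpha_i * a_alpha^2 for a Hermite PCE
          of standard Gaussian inputs. multi_indices: (n_terms, n_inputs) degrees,
          coeffs: (n_terms,) PCE coefficients. Returns one DGSM per input."""
          alpha = np.asarray(multi_indices, dtype=float)
          a2 = np.asarray(coeffs) ** 2
          return (alpha * a2[:, None]).sum(axis=0)

      # toy 2-input expansion: M = 1 + 2*psi_1(x1) + 0.5*psi_2(x2) + 0.3*psi_1(x1)*psi_1(x2)
      alpha = [[0, 0], [1, 0], [0, 2], [1, 1]]
      a = [1.0, 2.0, 0.5, 0.3]
      print(dgsm_hermite(alpha, a))   # DGSM per input: [4.09, 0.59]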

  9. Problems and Issues in Using Computer- Based Support Tools to Enhance 'Soft' Systems Methodologies

    Directory of Open Access Journals (Sweden)

    Mark Stansfield

    2001-11-01

    Full Text Available This paper explores the issue of whether computer-based support tools can enhance the use of 'soft' systems methodologies as applied to real-world problem situations. Although work has been carried out by a number of researchers in applying computer-based technology to concepts and methodologies relating to 'soft' systems thinking, such as Soft Systems Methodology (SSM), such attempts appear to be still in their infancy and have not been applied widely to real-world problem situations. This paper will highlight some of the problems that may be encountered in attempting to develop computer-based support tools for 'soft' systems methodologies. Particular attention will be paid to an attempt by the author to develop a computer-based support tool for a particular 'soft' systems method of inquiry known as the Appreciative Inquiry Method, which is based upon Vickers' notion of 'appreciation' (Vickers, 1965) and Checkland's SSM (Checkland, 1981). The final part of the paper will explore some of the lessons learnt from developing and applying the computer-based support tool to a real world problem situation, as well as considering the feasibility of developing computer-based support tools for 'soft' systems methodologies. This paper will put forward the point that a mixture of manual and computer-based tools should be employed to allow a methodology to be used in an unconstrained manner, but the benefits provided by computer-based technology should be utilised in supporting and enhancing the more mundane and structured tasks.

  10. Computer-Aided Design Method of Warp-Knitted Jacquard Spacer Fabrics

    Directory of Open Access Journals (Sweden)

    Li Xinxin

    2016-06-01

    Full Text Available Based on a further study of knitting and jacquard principles, this paper presents a mathematical design model to make the computer-aided design of warp-knitted jacquard spacer fabrics more efficient. The mathematical model, using a matrix method, employs the three essential elements of chain notation, threading and jacquard designing. With this model, the process of designing warp-knitted jacquard spacer fabrics with CAD software is also introduced. In this study, sports shoes that have separate functional areas according to foot structure and movement characteristics are analysed. The results show the different patterns on jacquard spacer fabrics that are seamlessly stitched with jacquard techniques. The computer-aided design method of warp-knitted jacquard spacer fabrics is efficient and simple.

  11. Electromagnetic computation methods for lightning surge protection studies

    CERN Document Server

    Baba, Yoshihiro

    2016-01-01

    This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of Maxwell's equations required by the FDTD, and modeling of various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...

  12. Computational methods for data evaluation and assimilation

    CERN Document Server

    Cacuci, Dan Gabriel

    2013-01-01

    Data evaluation and data combination require the use of a wide range of probability theory concepts and tools, from deductive statistics mainly concerning frequencies and sample tallies to inductive inference for assimilating non-frequency data and a priori knowledge. Computational Methods for Data Evaluation and Assimilation presents interdisciplinary methods for integrating experimental and computational information. This self-contained book shows how the methods can be applied in many scientific and engineering areas. After presenting the fundamentals underlying the evaluation of experiment

  13. Computing homography with RANSAC algorithm: a novel method of registration

    Science.gov (United States)

    Li, Xiaowei; Liu, Yue; Wang, Yongtian; Yan, Dayuan

    2005-02-01

    An AR (Augmented Reality) system can integrate computer-generated objects with the image sequences of real world scenes in either an off-line or a real-time way. Registration, or camera pose estimation, is one of the key techniques that determine its performance. Registration methods can be classified as model-based and move-matching. The former approach can accomplish relatively accurate registration results, but it requires a precise model of the scene, which is hard to obtain. The latter approach carries out registration by computing the ego-motion of the camera. Because it does not require prior knowledge of the scene, its registration results sometimes turn out to be less accurate. When the model defined is as simple as a plane, a mixed method is introduced to take advantage of the virtues of the two methods mentioned above. Although unexpected objects often occlude this plane in an AR system, one can still try to detect corresponding points with a contract-expand method, though this will introduce erroneous correspondences. Computing the homography with the RANSAC algorithm is used to overcome such shortcomings. Using the homography robustly estimated by RANSAC, the camera projection matrix can be recovered and thus registration is accomplished even when the markers are lost in the scene.
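
    A minimal sketch of the robust-homography step using OpenCV's RANSAC estimator (an assumed tool, not named in the record): a known homography generates five exact correspondences and a sixth is corrupted to mimic a false match from the contract-expand detection:

      import numpy as np
      import cv2

      H_true = np.array([[1.05, 0.02, 20.0],
                         [0.01, 0.98, 12.0],
                         [1e-4, 0.0,  1.0]])
      src = np.float32([[10, 10], [200, 15], [205, 180], [12, 175], [100, 90], [50, 60]])
      dst = cv2.perspectiveTransform(src.reshape(-1, 1, 2), H_true).reshape(-1, 2)
      dst[-1] = [300.0, 300.0]                       # the erroneous correspondence

      H, mask = cv2.findHomography(src.reshape(-1, 1, 2), dst.reshape(-1, 1, 2),
                                   cv2.RANSAC, ransacReprojThreshold=3.0)
      print("recovered homography:\n", H / H[2, 2])  # close to H_true
      print("inlier mask:", mask.ravel())            # the corrupted pair is flagged 0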

  14. Security personnel training using a computer-based game

    International Nuclear Information System (INIS)

    Ralph, J.; Bickner, L.

    1987-01-01

    Security personnel training is an integral part of a total physical security program, and is essential in enabling security personnel to perform their function effectively. Several training tools are currently available for use by security supervisors, including: textbook study, classroom instruction, and live simulations. However, due to shortcomings inherent in each of these tools, a need exists for the development of low-cost alternative training methods. This paper discusses one such alternative: a computer-based, game-type security training system. This system would be based on a personal computer with high-resolution graphics. Key features of this system include: a high degree of realism; flexibility in use and maintenance; high trainee motivation; and low cost

  15. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work...... of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.

  16. A hybrid method for the computation of quasi-3D seismograms.

    Science.gov (United States)

    Masson, Yder; Romanowicz, Barbara

    2013-04-01

    The development of powerful computer clusters and efficient numerical computation methods, such as the Spectral Element Method (SEM), has made possible the computation of seismic wave propagation in a heterogeneous 3D earth. However, the cost of these computations is still problematic for global scale tomography that requires hundreds of such simulations. Part of the ongoing research effort is dedicated to the development of faster modeling methods based on the spectral element method. Capdeville et al. (2002) proposed to couple SEM simulations with normal modes calculation (C-SEM). Nissen-Meyer et al. (2007) used 2D SEM simulations to compute 3D seismograms in a 1D earth model. Thanks to these developments, and for the first time, Lekic et al. (2011) developed a 3D global model of the upper mantle using SEM simulations. At the local and continental scale, adjoint tomography, which uses many SEM simulations, can be implemented on current computers (Tape, Liu et al. 2009). Due to their smaller size, these models offer higher resolution. They provide us with images of the crust and the upper part of the mantle. In an attempt to teleport such local adjoint tomographic inversions into the deep earth, we are developing a hybrid method where SEM computation is limited to a region of interest within the earth. That region can have an arbitrary shape and size. Outside this region, the seismic wavefield is extrapolated to obtain synthetic data at the Earth's surface. A key feature of the method is the use of a time reversal mirror to inject the wavefield induced by distant seismic sources into the region of interest (Robertsson and Chapman 2000). We compute synthetic seismograms as follows: inside the region of interest, we use the regional spectral element software RegSEM to compute wave propagation in 3D. Outside this region, the wavefield is extrapolated to the surface by convolution with the Green's functions from the mirror to the seismic stations. For now, these

  17. An Accurate Method for Computing the Absorption of Solar Radiation by Water Vapor

    Science.gov (United States)

    Chou, M. D.

    1980-01-01

    The method is based upon molecular line parameters and makes use of a far wing scaling approximation and k distribution approach previously applied to the computation of the infrared cooling rate due to water vapor. Taking into account the wave number dependence of the incident solar flux, the solar heating rate is computed for the entire water vapor spectrum and for individual absorption bands. The accuracy of the method is tested against line by line calculations. The method introduces a maximum error of 0.06 C/day. The method has the additional advantage over previous methods in that it can be applied to any portion of the spectral region containing the water vapor bands. The integrated absorptances and line intensities computed from the molecular line parameters were compared with laboratory measurements. The comparison reveals that, among the three different sources, absorptance is the largest for the laboratory measurements.

  18. Development and evaluation of a computer-based medical work assessment programme

    Directory of Open Access Journals (Sweden)

    Spallek Michael

    2008-12-01

    Full Text Available Abstract Background There are several ways to conduct a job task analysis in medical work environments, including pencil-paper observations, interviews and questionnaires. However, these methods involve bias problems such as high inter-individual deviations and risks of misjudgement. Computer-based observation helps to reduce these problems. The aim of this paper is to give an overview of the development process of a computer-based job task analysis instrument for real-time observations to quantify the job tasks performed by physicians working in different medical settings. In addition, reliability and validity data of this instrument will be demonstrated. Methods This instrument was developed in consecutive steps. First, lists comprising tasks performed by physicians in different care settings were classified. Afterwards, the content validity of the task lists was verified. After establishing the final task categories, computer software was programmed and implemented in a mobile personal computer. Finally, inter-observer reliability was evaluated: two trained observers simultaneously recorded the tasks of the same physician. Results Content validity of the task lists was confirmed by observations and by experienced specialists of each medical area. The development process of the job task analysis instrument was completed successfully. Simultaneous records showed adequate inter-rater reliability. Conclusion Initial results of this analysis supported the validity and reliability of this method for assessing physicians' working routines as well as organizational context factors. Based on results using this method, possible improvements for health professionals' work organisation can be identified.

  19. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain motivated situation control system for complex technical system behavior. Conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on the nondistinct theories of physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on different parameters affecting learning, such as the reinforcement value and the time between the stimulus, the action and the reinforcement. The change of the contextual link between situational elements during use is formalized. Examples and results of computer instruction experiments with the robot device “LEGO MINDSTORMS NXT”, equipped with ultrasonic distance, touch and light sensors, are presented.

  20. A surface capturing method for the efficient computation of steady water waves

    NARCIS (Netherlands)

    Wackers, J.; Koren, B.

    2008-01-01

    A surface capturing method is developed for the computation of steady water–air flow with gravity. Fluxes are based on artificial compressibility and the method is solved with a multigrid technique and line Gauss–Seidel smoother. A test on a channel flow with a bottom bump shows the accuracy of the

  1. The use of gold nanoparticle aggregation for DNA computing and logic-based biomolecular detection

    International Nuclear Information System (INIS)

    Lee, In-Hee; Yang, Kyung-Ae; Zhang, Byoung-Tak; Lee, Ji-Hoon; Park, Ji-Yoon; Chai, Young Gyu; Lee, Jae-Hoon

    2008-01-01

    The use of DNA molecules as a physical computational material has attracted much interest, especially in the area of DNA computing. DNAs are also useful for logical control and analysis of biological systems if efficient visualization methods are available. Here we present a quick and simple visualization technique that displays the results of the DNA computing process based on a colorimetric change induced by gold nanoparticle aggregation, and we apply it to the logic-based detection of biomolecules. Our results demonstrate its effectiveness in both DNA-based logical computation and logic-based biomolecular detection

  2. The Study of Pallet Pooling Information Platform Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jia-bin Li

    2018-01-01

    Full Text Available Effective implementation of a pallet pooling system needs a strong information platform for support. Through an analysis of existing pallet pooling information platforms (PPIP), the paper points out that existing studies of PPIP are mainly based on traditional IT infrastructures and technologies, which have software, hardware, resource utilization, and process restrictions. Because the advantages of cloud computing technology, such as strong computing power, high flexibility, and low cost, meet the requirements of the PPIP well, this paper gives a PPIP architecture of two parts based on cloud computing: the user client and the cloud services. The cloud services include three layers: IaaS, PaaS, and SaaS. Finally, a method for deploying the PPIP based on cloud computing is proposed.

  3. Computational and instrumental methods in EPR

    CERN Document Server

    Bender, Christopher J

    2006-01-01

    Computational and Instrumental Methods in EPR Prof. Bender, Fordham University Prof. Lawrence J. Berliner, University of Denver Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...

  4. Knowledge-based computer security advisor

    International Nuclear Information System (INIS)

    Hunteman, W.J.; Squire, M.B.

    1991-01-01

    The rapid expansion of computer security information and technology has included little support to help the security officer identify the safeguards needed to comply with a policy and to secure a computing system. This paper reports that Los Alamos is developing a knowledge-based computer security system to provide expert knowledge to the security officer. This system includes a model for expressing the complex requirements in computer security policy statements. The model is part of an expert system that allows a security officer to describe a computer system and then determine compliance with the policy. The model contains a generic representation that contains network relationships among the policy concepts to support inferencing based on information represented in the generic policy description

  5. On the Computation of Comprehensive Boolean Gröbner Bases

    Science.gov (United States)

    Inoue, Shutaro

    We show that a comprehensive Boolean Gröbner basis of an ideal I in a Boolean polynomial ring B(Ā, X̄) with main variables X̄ and parameters Ā can be obtained by simply computing a usual Boolean Gröbner basis of I, regarding both X̄ and Ā as variables, with a certain block term order such that X̄ ≫ Ā. The result, together with the fact that a finite Boolean ring is isomorphic to a direct product of the Galois field GF(2), enables us to compute a comprehensive Boolean Gröbner basis by only computing corresponding Gröbner bases in a polynomial ring over GF(2). Our implementation in the computer algebra system Risa/Asir shows that our method is extremely efficient compared with existing computation algorithms for comprehensive Boolean Gröbner bases.
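
    A small sketch of the GF(2) reduction mentioned above (not the Risa/Asir implementation), assuming SymPy's groebner accepts the modulus option: the Boolean ring is emulated by adjoining the idempotency relations v^2 + v = 0, and the parameter is ordered after the main variables in a block-like lexicographic order:

      from sympy import symbols, groebner

      x, y, a = symbols('x y a')                       # main variables x, y; parameter a
      ideal = [x*y + a, x + y + 1]
      field_eqs = [v**2 + v for v in (x, y, a)]        # Boolean idempotency relations
      G = groebner(ideal + field_eqs, x, y, a, order='lex', modulus=2)
      print(list(G))                                   # e.g. a basis containing x + y + 1 and a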

  6. Analysing Test-Takers’ Views on a Computer-Based Speaking Test

    Directory of Open Access Journals (Sweden)

    Marian Amengual-Pizarro

    2017-11-01

    Full Text Available This study examines test-takers’ views on a computer-delivered speaking test in order to investigate the aspects they consider most relevant in technology-based oral assessment, and to explore the main advantages and disadvantages computer-based tests may offer as compared to face-to-face speaking tests. A small-scale open questionnaire was administered to 80 test-takers who took the APTIS speaking test at the Universidad de Alcalá in April 2016. Results reveal that examinees believe computer-based tests provide a valid measure of oral competence in English and are considered to be an adequate method for the assessment of speaking. Interestingly, the data suggest that personal characteristics of test-takers seem to play a key role in deciding upon the most suitable and reliable delivery mode.

  7. Computer-based teaching is as good as face to face lecture-based teaching of evidence based medicine: a randomised controlled trial

    Science.gov (United States)

    2007-01-01

    Background At postgraduate level evidence based medicine (EBM) is currently taught through tutor based lectures. Computer based sessions fit around doctors' workloads, and standardise the quality of educational provision. There have been no randomised controlled trials comparing computer based sessions with traditional lectures at postgraduate level within medicine. Methods This was a randomised controlled trial involving six postgraduate education centres in the West Midlands, U.K. Fifty five newly qualified foundation year one doctors (U.S. internship equivalent) were randomised to either computer based sessions or an equivalent lecture in EBM and systematic reviews. The change from pre- to post-intervention score was measured using a validated questionnaire assessing knowledge (primary outcome) and attitudes (secondary outcome). Results Both groups were similar at baseline. Participants' improvement in knowledge in the computer based group was equivalent to the lecture based group (gain in score: 2.1 [S.D = 2.0] versus 1.9 [S.D = 2.4]; ANCOVA p = 0.078). Attitudinal gains were similar in both groups. Conclusion On the basis of our findings we feel computer based teaching and learning is as effective as typical lecture based teaching sessions for educating postgraduates in EBM and systematic reviews. PMID:17659076

  8. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick-shield calculations. A short guideline for the future development of such a Monte Carlo code is given.

  9. Development of a computer code system for selecting off-site protective action in radiological accidents based on the multiobjective optimization method

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu; Oyama, Kazuo

    1989-09-01

    This report presents a new method to support the selection of off-site protective actions in nuclear reactor accidents, and provides a user's manual for a computer code system, PRASMA, developed using the method. The PRASMA code system gives several candidate configurations of protective action zones for evacuation, sheltering and no action, based on the multiobjective optimization method, which requires objective functions and decision variables. We have assigned the population risks of fatality, injury and cost as the objective functions, and the distances from a nuclear power plant characterizing the above three protective action zones as the decision variables. (author)

  10. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    International Nuclear Information System (INIS)

    Osadchy, A V; Obraztsova, E D; Volotovskiy, S G; Golovashkin, D L; Savin, V V

    2016-01-01

    In this paper we present the results of band structure computer simulation of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that allows simulations to be run on cluster computing. Application of this method significantly reduces the demands on computing resources compared to traditional approaches based on ab-initio techniques and provides adequate, comparable results. The use of cluster computing makes it possible to obtain information for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars. (paper)

  11. A substructure method to compute the 3D fluid-structure interaction during blowdown

    International Nuclear Information System (INIS)

    Guilbaud, D.; Axisa, F.; Gantenbein, F.; Gibert, R.J.

    1983-08-01

    The waves generated by a sudden rupture of a PWR primary pipe have an important mechanical effect on the internal structures of the vessel. This fluid-structure interaction has a strong 3D aspect. 3D finite element explicit methods can be applied. These methods take into account the non-linearities of the problem, but the calculation is heavy and expensive. We describe in this paper another type of method based on a substructure procedure: the vessel, internals and contained fluid are described axisymmetrically (AQUAMODE computer code), while the pipes and contained fluid are described one-dimensionally (TEDEL-FLUIDE computer code). These substructures are characterized by their natural modes. They are then connected to one another (connection of both structural and fluid nodes) by the TRISTANA computer code. This method allows the 3D fluid-structure effects to be computed correctly and cheaply. The treatment of certain non-linearities is difficult because of the modal characterization of the substructures; however, variations of contact conditions versus time can be introduced. We present here some validation tests and comparisons with experimental results from the literature.

  12. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers approaches to teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  13. A review of computer-based simulators for ultrasound training.

    Science.gov (United States)

    Blum, Tobias; Rieger, Andreas; Navab, Nassir; Friess, Helmut; Martignoni, Marc

    2013-04-01

    Computer-based simulators for ultrasound training are a topic of recent interest. During the last 15 years, many different systems and methods have been proposed. This article provides an overview and classification of systems in this domain and a discussion of their advantages. Systems are classified and discussed according to the image simulation method, user interactions and medical applications. Computer simulation of ultrasound has one key advantage over traditional training. It enables novel training concepts, for example, through advanced visualization, case databases, and automatically generated feedback. Qualitative evaluations have mainly shown positive learning effects. However, few quantitative evaluations have been performed and long-term effects have to be examined.

  14. Quantitative Assessment of Optical Coherence Tomography Imaging Performance with Phantom-Based Test Methods And Computational Modeling

    Science.gov (United States)

    Agrawal, Anant

    Optical coherence tomography (OCT) is a powerful medical imaging modality that uniquely produces high-resolution cross-sectional images of tissue using low energy light. Its clinical applications and technological capabilities have grown substantially since its invention about twenty years ago, but efforts have been limited to develop tools to assess performance of OCT devices with respect to the quality and content of acquired images. Such tools are important to ensure information derived from OCT signals and images is accurate and consistent, in order to support further technology development, promote standardization, and benefit public health. The research in this dissertation investigates new physical and computational models which can provide unique insights into specific performance characteristics of OCT devices. Physical models, known as phantoms, are fabricated and evaluated in the interest of establishing standardized test methods to measure several important quantities relevant to image quality. (1) Spatial resolution is measured with a nanoparticle-embedded phantom and model eye which together yield the point spread function under conditions where OCT is commonly used. (2) A multi-layered phantom is constructed to measure the contrast transfer function along the axis of light propagation, relevant for cross-sectional imaging capabilities. (3) Existing and new methods to determine device sensitivity are examined and compared, to better understand the detection limits of OCT. A novel computational model based on the finite-difference time-domain (FDTD) method, which simulates the physics of light behavior at the sub-microscopic level within complex, heterogeneous media, is developed to probe device and tissue characteristics influencing the information content of an OCT image. This model is first tested in simple geometric configurations to understand its accuracy and limitations, then a highly realistic representation of a biological cell, the retinal

  15. Registration-based Reconstruction of Four-dimensional Cone Beam Computed Tomography

    DEFF Research Database (Denmark)

    Christoffersen, Christian; Hansen, David Christoffer; Poulsen, Per Rugaard

    2013-01-01

    We present a new method for reconstruction of four-dimensional (4D) cone beam computed tomography from an undersampled set of X-ray projections. The novelty of the proposed method lies in utilizing optical flow based registration so that each temporal phase is reconstructed from the full...

  16. Evaluation of Computer Based Testing in lieu of Regular Examinations in Computer Literacy

    Science.gov (United States)

    Murayama, Koichi

    Because computer based testing (CBT) has many advantages compared with the conventional paper and pencil testing (PPT) examination method, CBT has begun to be used in various situations in Japan, such as in qualifying examinations and in the TOEFL. This paper describes the usefulness and the problems of CBT applied to a regular college examination. The regular computer literacy examinations for first-year students were held using CBT, and the results were analyzed. Responses to a questionnaire indicated that many students accepted CBT without discomfort and considered CBT a positive factor that improved their motivation to study. CBT also decreased the work of faculty in terms of marking tests and handling data.

  17. Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

    KAUST Repository

    Huang, Huang

    2017-07-16

    This thesis focuses on two topics, computational methods for large spatial datasets and functional data ranking. Both are tackling the challenges of big and high-dimensional data. The first topic is motivated by the prohibitive computational burden in fitting Gaussian process models to large and irregularly spaced spatial datasets. Various approximation methods have been introduced to reduce the computational cost, but many rely on unrealistic assumptions about the process and retaining statistical efficiency remains an issue. We propose a new scheme to approximate the maximum likelihood estimator and the kriging predictor when the exact computation is infeasible. The proposed method provides different types of hierarchical low-rank approximations that are both computationally and statistically efficient. We explore the improvement of the approximation theoretically and investigate the performance by simulations. For real applications, we analyze a soil moisture dataset with 2 million measurements with the hierarchical low-rank approximation and apply the proposed fast kriging to fill gaps for satellite images. The second topic is motivated by rank-based outlier detection methods for functional data. Compared to magnitude outliers, it is more challenging to detect shape outliers as they are often masked among samples. We develop a new notion of functional data depth by taking the integration of a univariate depth function. Having a form of the integrated depth, it shares many desirable features. Furthermore, the novel formation leads to a useful decomposition for detecting both shape and magnitude outliers. Our simulation studies show the proposed outlier detection procedure outperforms competitors in various outlier models. We also illustrate our methodology using real datasets of curves, images, and video frames. Finally, we introduce the functional data ranking technique to spatio-temporal statistics for visualizing and assessing covariance properties, such as

  18. Slepian modeling as a computational method in random vibration analysis of hysteretic structures

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob

    1999-01-01

    white noise. The computation time for obtaining estimates of relevant statistics at a given accuracy level is decreased by one or more orders of magnitude compared to the computation time needed for direct elasto-plastic displacement response simulations by vectorial Markov sequence techniques. ... Moreover, the Slepian method gives valuable physical insight into the details of the plastic displacement development over time. The paper gives a general, self-contained mathematical description of the Slepian-method-based plastic displacement analysis of Gaussian white noise excited EPOs. Experiences...

  19. Identity-Based Authentication for Cloud Computing

    Science.gov (United States)

    Li, Hongwei; Dai, Yuanshun; Tian, Ling; Yang, Haomiao

    Cloud computing is a recently developed technology for complex systems with massive-scale services shared among numerous users. Therefore, authentication of both users and services is a significant issue for the trust and security of cloud computing. The SSL Authentication Protocol (SAP), once applied in cloud computing, becomes so complicated that it places a heavy load on users in both computation and communication. This paper, based on the identity-based hierarchical model for cloud computing (IBHMCC) and its corresponding encryption and signature schemes, presents a new identity-based authentication protocol for cloud computing and services. Simulation testing shows that the authentication protocol is more lightweight and efficient than SAP, especially on the user side. This merit, together with its great scalability, makes the model well suited to the massive-scale cloud.

  20. A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data

    Science.gov (United States)

    Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.

    2011-01-01

    A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing the data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other possible re-slicing software solutions through its complete automation and advanced processing and analysis capabilities.
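
    As a rough Python illustration of the unwrap-and-reslice idea (not the NASA software itself), the sketch below resamples one axial CT slice from Cartesian (x, y) onto (theta, r) coordinates; the fixed radial range, the image size and the array names are assumptions made only for the example.

        # Illustrative sketch: each axial slice is resampled from (x, y) onto (theta, r)
        # coordinates so the cylinder wall can be viewed as flat 2-D sheets.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def unwrap_slice(ct_slice, center, n_theta=720, n_r=100, r_min=10.0, r_max=60.0):
            """Resample one axial CT slice onto a (theta, radius) grid."""
            cy, cx = center
            theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
            radii = np.linspace(r_min, r_max, n_r)
            tt, rr = np.meshgrid(theta, radii, indexing='ij')
            rows = cy + rr * np.sin(tt)
            cols = cx + rr * np.cos(tt)
            # Bilinear interpolation at the polar sample positions
            return map_coordinates(ct_slice, [rows, cols], order=1)

        # Example with synthetic data: a 512x512 slice, cylinder centered in the image
        slice_img = np.random.rand(512, 512).astype(np.float32)
        sheet = unwrap_slice(slice_img, center=(256, 256))
        print(sheet.shape)   # (n_theta, n_r): one "unwrapped" sheet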

  1. An efficient computer based wavelets approximation method to solve Fuzzy boundary value differential equations

    Science.gov (United States)

    Alam Khan, Najeeb; Razzaq, Oyoon Abdul

    2016-03-01

    In the present work a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of derivative is utilized to convert an FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second-order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.

  2. Computer-aided head film analysis: the University of California San Francisco method.

    Science.gov (United States)

    Baumrind, S; Miller, D M

    1980-07-01

    Computer technology is already assuming an important role in the management of orthodontic practices. The next 10 years are likely to see expansion in computer usage into the areas of diagnosis, treatment planning, and treatment-record keeping. In the areas of diagnosis and treatment planning, one of the first problems to be attacked will be the automation of head film analysis. The problems of constructing computer-aided systems for this purpose are considered herein in the light of the authors' 10 years of experience in developing a similar system for research purposes. The need for building in methods for automatic detection and correction of gross errors is discussed and the authors' method for doing so is presented. The construction of a rudimentary machine-readable data base for research and clinical purposes is described.

  3. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. An Improved Computing Method for 3D Mechanical Connectivity Rates Based on a Polyhedral Simulation Model of Discrete Fracture Network in Rock Masses

    Science.gov (United States)

    Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye

    2018-06-01

    Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectivity and validity of the improved method were verified indirectly.

  5. Comparison of Five Computational Methods for Computing Q Factors in Photonic Crystal Membrane Cavities

    DEFF Research Database (Denmark)

    Novitsky, Andrey; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn

    2017-01-01

    Five state-of-the-art computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Specia...

  6. Exploring the Environment/Energy Pareto Optimal Front of an Office Room Using Computational Fluid Dynamics-Based Interactive Optimization Method

    Directory of Open Access Journals (Sweden)

    Kangji Li

    2017-02-01

    Full Text Available This paper is concerned with the development of a high-resolution and control-friendly optimization framework for enclosed environments that helps improve thermal comfort, indoor air quality (IAQ) and the energy costs of the heating, ventilation and air conditioning (HVAC) system simultaneously. A computational fluid dynamics (CFD)-based optimization method which couples algorithms implemented in Matlab with CFD simulation is proposed. The key part of this method is a data-interactive mechanism which efficiently passes parameters between CFD simulations and optimization functions. A two-person office room is modeled for the numerical optimization. The multi-objective evolutionary algorithm, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), is realized to explore the environment/energy Pareto front of the enclosed space. Performance analysis demonstrates the effectiveness of the presented optimization method.

  7. A Computer-Based Simulation of an Acid-Base Titration

    Science.gov (United States)

    Boblick, John M.

    1971-01-01

    Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR)

  8. Methods in computed angiotomography of the brain

    International Nuclear Information System (INIS)

    Yamamoto, Yuji; Asari, Shoji; Sadamoto, Kazuhiko.

    1985-01-01

    The authors introduce methods in computed angiotomography of the brain. The setting of the scan planes and levels and the minimum dose bolus (MinDB) injection of contrast medium are described in detail. These methods are easily and safely employed with the use of already widespread CT scanners. Computed angiotomography is expected to find clinical application in many institutions because of its diagnostic value in screening for cerebrovascular lesions and in demonstrating the relationship between pathological lesions and cerebral vessels. (author)

  9. Learning-based computing techniques in geoid modeling for precise height transformation

    Science.gov (United States)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical levelling technique is too laborious. A geoid model can be accurately obtained from properly distributed benchmarks having both GNSS and levelling observations, using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study attempts an evaluation of learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural networks (WNNs) approach in geoid surface approximation. These algorithms were developed in parallel with advances in computer technologies and have recently been used for solving complex nonlinear problems in many applications. However, they are rather new in dealing with the precise modelling problem of the Earth's gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion, ANFIS and WNN revealed higher prediction accuracies compared to the ANN and MPRE methods. Besides their prediction capabilities, these methods were also compared and discussed from the practical point of view in the conclusions.
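
    A minimal Python sketch of the learning-based idea, assuming synthetic benchmark data rather than the Istanbul network: geoid undulations N = h(GNSS) - H(levelling) at benchmarks are fitted with a small neural network and validated at held-out points.

        # Sketch of a learning-based geoid surface approximation with synthetic benchmarks.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        xy = rng.uniform(0.0, 50_000.0, size=(200, 2))       # local easting/northing [m]
        # Synthetic geoid undulations N = h_GNSS - H_levelling at the benchmarks [m]
        N = 36.0 + 1e-5 * xy[:, 0] - 2e-5 * xy[:, 1] \
            + 0.3 * np.sin(xy[:, 0] / 8000.0) + 0.01 * rng.standard_normal(200)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(20, 20),
                                           max_iter=5000, random_state=0))
        model.fit(xy[:150], N[:150])                          # training benchmarks
        rmse = np.sqrt(np.mean((model.predict(xy[150:]) - N[150:]) ** 2))
        print(f"validation RMSE: {rmse * 100:.1f} cm")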

  10. Thai Language Sentence Similarity Computation Based on Syntactic Structure and Semantic Vector

    Science.gov (United States)

    Wang, Hongbin; Feng, Yinhan; Cheng, Liang

    2018-03-01

    Sentence similarity computation plays an increasingly important role in text mining, Web page retrieval, machine translation, speech recognition and question answering systems. Thai is a resource-scarce language; unlike Chinese, it lacks resources such as HowNet and CiLin, so research on Thai sentence similarity faces particular challenges. To address this problem, this paper proposes a novel method to compute the similarity of Thai sentences based on syntactic structure and semantic vectors. The method first uses Part-of-Speech (POS) dependencies to calculate the syntactic structure similarity of two sentences, and then uses word vectors to calculate their semantic similarity. Finally, the two measures are combined to obtain the overall similarity of two Thai sentences. The proposed method considers not only semantics but also the syntactic structure of the sentences. Experimental results show that the method is feasible for Thai sentence similarity computation.
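
    A minimal Python sketch of the combination step: a syntactic-structure score and a semantic word-vector score are merged through a weighted sum. The POS-bigram comparison and the random toy embeddings below are placeholders, not the paper's exact formulation.

        # Sketch: weighted combination of a syntactic score and a semantic score.
        import numpy as np

        def syntactic_similarity(pos_seq_a, pos_seq_b):
            """Crude structural score: overlap of POS bigrams (stand-in for POS dependencies)."""
            bigrams = lambda s: {tuple(s[i:i + 2]) for i in range(len(s) - 1)}
            a, b = bigrams(pos_seq_a), bigrams(pos_seq_b)
            return len(a & b) / len(a | b) if a | b else 0.0

        def semantic_similarity(vecs_a, vecs_b):
            """Cosine similarity of averaged word vectors."""
            va, vb = np.mean(vecs_a, axis=0), np.mean(vecs_b, axis=0)
            return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))

        def sentence_similarity(pos_a, pos_b, vecs_a, vecs_b, alpha=0.5):
            return alpha * syntactic_similarity(pos_a, pos_b) \
                 + (1.0 - alpha) * semantic_similarity(vecs_a, vecs_b)

        # Toy example: POS tags plus random 50-d "word vectors" for two sentences
        rng = np.random.default_rng(1)
        print(sentence_similarity(['N', 'V', 'N'], ['N', 'V', 'ADJ', 'N'],
                                  rng.normal(size=(3, 50)), rng.normal(size=(4, 50))))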

  11. A comparison of two analytical evaluation methods for educational computer games for young children

    NARCIS (Netherlands)

    Bekker, M.M.; Baauw, E.; Barendregt, W.

    2008-01-01

    In this paper we describe a comparison of two analytical methods for educational computer games for young children. The methods compared in the study are the Structured Expert Evaluation Method (SEEM) and the Combined Heuristic Evaluation (HE) (based on a combination of Nielsen’s HE and the

  12. Hilbert-Twin – A Novel Hilbert Transform-Based Method To Compute Envelope Of Free Decaying Oscillations Embedded In Noise, And The Logarithmic Decrement In High-Resolution Mechanical Spectroscopy HRMS

    Directory of Open Access Journals (Sweden)

    Magalas L.B.

    2015-06-01

    Full Text Available In this work, we present a novel Hilbert-twin method to compute an envelope and the logarithmic decrement, δ, from exponentially damped time-invariant harmonic strain signals embedded in noise. The results obtained from five computing methods: (1) the parametric OMI (Optimization in Multiple Intervals) method, two interpolated discrete Fourier transform-based (IpDFT) methods: (2) the Yoshida-Magalas (YM) method and (3) the classic Yoshida (Y) method, (4) the novel Hilbert-twin (H-twin) method based on the Hilbert transform, and (5) the conventional Hilbert transform (HT) method are analyzed and compared. The fundamental feature of the Hilbert-twin method is the efficient elimination of intrinsic asymmetrical oscillations of the envelope, a_HT(t), obtained from the discrete Hilbert transform of the analyzed signals. The excellent performance of the Hilbert-twin method in estimating the logarithmic decrement is comparable to that of the OMI and YM methods for low and high damping levels. The Hilbert-twin method proved to be robust and effective in computing the logarithmic decrement and the resonant frequency of exponentially damped free-decaying signals embedded in experimental noise. The Hilbert-twin method is also appropriate for detecting nonlinearities in mechanical loss measurements of metals and alloys.
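
    For orientation, the Python sketch below follows the conventional Hilbert-transform route (method (5) above): the envelope is taken from the analytic signal and the logarithmic decrement is estimated from a linear fit to the log envelope. It is not the Hilbert-twin algorithm itself, and the signal parameters are invented.

        # Conventional Hilbert-transform estimate of the logarithmic decrement.
        import numpy as np
        from scipy.signal import hilbert

        fs, f0, delta_true = 4000.0, 100.0, 0.01        # sampling rate, frequency, true delta
        t = np.arange(0, 2.0, 1.0 / fs)
        signal = np.exp(-delta_true * f0 * t) * np.cos(2 * np.pi * f0 * t) \
                 + 0.002 * np.random.default_rng(0).standard_normal(t.size)

        envelope = np.abs(hilbert(signal))              # magnitude of the analytic signal

        # Fit log(envelope) = log(A0) - delta * f0 * t on the central part of the record
        # (the ends are distorted by the Hilbert transform's edge effects).
        mask = (t > 0.1) & (t < 1.9)
        slope, _ = np.polyfit(t[mask], np.log(envelope[mask]), 1)
        print(f"estimated logarithmic decrement: {-slope / f0:.4f} (true {delta_true})")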

  13. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    Science.gov (United States)

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework named the "compute unified device architecture." A series of simulation experiments is carried out to test the accuracy and the accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9, with a GTX 580 graphics card using the improved method.

  14. Variational-moment method for computing magnetohydrodynamic equilibria

    International Nuclear Information System (INIS)

    Lao, L.L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method has since been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.

  15. Computer Aided Flowsheet Design using Group Contribution Methods

    DEFF Research Database (Denmark)

    Bommareddy, Susilpa; Eden, Mario R.; Gani, Rafiqul

    2011-01-01

    In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent...... information of each flowsheet to minimize the computational load and information storage. The design variables for the selected flowsheet(s) are identified through a reverse simulation approach and are used as initial estimates for rigorous simulation to verify the feasibility and performance of the design....

  16. Mathematical Methods and Algorithms of Mobile Parallel Computing on the Base of Multi-core Processors

    Directory of Open Access Journals (Sweden)

    Alexander B. Bakulev

    2012-11-01

    Full Text Available This article deals with mathematical models and algorithms that provide mobility of the parallel representation of sequential programs in a high-level language. It presents a formal model of operating-environment process management, based on the proposed model of parallel program representation, describing the computation process on multi-core processors.

  17. 26 CFR 1.167(b)-0 - Methods of computing depreciation.

    Science.gov (United States)

    2010-04-01

    26 CFR 1.167(b)-0: Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...

  18. Computational method and system for modeling, analyzing, and optimizing DNA amplification and synthesis

    Science.gov (United States)

    Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.

    2010-05-04

    A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.

  19. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    Science.gov (United States)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although the canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy compared to standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed the filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of our proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
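
    The Python sketch below shows the standard CCA baseline that MEMD-CCA builds on: multichannel EEG is correlated with sine/cosine reference signals at each candidate stimulus frequency and the frequency with the largest canonical correlation is selected. The MEMD preprocessing is omitted, and the toy EEG is synthetic.

        # Standard CCA reference-signal detector for SSVEP (baseline only, no MEMD).
        import numpy as np
        from sklearn.cross_decomposition import CCA

        def cca_score(eeg, freq, fs, n_harmonics=2):
            """Max canonical correlation between EEG (samples x channels) and references."""
            t = np.arange(eeg.shape[0]) / fs
            refs = np.column_stack([f(2 * np.pi * h * freq * t)
                                    for h in range(1, n_harmonics + 1)
                                    for f in (np.sin, np.cos)])
            cca = CCA(n_components=1)
            u, v = cca.fit_transform(eeg, refs)
            return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

        def recognize(eeg, candidate_freqs, fs=250.0):
            scores = {f: cca_score(eeg, f, fs) for f in candidate_freqs}
            return max(scores, key=scores.get), scores

        # Toy example: 2 s of 8-channel "EEG" containing a weak 10 Hz component
        fs = 250.0
        t = np.arange(0, 2.0, 1.0 / fs)
        rng = np.random.default_rng(0)
        eeg = 0.5 * np.sin(2 * np.pi * 10.0 * t)[:, None] + rng.standard_normal((t.size, 8))
        print(recognize(eeg, [8.0, 10.0, 12.0, 15.0], fs)[0])   # expected: 10.0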

  20. Improved look-up table method of computer-generated holograms.

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
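
    A toy Python illustration of the look-up-table idea (precompute off-line, reuse on-line) is given below. The simplified "distance factor" (a tabulated spherical-wave phase term per depth plane) and all parameters are assumptions made for the example, and nothing runs on a GPU in this sketch.

        # Toy CGH look-up-table sketch: tabulate a distance-dependent phase term off-line,
        # then accumulate each object point's fringe pattern on-line by table lookup.
        import numpy as np

        wavelength = 532e-9
        k = 2 * np.pi / wavelength
        pitch = 8e-6                       # hologram pixel pitch [m]
        H = W = 512                        # hologram resolution
        depths = np.array([0.10, 0.12])    # object depth planes [m]

        # Off-line: tabulate exp(i*k*sqrt(r^2 + z^2)) over a radial grid, per depth plane
        r_table = np.linspace(0.0, np.hypot(H, W) * pitch, 4096)
        lut = np.exp(1j * k * np.sqrt(r_table[None, :] ** 2 + depths[:, None] ** 2))

        # On-line: every object point only needs a table lookup per hologram pixel
        yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')

        def add_point(hologram, x0, y0, depth_index, amp=1.0):
            r = np.hypot((xx - x0) * pitch, (yy - y0) * pitch)
            idx = np.clip(np.searchsorted(r_table, r), 0, r_table.size - 1)
            hologram += amp * lut[depth_index, idx]

        hologram = np.zeros((H, W), dtype=complex)
        for x0, y0, d in [(128, 128, 0), (384, 256, 1)]:       # sample object points
            add_point(hologram, x0, y0, d)
        print(np.angle(hologram).shape)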

  1. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    Energy Technology Data Exchange (ETDEWEB)

    Throneburg, E. B.; Jones, J. M. [AREVA NP Inc., 7207 IBM Drive, Charlotte, NC 28262 (United States)

    2006-07-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  2. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    International Nuclear Information System (INIS)

    Throneburg, E. B.; Jones, J. M.

    2006-01-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  3. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergistic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of the scientific method in computing is becoming increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  4. SmartShadow models and methods for pervasive computing

    CERN Document Server

    Wu, Zhaohui

    2013-01-01

    SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea

  5. Hybrid method based on embedded coupled simulation of vortex particles in grid based solution

    Science.gov (United States)

    Kornev, Nikolai

    2017-09-01

    The paper presents a novel hybrid approach developed to improve the resolution of concentrated vortices in computational fluid mechanics. The method is based on a combination of a grid-based method and the grid-free computational vortex method (CVM). The large-scale flow structures are simulated on the grid, whereas the concentrated structures are modeled using CVM. Due to this combination the advantages of both methods are strengthened whereas the disadvantages are diminished. The procedure for separating small concentrated vortices from the large-scale ones is based on the LES filtering idea. The flow dynamics is governed by two coupled transport equations taking the two-way interaction between large and fine structures into account. The fine structures are mapped back to the grid if their size grows due to diffusion. Algorithmic aspects of the hybrid method are discussed. Advantages of the new approach are illustrated on some simple two-dimensional canonical flows containing concentrated vortices.

  6. GPU-accelerated Lattice Boltzmann method for anatomical extraction in patient-specific computational hemodynamics

    Science.gov (United States)

    Yu, H.; Wang, Z.; Zhang, C.; Chen, N.; Zhao, Y.; Sawchuk, A. P.; Dalsing, M. C.; Teague, S. D.; Cheng, Y.

    2014-11-01

    Existing research in patient-specific computational hemodynamics (PSCH) relies heavily on software for the anatomical extraction of blood arteries. Data reconstruction and mesh generation have to be done using existing commercial software due to the gap between medical image processing and CFD, which increases the computational burden and introduces inaccuracy during data transformation, thus limiting the medical applications of PSCH. We use the lattice Boltzmann method (LBM) to solve the level-set equation over an Eulerian distance field and implicitly and dynamically segment the artery surfaces from radiological CT/MRI imaging data. The segments feed seamlessly into the LBM-based CFD computation of PSCH, so explicit mesh construction and extra data management are avoided. The LBM is ideally suited for GPU (graphics processing unit)-based parallel computing. The parallel acceleration on the GPU achieves excellent performance in PSCH computation. An application study will be presented which segments an aortic artery from a chest CT dataset and models the PSCH of the segmented artery.

  7. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    Science.gov (United States)

    Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.

    2015-04-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improve image quality or reduce radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A lesion of fixed size and contrast was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
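
    A minimal channelized Hotelling observer sketch in Python is given below: images are projected onto a few channels, the Hotelling template is built from training samples, and performance is summarized with the AUC (Mann-Whitney estimate). The Gaussian channels and synthetic images are illustrative and do not reproduce the RS/RO channel sets of the study.

        # Minimal CHO sketch with Gaussian channels and a Mann-Whitney AUC estimate.
        import numpy as np

        def gaussian_channels(size, widths=(2.0, 4.0, 8.0, 16.0)):
            y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
            r2 = x * x + y * y
            U = np.stack([np.exp(-r2 / (2 * w * w)).ravel() for w in widths], axis=1)
            return U / np.linalg.norm(U, axis=0)              # size^2 x n_channels

        def cho_auc(signal_imgs, noise_imgs, U):
            vs = signal_imgs.reshape(len(signal_imgs), -1) @ U    # channel outputs
            vn = noise_imgs.reshape(len(noise_imgs), -1) @ U
            S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))               # pooled channel covariance
            w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))       # Hotelling template
            ts, tn = vs @ w, vn @ w
            # Mann-Whitney AUC: fraction of (signal, noise) pairs ranked correctly
            return np.mean(ts[:, None] > tn[None, :])

        rng = np.random.default_rng(0)
        size, n = 32, 200
        lesion = gaussian_channels(size, widths=(3.0,))[:, 0].reshape(size, size) * 2.0
        noise_imgs = rng.standard_normal((n, size, size))
        signal_imgs = rng.standard_normal((n, size, size)) + lesion
        print(f"AUC = {cho_auc(signal_imgs, noise_imgs, gaussian_channels(size)):.3f}")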

  8. Reshaping of computational system for dosimetry in neutron and photons radiotherapy based in stochastic methods - SISCODES

    International Nuclear Information System (INIS)

    Trindade, Bruno Machado

    2011-02-01

    This work presents the remodeling of the Computer System for Dosimetry of Neutrons and Photons in Radiotherapy Based on Stochastic Methods (SISCODES). The initial description and status, the alterations and expansions (proposed and concluded), and the latest system development status are shown. SISCODES is a system that allows the execution of 3D computational planning in radiation therapy, based on the MCNP5 nuclear particle transport code. SISCODES provides tools to build a patient's voxel model, to define a treatment plan, to simulate this plan, and to view the results of the simulation. SISCODES implements a database of tissues, sources and nuclear data and an interface to access them. The graphical SISCODES modules were rewritten or implemented using the C++ language and the GTKmm library. Studies of dose deviations were performed simulating a homogeneous water phantom as an analogue of the human body in radiotherapy planning and a heterogeneous voxel phantom, pointing out possible dose miscalculations. The Soft-RT and PROPLAN computer codes that interface with SISCODES are described. A set of voxel models created in SISCODES is presented with their respective sizes and resolutions. To demonstrate the use of SISCODES, examples of radiation therapy and dosimetry simulations for the prostate and heart are shown. Three protocols were simulated on the heart voxel model: an Sm-153 filled balloon and a P-32 stent, to prevent angioplasty restenosis; and Tl-201 myocardial perfusion, for imaging. Teletherapy with 6 MV and 15 MV beams was simulated for the prostate, as well as brachytherapy with I-125 seeds. The results of these simulations are shown as isodose curves and dose-volume histograms. SISCODES proves to be a useful tool for research into new radiation therapy treatments and, in the future, can also be useful in medical practice. At the end, future improvements are proposed. I hope this work can contribute to the development of more effective radiation therapy

  9. Cost-effectiveness analysis of computer-based assessment

    Directory of Open Access Journals (Sweden)

    Pauline Loewenberger

    2003-12-01

    Full Text Available The need for more cost-effective and pedagogically acceptable combinations of teaching and learning methods to sustain increasing student numbers means that the use of innovative methods, using technology, is accelerating. There is an expectation that economies of scale might provide greater cost-effectiveness whilst also enhancing student learning. The difficulties and complexities of these expectations are considered in this paper, which explores the challenges faced by those wishing to evaluate the cost-effectiveness of computer-based assessment (CBA). The paper outlines the outcomes of a survey which attempted to gather information about the costs and benefits of CBA.

  10. Computational prediction of drug-drug interactions based on drugs functional similarities.

    Science.gov (United States)

    Ferdousi, Reza; Safdari, Reza; Omidi, Yadollah

    2017-06-01

    Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for patient safety and the success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drug structures and/or functions. Here, we report on a computational method for DDI prediction based on the functional similarity of drugs. The model was built on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied to 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pair interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Based on our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for the identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs. Copyright © 2017. Published by Elsevier Inc.
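
    A minimal Python sketch of the CTET-vector idea: each drug becomes a binary vector over its carriers, transporters, enzymes and targets, and candidate pairs are ranked by an inner-product-based similarity. The feature names and drugs below are invented for illustration.

        # Binary CTET vectors and an inner-product-based (cosine) pairwise similarity.
        import numpy as np
        from itertools import combinations

        ctet_features = ['CYP3A4', 'CYP2D6', 'P-gp', 'OATP1B1', 'albumin', 'hERG']
        drugs = {
            'drugA': {'CYP3A4', 'P-gp', 'albumin'},
            'drugB': {'CYP3A4', 'P-gp', 'hERG'},
            'drugC': {'OATP1B1', 'albumin'},
        }

        def to_vector(elements):
            return np.array([1.0 if f in elements else 0.0 for f in ctet_features])

        def cosine(u, v):                   # one inner-product-based similarity measure
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        vectors = {name: to_vector(els) for name, els in drugs.items()}
        pairs = sorted(((cosine(vectors[a], vectors[b]), a, b)
                        for a, b in combinations(drugs, 2)), reverse=True)
        for score, a, b in pairs:
            print(f"{a} - {b}: similarity {score:.2f}")   # high scores flag potential DDIs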

  11. Computational methods for two-phase flow and particle transport

    CERN Document Server

    Lee, Wen Ho

    2013-01-01

    This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.

  12. A comparison of efficient methods for the computation of Born gluon amplitudes

    International Nuclear Information System (INIS)

    Dinsdale, Michael; Ternick, Marko; Weinzierl, Stefan

    2006-01-01

    We compare four different methods for the numerical computation of the pure gluonic amplitudes in the Born approximation. We are in particular interested in the efficiency of the various methods as the number n of the external particles increases. In addition we investigate the numerical accuracy in critical phase space regions. The methods considered are based on (i) Berends-Giele recurrence relations, (ii) scalar diagrams, (iii) MHV vertices and (iv) BCF recursion relations

  13. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
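
    The simulation idea can be sketched in Python as follows: random target events are generated over an observation period and the true cumulative duration is compared with the estimates produced by momentary time sampling (MTS), partial-interval recording (PIR) and whole-interval recording (WIR). The parameters are illustrative, not those of the published study.

        # Compare interval sampling estimates against the true event duration fraction.
        import numpy as np

        rng = np.random.default_rng(0)
        obs_period, interval, event_dur, n_events = 600.0, 10.0, 4.0, 30   # seconds

        starts = np.sort(rng.uniform(0.0, obs_period - event_dur, n_events))

        def occupied(t):
            return np.any((starts <= t) & (t < starts + event_dur))

        edges = np.arange(0.0, obs_period + interval, interval)
        mts = pir = wir = 0
        for lo, hi in zip(edges[:-1], edges[1:]):
            grid = np.linspace(lo, hi, 101)             # fine grid within the interval
            inside = np.array([occupied(t) for t in grid])
            mts += inside[-1]                           # MTS: look only at the interval's end
            pir += inside.any()                         # PIR: any occurrence in the interval
            wir += inside.all()                         # WIR: occurrence throughout the interval

        n_int = len(edges) - 1
        true_frac = np.mean([occupied(t) for t in np.linspace(0, obs_period, 10_000)])
        print(f"true {true_frac:.2%}  MTS {mts / n_int:.2%}  "
              f"PIR {pir / n_int:.2%}  WIR {wir / n_int:.2%}")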

  14. Computer-based Programs in Speech Therapy of Dyslalia and Dyslexia- Dysgraphia

    Directory of Open Access Journals (Sweden)

    Mirela Danubianu

    2010-04-01

    Full Text Available In recent years, researchers and therapists in speech therapy have been increasingly concerned with the elaboration and use of computer programs in speech disorders therapy. The main objective of this study was to evaluate the therapeutic effectiveness of computer-based programs for the Romanian language in speech therapy. In this study, we present experimental research assessing the effectiveness of computer programs in speech therapy for the speech disorders dyslalia, dyslexia and dysgraphia. Methodologically, the use of the computer in the therapeutic phases was carried out with the help of computer-based programs (Logomon, Dislex-Test etc.) that we elaborated and experimented with during several years of therapeutic activity. The sample used in our experiments was composed of 120 subjects; two groups of 60 children with speech disorders were selected, one for each speech disorder: 30 for the experimental ('computer-based') group and 30 for the control ('classical method') group. The study hypotheses verified whether the results obtained by the subjects within the experimental group improved significantly after using the computer-based program, compared to the subjects within the control group, who did not use this program but received classical therapy. The hypotheses were confirmed for the speech disorders included in this research; the conclusions of the study confirm the advantages of using computer-based programs within speech therapy for correcting these disorders, as well as the positive influence these programs have on the development of children's personality.

  15. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, can itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independence of the convergence testing method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an

  16. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2018-05-01

    Full Text Available Computing speed is a significant issue for large-scale flood simulations used in real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computation necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal computer, a Graphics Processing Unit (GPU)-based, high-performance computing method using the OpenACC application was adopted to parallelize the shallow water model. An unstructured data management method was presented to control the data transportation between the GPU and CPU (Central Processing Unit) with minimum overhead, and then both computation and data were offloaded from the CPU to the GPU, which exploited the computational capability of the GPU as much as possible. The parallel model was validated using various benchmarks and real-world case studies. The results demonstrate that speed-ups of up to one order of magnitude can be achieved in comparison with the serial model. The proposed parallel model provides a fast and reliable tool with which to quickly assess flood hazards in large-scale areas and thus has strong application prospects for dynamic inundation risk identification and disaster assessment.

  17. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    Science.gov (United States)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time. Therefore, the performance is becoming constrained by material resources usage and an overall reduction in the computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhood of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. For that, this work exploits the GPU to accelerate the computation using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method can achieve a speedup factor of 5 compared to the sequential CPU implementation.

  18. Computer-based and web-based radiation safety training

    Energy Technology Data Exchange (ETDEWEB)

    Owen, C., LLNL

    1998-03-01

    The traditional approach to delivering radiation safety training has been to provide a stand-up lecture on the topic, with the possible aid of video, and to repeat the same material periodically. New approaches to meeting training requirements are needed to address the advent of flexible work hours and telecommuting, and to better accommodate individuals learning at their own pace. Computer-based and web-based radiation safety training can provide this alternative. Computer-based and web-based training is an interactive form of learning that the student controls, resulting in enhanced and focused learning at a time most often chosen by the student.

  19. Computational methods for structural load and resistance modeling

    Science.gov (United States)

    Thacker, B. H.; Millwater, H. R.; Harren, S. V.

    1991-01-01

    An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV+) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given, as well as several illustrative examples verified by Monte Carlo analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a materially nonlinear structure considering material damage as a function of several primitive random variables. The results clearly show the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
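
    The record verifies its load/resistance reliability computations against Monte Carlo analysis. As an illustration only, the following sketch estimates a failure probability for a simple limit state g = R - L by plain Monte Carlo sampling; the distributions, parameters and sample size are assumptions for the example, not values from the paper, and the AMV+ algorithm itself is not shown.

```python
# Sketch: plain Monte Carlo check of a load/resistance limit state g = R - L,
# the kind of verification the record compares AMV+ against (distributions
# and parameters below are illustrative assumptions, not from the paper).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
resistance = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)   # capacity
load = rng.normal(loc=200.0, scale=30.0, size=n)                     # demand
g = resistance - load                      # limit-state function
p_failure = np.mean(g < 0.0)
print(f"estimated failure probability: {p_failure:.2e}")
```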

  20. Benchmarking gate-based quantum computers

    Science.gov (United States)

    Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans

    2017-11-01

    With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
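
    The record's key observation is that circuits performing identity operations are sensitive to gate errors. The sketch below illustrates that point for a single qubit with a systematic over-rotation error; it is a toy numpy simulation under assumed error parameters, not the benchmarking protocol run on the IBM Quantum Experience.

```python
# Sketch: why identity circuits make sensitive benchmarks (assumed 1-qubit,
# over-rotation error model; not the IBM Quantum Experience protocol itself).
import numpy as np

def rx(theta):
    """Single-qubit rotation about X by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def identity_circuit_fidelity(n_pairs, over_rotation):
    """Apply n_pairs of (X, X) gates -- ideally the identity -- with a small
    systematic over-rotation per gate, and return the survival probability
    of the initial |0> state."""
    state = np.array([1.0, 0.0], dtype=complex)
    gate = rx(np.pi + over_rotation)          # imperfect X gate
    for _ in range(2 * n_pairs):
        state = gate @ state
    return abs(state[0]) ** 2                 # probability of measuring |0>

for depth in (1, 5, 20, 50):
    print(depth, identity_circuit_fidelity(depth, over_rotation=0.02))
```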

  1. Computer methods in physics 250 problems with guided solutions

    CERN Document Server

    Landau, Rubin H

    2018-01-01

    Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It is also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and an indication of the computational and physics difficulty level of each problem.

  2. THE DESIGNING OF ELECTRONIC TEACHING-METHODS COMPLEX «GRAPHICS» FOR REALIZATION OF COMPUTER-BASED LEARNING OF ENGINEERING-GRAPHIC DISCIPLINES

    Directory of Open Access Journals (Sweden)

    Іван Нищак

    2015-12-01

    Full Text Available The article presents the theoretical foundations of designing the author's electronic educational-methodical complex (EEMC) «Graphics», intended to implement the engineering-graphic preparation of future teachers of technology in terms of computer-based learning. The process of designing the electronic educational-methodical complex «Graphics» includes the following successive stages: 1) identification of didactic goals and objectives; 2) designing the structure of the EEMC; 3) selection and systematization of the educational material; 4) program-technical implementation of the EEMC; 5) interface design; 6) expert assessment of the quality of the EEMC; 7) testing of the EEMC; 8) adjusting the software; 9) development of guidelines and instructions for the use of the EEMC.

  3. Filter-based reconstruction methods for tomography

    NARCIS (Netherlands)

    Pelt, D.M.

    2016-01-01

    In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally

  4. Rehabilitation of patients with motor disabilities using computer vision based techniques

    Directory of Open Access Journals (Sweden)

    Alejandro Reyes-Amaro

    2012-05-01

    Full Text Available In this paper we present details about the implementation of computer vision based applications for the rehabilitation of patients with motor disabilities. The applications are conceived as serious games, where the computer-patient interaction during play contributes to the development of different motor skills. The use of computer vision methods allows automatic guidance of the patient's movements, making constant specialized supervision unnecessary. The hardware requirements are limited to low-cost devices such as ordinary webcams and netbooks.

  5. Computer vision-based method for classification of wheat grains using artificial neural network.

    Science.gov (United States)

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum wheat is presented. The images of 100 bread and 100 durum wheat grains are taken with a high-resolution camera and subjected to pre-processing. The main visual features, comprising four dimension, three colour and five texture features, are acquired using image-processing techniques (IPTs). A total of 21 visual features are derived from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are used as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. The seven input parameters that are most effective on the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN models are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.
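
    As a rough illustration of the classification stage described above, the sketch below trains a multilayer-perceptron classifier on feature vectors and evaluates it on a 180/20 split similar to the one in the record. The feature values are synthetic stand-ins, and the network size and training settings are assumptions rather than the paper's configuration.

```python
# Sketch: an MLP classifier over precomputed visual features, in the spirit of
# the record (synthetic feature vectors stand in for the 200 grain images;
# layer size and training settings are assumptions, not the paper's values).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n_grains, n_features = 200, 12            # 12 main visual features per grain
X = rng.normal(size=(n_grains, n_features))
y = np.repeat([0, 1], 100)                # 0 = bread wheat, 1 = durum wheat
X[y == 1] += 0.8                          # give the two classes some separation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, stratify=y,
                                           random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```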

  6. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  7. An Emotional Agent Model Based on Granular Computing

    Directory of Open Access Journals (Sweden)

    Jun Hu

    2012-01-01

    Full Text Available Affective computing is highly significant for achieving intelligent information processing and harmonious communication between human beings and computers. A new model for an emotional agent is proposed in this paper to give agents the ability to handle emotions, based on granular computing theory and the traditional BDI agent model. Firstly, a new emotion knowledge base based on granular computing for emotion expression is presented in the model. Secondly, a new emotional reasoning algorithm based on granular computing is proposed. Thirdly, a new emotional agent model based on granular computing is presented. Finally, based on the model, an emotional agent for patient assistance in hospital is realized; experimental results show that it handles simple emotions efficiently.

  8. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as

  9. A computationally efficient approach for template matching-based ...

    Indian Academy of Sciences (India)

    In this paper, a new computationally efficient image registration method is presented; the proposed method requires less computational time as compared to traditional methods. Zitová B and Flusser J 2003 Image registration methods: A survey.

  10. Protein consensus-based surface engineering (ProCoS): a computer-assisted method for directed protein evolution.

    Science.gov (United States)

    Shivange, Amol V; Hoeffken, Hans Wolfgang; Haefner, Stefan; Schwaneberg, Ulrich

    2016-12-01

    Protein consensus-based surface engineering (ProCoS) is a simple and efficient method for directed protein evolution combining computational analysis and molecular biology tools to engineer protein surfaces. ProCoS is based on the hypothesis that conserved residues originated from a common ancestor and that these residues are crucial for the function of a protein, whereas highly variable regions (situated on the surface of a protein) can be targeted for surface engineering to maximize performance. ProCoS comprises four main steps: (i) identification of conserved and highly variable regions; (ii) protein sequence design by substituting residues in the highly variable regions, and gene synthesis; (iii) in vitro DNA recombination of synthetic genes; and (iv) screening for active variants. ProCoS is a simple method for surface mutagenesis in which multiple sequence alignment is used for selection of surface residues based on a structural model. To demonstrate the technique's utility for directed evolution, the surface of a phytase enzyme from Yersinia mollaretii (Ymphytase) was subjected to ProCoS. Screening just 1050 clones from ProCoS engineering-guided mutant libraries yielded an enzyme with 34 amino acid substitutions. The surface-engineered Ymphytase exhibited 3.8-fold higher pH stability (at pH 2.8 for 3 h) and retained 40% of the enzyme's specific activity (400 U/mg) compared with the wild-type Ymphytase. The pH stability might be attributed to a significantly increased (20 percentage points; from 9% to 29%) number of negatively charged amino acids on the surface of the engineered phytase.
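
    Step (i) of ProCoS distinguishes conserved from highly variable alignment positions. A minimal sketch of that idea is given below, assuming a toy alignment and an arbitrary 0.8 conservation threshold; the actual ProCoS implementation and its structural-model filtering are not reproduced here.

```python
# Sketch of ProCoS step (i): score each alignment column by the frequency of
# its most common residue, then split columns into "conserved" and "variable"
# sets (toy alignment and 0.8 threshold are assumptions for illustration).
from collections import Counter

alignment = [
    "MKTAYIAKQR",
    "MKTAHIAKQR",
    "MKSAYIGKQR",
    "MKTAYLAKQR",
]

def column_conservation(seqs):
    """Fraction of sequences sharing the most common residue, per column."""
    scores = []
    for column in zip(*seqs):
        counts = Counter(column)
        scores.append(counts.most_common(1)[0][1] / len(column))
    return scores

scores = column_conservation(alignment)
conserved = [i for i, s in enumerate(scores) if s >= 0.8]
variable = [i for i, s in enumerate(scores) if s < 0.8]
print("conserved columns:", conserved)
print("variable columns (surface-engineering candidates):", variable)
```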

  11. A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI

    International Nuclear Information System (INIS)

    Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G; Odille, F; Atkinson, D

    2011-01-01

    Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved. (note)
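
    For orientation, the sketch below shows the textbook orthogonal matching pursuit loop that the proposed reconstruction builds on; it is the standard greedy algorithm on synthetic data, not the masked, y-f-space-aware variant described in the record.

```python
# Sketch: a generic orthogonal matching pursuit (OMP) recovery loop with numpy.
# Problem sizes and sparsity level are illustrative assumptions.
import numpy as np

def omp(A, y, n_nonzero):
    """Recover a sparse x with y ~ A @ x by greedy atom selection."""
    residual = y.copy()
    support = []
    for _ in range(n_nonzero):
        correlations = np.abs(A.T @ residual)
        if support:
            correlations[support] = 0.0       # never pick the same atom twice
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 256)) / np.sqrt(64)  # measurement matrix
x_true = np.zeros(256)
x_true[rng.choice(256, size=5, replace=False)] = rng.normal(size=5)
y = A @ x_true
x_hat = omp(A, y, n_nonzero=5)
print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```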

  12. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  13. Computational and experimental methods for enclosed natural convection

    International Nuclear Information System (INIS)

    Larson, D.W.; Gartling, D.K.; Schimmel, W.P. Jr.

    1977-10-01

    Two computational procedures and one optical experimental procedure for studying enclosed natural convection are described. The finite-difference and finite-element numerical methods are developed and several sample problems are solved. Results obtained from the two computational approaches are compared. A temperature-visualization scheme using laser holographic interferometry is described, and results from this experimental procedure are compared with results from both numerical methods

  14. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
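
    The DUA idea of propagating parameter distributions through derivative information can be illustrated with first-order variance propagation. The sketch below uses finite differences on a toy three-parameter model; the model, nominal values and standard deviations are assumptions for illustration, and the GRESS/ADGEN automated-derivative machinery is not reproduced.

```python
# Sketch: first-order, derivative-based uncertainty propagation in the spirit
# of DUA -- the output variance is approximated from dy/dx_i and the parameter
# variances. The model below is a toy flow-like function, not the report's
# borehole model.
import numpy as np

def model(x):
    transmissivity, head_difference, resistance = x
    return transmissivity * head_difference / resistance

x0 = np.array([80.0, 1000.0, 30.0])          # nominal parameter values (assumed)
sigma = np.array([8.0, 50.0, 3.0])           # parameter standard deviations (assumed)

# Central finite differences stand in for automated derivatives.
grad = np.zeros_like(x0)
for i in range(len(x0)):
    h = 1e-6 * max(abs(x0[i]), 1.0)
    xp, xm = x0.copy(), x0.copy()
    xp[i] += h
    xm[i] -= h
    grad[i] = (model(xp) - model(xm)) / (2 * h)

var_y = np.sum((grad * sigma) ** 2)           # first-order variance propagation
print("nominal output:", model(x0))
print("propagated std of output:", np.sqrt(var_y))
```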

  15. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    Science.gov (United States)

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
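
    As a concrete example of the resampling-with-replacement idea mentioned above, the following sketch computes a percentile bootstrap confidence interval for a sample mean; the data, number of resamples and confidence level are illustrative assumptions.

```python
# Sketch: a percentile bootstrap confidence interval for a scalar statistic,
# as described in the record (sample data and 95% level are assumptions).
import numpy as np

rng = np.random.default_rng(7)
sample = rng.exponential(scale=2.0, size=50)   # observed data (illustrative)

n_boot = 10_000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(sample, size=sample.size, replace=True)
    boot_means[b] = resample.mean()

lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean: {sample.mean():.3f}")
print(f"95% bootstrap CI: ({lower:.3f}, {upper:.3f})")
```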

  16. Computational methods for stellarator configurations

    International Nuclear Information System (INIS)

    Betancourt, O.

    1992-01-01

    This project had two main objectives. The first was to continue developing computational methods for the study of three-dimensional magnetic confinement configurations. The second was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct island width corresponding to the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to studying three-dimensional nonlinear effects in Tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and to nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge, D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.

  17. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  18. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  19. Multi-scale multi-physics computational chemistry simulation based on ultra-accelerated quantum chemical molecular dynamics method for structural materials in boiling water reactor

    International Nuclear Information System (INIS)

    Miyamoto, Akira; Sato, Etsuko; Sato, Ryo; Inaba, Kenji; Hatakeyama, Nozomu

    2014-01-01

    In collaboration with experimental experts we have reported in the present conference (Hatakeyama, N. et al., “Experiment-integrated multi-scale, multi-physics computational chemistry simulation applied to corrosion behaviour of BWR structural materials”) the results of multi-scale multi-physics computational chemistry simulations applied to the corrosion behaviour of BWR structural materials. On the macro scale, a macroscopic simulator of the anode polarization curve was developed to solve the spatially one-dimensional electrochemical equations on the material surface at the continuum level in order to understand the corrosion behaviour of a typical BWR structural material, SUS304. The experimental anode polarization behaviour of each pure metal was reproduced by fitting all the rates of the electrochemical reactions, and then the anode polarization curve of SUS304 was calculated using the same parameters and found to reproduce the experimental behaviour successfully. On the meso scale, a kinetic Monte Carlo (KMC) simulator was applied to an actual-time simulation of the morphological corrosion behaviour under the influence of an applied voltage. On the micro scale, an ultra-accelerated quantum chemical molecular dynamics (UA-QCMD) code was applied to various metallic oxide surfaces of Fe2O3, Fe3O4 and Cr2O3, modelled together with water molecules and dissolved metallic ions on the surfaces; the dissolution and segregation behaviours were then successfully simulated dynamically using UA-QCMD. In this paper we describe details of the multi-scale, multi-physics computational chemistry method, especially the UA-QCMD method. This method is approximately 10,000,000 times faster than conventional first-principles molecular dynamics methods based on density-functional theory (DFT), and its accuracy was also validated for various metals and metal oxides compared with DFT results. To assure multi-scale multi-physics computational chemistry simulation based on the UA-QCMD method for

  20. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named the support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
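
    A minimal sketch of the pipeline described in the record (PCA feature extraction, decision-tree-based feature selection, then an SVM classifier) is shown below using scikit-learn on synthetic signals; the data, number of components and number of selected features are assumptions, not the settings used on the BCI Competition II dataset.

```python
# Sketch of the record's pipeline: PCA features, decision-tree-driven feature
# selection, then an SVM classifier (synthetic signals replace the BCI data;
# component counts and the top-5 cutoff are assumptions).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))                 # 300 trials x 64 raw samples
y = rng.integers(0, 2, size=300)               # two mental states
X[y == 1, :8] += 0.6                           # inject a weak class difference

features = PCA(n_components=20, random_state=0).fit_transform(X)

tree = DecisionTreeClassifier(random_state=0).fit(features, y)
selected = np.argsort(tree.feature_importances_)[::-1][:5]   # top-5 features

X_tr, X_te, y_tr, y_te = train_test_split(features[:, selected], y,
                                           test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("SVM accuracy on selected features:", clf.score(X_te, y_te))
```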

  1. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable in processing with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, a new powerful algorithm which improves many geometric computations and makes th

  2. Electromagnetic Computation and Visualization of Transmission Particle Model and Its Simulation Based on GPU

    Directory of Open Access Journals (Sweden)

    Yingnian Wu

    2014-01-01

    Full Text Available Electromagnetic calculation plays an important role in both military and civilian fields. Some methods and models proposed for calculating electromagnetic wave propagation over a large range place a heavy burden on CPU computation and also require a huge amount of memory. Using the GPU to accelerate computation and visualization can reduce the computational burden on the CPU. Based on the forward ray-tracing method, a transmission particle model (TPM) for calculating the electromagnetic field is presented that incorporates the particle method. The movement of a particle obeys the principle of the propagation of electromagnetic waves, and the particle distribution density in space then reflects the electromagnetic field distribution. The algorithm with particle transmission, movement, reflection, and diffraction is described in detail. Since the particles in the TPM are completely independent, it is very suitable for parallel computing based on the GPU. Deductive verification of the TPM with an electric dipole antenna as the transmission source is conducted to prove that the particle movement itself represents the variation of electromagnetic field intensity caused by diffusion. Finally, simulation comparisons are made against the forward and backward ray-tracing methods. The simulation results verify the effectiveness of the proposed method.

  3. A Research Roadmap for Computation-Based Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  4. A Research Roadmap for Computation-Based Human Reliability Analysis

    International Nuclear Information System (INIS)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-01-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  5. Automated Extraction of Cranial Landmarks from Computed Tomography Data using a Combined Method of Knowledge and Pattern Based Approaches

    Directory of Open Access Journals (Sweden)

    Roshan N. RAJAPAKSE

    2016-03-01

    Full Text Available Accurate identification of anatomical structures from medical imaging data is a significant and critical function in the medical domain. Past studies in this context have mainly utilized two approaches: knowledge-based and learning-based methods. Further, most previously reported studies have focused on identification of landmarks from lateral X-ray computed tomography (CT) data, particularly in the field of orthodontics. However, this study focused on extracting cranial landmarks from large sets of cross-sectional CT slices using a method that combines the two aforementioned approaches. The proposed method of this study is centred mainly on template data sets, which were created using the actual contour patterns extracted from CT cases for each of the landmarks in consideration. Firstly, these templates were used to devise rules, which is characteristic of the knowledge-based method. Secondly, the same template sets were employed to perform template matching, related to the learning-based approach. The proposed method was tested on two landmarks, the dorsum sellae and the pterygoid plate, using CT cases of 5 subjects. The results indicate that, out of the 10 tests, the output images were within the expected range (desired accuracy) in 7 instances and within the acceptable range (near accuracy) in 2 instances, thus verifying the effectiveness of the combined, template-set-centric approach proposed in this study.

  6. Methods of computing the results of monitoring the exploratory gallery

    Directory of Open Access Journals (Sweden)

    Krúpa Víazoslav

    2000-09-01

    Full Text Available At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to driving an exploration gallery that secures detailed geological, engineering-geological, hydrogeological and geotechnical research. This research is based on gathering information for the planned use of a full-profile driving machine that would drive the motorway tunnel. From the part of the exploration gallery driven by the TBM method, full information about the parameters of the driving process is gathered by a computer monitoring system mounted on the driving machine. This monitoring system is based on the industrial computer PC 104. It records 4 basic values of the driving process: the electromotor performance of the driving machine Voest-Alpine ATB 35HA, the speed of the driving advance, the rotation speed of the TBM disintegrating head and the total head pressure. The pressure force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, etc. are mathematically calculated. These values characterize rock mass properties and their changes. To define the effectiveness of the driving process, the value of specific energy and the working ability of the driving head are used. The article defines the method of computing the gathered monitoring information, prepared for the driving machine Voest-Alpine ATB 35H at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method created with an EXCEL program and shows selected samples of the graphical elaboration of the first monitoring results obtained from the exploratory gallery driving process in the Višňové-Dubná skala motorway tunnel.

  7. Application of computational methods in genetic study of inflammatory bowel disease.

    Science.gov (United States)

    Li, Jin; Wei, Zhi; Hakonarson, Hakon

    2016-01-21

    Genetic factors play an important role in the etiology of inflammatory bowel disease (IBD). The launch of the genome-wide association study (GWAS) represents a landmark in the genetic study of human complex disease. Concurrently, computational methods have undergone rapid development during the past few years, which has led to the identification of numerous disease susceptibility loci. IBD is one of the successful examples of GWAS and related analyses. A total of 163 genetic loci and multiple signaling pathways have been identified to be associated with IBD. Pleiotropic effects were found for many of these loci, and risk prediction models were built based on a broad spectrum of genetic variants. Important gene-gene and gene-environment interactions and key contributions of the gut microbiome are being discovered. Here we will review the different types of analyses that have been applied to IBD genetic study, discuss the computational methods for each type of analysis, and summarize the discoveries made in IBD research with the application of these methods.

  8. Real Time Animation of Trees Based on BBSC in Computer Games

    Directory of Open Access Journals (Sweden)

    Xuefeng Ao

    2009-01-01

    Full Text Available Researchers in the field of computer games usually find it difficult to simulate the motion of actual 3D model trees because the tree model itself has a very complicated structure, and many sophisticated factors need to be considered during the simulation. Though there are some works on simulating 3D trees and their motion, few of them are used in computer games due to the high demand for real-time performance in computer games. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees under blowing wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the high requirement for real-time performance in computer games.

  9. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    International Nuclear Information System (INIS)

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models, which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). In this study we are particularly interested in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we therefore propose a new method to compute Sobol's indices inspired by Homma–Saltelli, which slightly improves their use of model evaluations, and then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. It allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator inspired by the Homma–Saltelli method is proposed to compute Sobol indices, using a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol's estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
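
    For reference, the sketch below estimates first-order Sobol' indices with the standard Saltelli-style A/B/AB sampling design on a toy model; it illustrates the quantity being computed rather than the paper's improved estimator or the GreenLab application.

```python
# Sketch: Monte Carlo estimation of first-order Sobol' indices with the
# Saltelli-style A/B/AB design (plain numpy; the test function and sample
# size are illustrative, not the GreenLab model of the record).
import numpy as np

def model(x):
    """Toy nonlinear model of three parameters."""
    return np.sin(x[:, 0]) + 5.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(3)
n, d = 20_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA, fB = model(A), model(B)
var_total = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]                       # replace column i with B's column
    fAB = model(AB_i)
    S_i = np.mean(fB * (fAB - fA)) / var_total # Saltelli et al. (2010) estimator
    print(f"first-order index S_{i + 1} = {S_i:.3f}")
```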

  10. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
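
    The building block of the approach above is the conjugate gradient iteration. A serial sketch on a small symmetric positive-definite system (a 1D Laplacian) is given below; the domain-decomposed, parallel variant discussed in the paper is not shown.

```python
# Sketch: the conjugate gradient iteration the record builds on, for a small
# symmetric positive-definite system (serial numpy version only).
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# 1D Laplacian, a typical discretized PDE operator.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```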

  11. Usability test of the ImPRO, computer-based procedure system

    International Nuclear Information System (INIS)

    Jung, Y.; Lee, J.

    2006-01-01

    ImPRO is a computer-based procedure system that presents procedures both as flowcharts and as success logic trees. It is evaluated on the basis of computer-based procedure guidelines and satisfies most requirements, such as those for presentation and functionality. In addition, an SGTR scenario has been performed with ImPRO to evaluate reading comprehension and situation awareness. ImPRO is a software engine that interprets a procedure script language, so it is reliable by design and has been verified with formal methods. One bug, however, remained hidden for one year after release, but it was fixed. Finally, backup paper procedures can be prepared in the same format as the VDU presentation in case of ImPRO failure. (authors)

  12. An efficient and general numerical method to compute steady uniform vortices

    Science.gov (United States)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  13. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    Science.gov (United States)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm's data intensiveness and data-parallel structure with the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on the GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of the data access and the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure that the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  14. Talbot's method for the numerical inversion of Laplace transforms: an implementation for personal computers

    International Nuclear Information System (INIS)

    Garratt, T.J.

    1989-05-01

    Safety assessments of radioactive waste disposal require efficient computer models for the important processes. The present paper is based on an efficient computational technique which can be used to solve a wide variety of safety assessment models. It involves the numerical inversion of analytical solutions to the Laplace-transformed differential equations using a method proposed by Talbot. This method has been implemented on a personal computer in a user-friendly manner. The steps required to implement a particular transform and run the program are outlined. Four examples are described which illustrate the flexibility, accuracy and efficiency of the program. The improvements in computational efficiency described in this paper have application to the probabilistic safety assessment codes ESCORT and MASCOT which are currently under development. Also, it is hoped that the present work will form the basis of software for personal computers which could be used to demonstrate safety assessment procedures to a wide audience. (author)
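
    As an illustration of Talbot-type numerical inversion, the sketch below implements the fixed-Talbot variant (in the form popularized by Abate and Valkó) and checks it on a transform with a known inverse. It is offered only as a representative of the family of Talbot methods; the contour parameters and implementation details of the report's program may differ.

```python
# Sketch: a fixed-Talbot numerical Laplace inversion (Abate-Valko variant),
# shown as one member of the Talbot family the report builds on; the report's
# exact contour and parameter choices are not reproduced here.
import numpy as np

def talbot_invert(F, t, M=32):
    """Approximate f(t) from its Laplace transform F(s) on a Talbot contour."""
    r = 2.0 * M / (5.0 * t)
    result = 0.5 * np.exp(r * t) * F(r)
    for k in range(1, M):
        theta = k * np.pi / M
        cot = np.cos(theta) / np.sin(theta)
        s = r * theta * (cot + 1j)                       # contour point
        sigma = theta + (theta * cot - 1.0) * cot        # contour weight term
        result += (np.exp(t * s) * F(s) * (1.0 + 1j * sigma)).real
    return r * result / M

# Check against a transform with a known inverse: F(s) = 1/(s + 1) <-> exp(-t).
for t in (0.5, 1.0, 2.0):
    approx = talbot_invert(lambda s: 1.0 / (s + 1.0), t)
    print(t, approx, np.exp(-t))
```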

  15. A sediment graph model based on SCS-CN method

    Science.gov (United States)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    Summary: This paper proposes new conceptual sediment graph models based on coupling of popular and extensively used methods, viz., the Nash model based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the power law. These models vary in their complexity and this paper tests their performance using data of the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
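
    The SCS-CN component that the sediment graph models couple with the IUSG and the power law reduces to a short runoff relation. A sketch in SI units is given below; the curve number and rainfall depths are illustrative assumptions, and the sediment routing itself is not shown.

```python
# Sketch: the SCS-CN rainfall-runoff relation used as one building block of
# the record's sediment graph models (units in mm; CN value is an assumption).
def scs_cn_runoff(rainfall_mm, curve_number, lambda_ia=0.2):
    """Direct runoff depth Q from storm rainfall P using the SCS-CN method."""
    S = 25400.0 / curve_number - 254.0        # potential maximum retention (mm)
    Ia = lambda_ia * S                        # initial abstraction (mm)
    if rainfall_mm <= Ia:
        return 0.0
    return (rainfall_mm - Ia) ** 2 / (rainfall_mm - Ia + S)

for P in (20.0, 50.0, 100.0):
    print(P, round(scs_cn_runoff(P, curve_number=75), 2))
```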

  16. BLUES function method in computational physics

    Science.gov (United States)

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.

  17. Pervasive Computing Support for Hospitals: An Overview of the Activity-Based Computing Project

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Bardram, Jakob E

    2007-01-01

    The activity-based computing project researched pervasive computing support for clinical hospital work. Such technologies have potential for supporting the mobile, collaborative, and disruptive use of heterogeneous embedded devices in a hospital.

  18. Comparison of computer-assisted instruction (CAI) versus traditional textbook methods for training in abdominal examination (Japanese experience).

    Science.gov (United States)

    Qayumi, A K; Kurihara, Y; Imai, M; Pachev, G; Seo, H; Hoshino, Y; Cheifetz, R; Matsuura, K; Momoi, M; Saleem, M; Lara-Guerra, H; Miki, Y; Kariya, Y

    2004-10-01

    This study aimed to compare the effects of computer-assisted, text-based and computer-and-text learning conditions on the performances of 3 groups of medical students in the pre-clinical years of their programme, taking into account their academic achievement to date. A fourth group of students served as a control (no-study) group. Participants were recruited from the pre-clinical years of the training programmes in 2 medical schools in Japan, Jichi Medical School near Tokyo and Kochi Medical School near Osaka. Participants were randomly assigned to 4 learning conditions and tested before and after the study on their knowledge of and skill in performing an abdominal examination, in a multiple-choice test and an objective structured clinical examination (OSCE), respectively. Information about performance in the programme was collected from school records and students were classified as average, good or excellent. Student and faculty evaluations of their experience in the study were explored by means of a short evaluation survey. Compared to the control group, all 3 study groups exhibited significant gains in performance on knowledge and performance measures. For the knowledge measure, the gains of the computer-assisted and computer-assisted plus text-based learning groups were significantly greater than the gains of the text-based learning group. The performances of the 3 groups did not differ on the OSCE measure. Analyses of gains by performance level revealed that high achieving students' learning was independent of study method. Lower achieving students performed better after using computer-based learning methods. The results suggest that computer-assisted learning methods will be of greater help to students who do not find the traditional methods effective. Explorations of the factors behind this are a matter for future research.

  19. Reheating breakfast: Age and multitasking on a computer-based and a non-computer-based task

    OpenAIRE

    Feinkohl, I.; Cress, U.; Kimmerle, J.

    2016-01-01

    Computer-based assessments are popular means to measure individual differences, including age differences, in cognitive ability, but are rarely tested for the extent to which they correspond to more realistic behavior. In the present study, we explored the extent to which performance on an existing computer-based task of multitasking ('cooking breakfast') may be generalizable by comparing it with a newly developed version of the same task that required interaction with physical objects. Twent...

  20. Design Of Computer Based Test Using The Unified Modeling Language

    Science.gov (United States)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    The admission selection of Politeknik Negeri Bengkalis through interest and talent search (PMDK), the joint selection admission test for state polytechnics (SB-UMPN) and the independent test (UM-Polbeng) has been conducted using a paper-based test (PBT). The paper-based test model has some weaknesses: it wastes too much paper, questions may leak to the public, and test results can be manipulated. This research aimed to create a computer-based test (CBT) model by using the Unified Modeling Language (UML), which consists of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, it is important to pay attention to protecting the test questions before they are shown, through an encryption and decryption process; the RSA cryptography algorithm was used for this purpose. The questions drawn from the question banks are then randomized using the Fisher-Yates shuffle method. The network architecture used in the computer-based test application is a client-server model over a Local Area Network (LAN). The result of the design is a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
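
    The question randomization mentioned above relies on the Fisher-Yates shuffle. A minimal sketch is given below; the question identifiers are placeholders, and a production system could equally rely on a library routine that implements the same algorithm.

```python
# Sketch: the Fisher-Yates shuffle used to randomize question order, written
# out explicitly (random.shuffle implements the same algorithm).
import random

def fisher_yates_shuffle(items):
    """Shuffle a list in place so that every permutation is equally likely."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)          # pick an index in [0, i]
        items[i], items[j] = items[j], items[i]
    return items

question_ids = list(range(1, 11))         # e.g. ten questions from the bank
print(fisher_yates_shuffle(question_ids))
```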

  1. Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing

    KAUST Repository

    Gao, Xin

    2013-01-11

    Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

  2. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  3. Power quality events recognition using a SVM-based method

    Energy Technology Data Exchange (ETDEWEB)

    Cerqueira, Augusto Santiago; Ferreira, Danton Diego; Ribeiro, Moises Vidal; Duque, Carlos Augusto [Department of Electrical Circuits, Federal University of Juiz de Fora, Campus Universitario, 36036 900, Juiz de Fora MG (Brazil)

    2008-09-15

    In this paper, a novel SVM-based method for power quality event classification is proposed. A simple approach for feature extraction is introduced, based on the subtraction of the fundamental component from the acquired voltage signal. The resulting signal is presented to a support vector machine for event classification. Results from simulation are presented and compared with two other methods, the OTFR and the LCEC. The proposed method shows improved performance at a reasonable computational cost. (author)
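
    As a rough illustration of the pipeline described above (subtract the fundamental component, then hand the residual to a support vector machine), the sketch below fits the fundamental by least squares and uses scikit-learn's SVC; the sampling rate, the toy features and the kernel settings are assumptions rather than the paper's choices.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    FS = 15360.0   # sampling rate in Hz (assumed: 256 samples per 60 Hz cycle)
    F0 = 60.0      # fundamental frequency in Hz (assumed)

    def remove_fundamental(v):
        """Estimate the fundamental by least squares and subtract it,
        leaving only the disturbance (residual) signal."""
        t = np.arange(len(v)) / FS
        basis = np.column_stack([np.sin(2 * np.pi * F0 * t),
                                 np.cos(2 * np.pi * F0 * t)])
        coef, *_ = np.linalg.lstsq(basis, v, rcond=None)
        return v - basis @ coef

    def features(v):
        """Toy feature vector computed from the residual (illustrative only)."""
        r = remove_fundamental(v)
        return np.array([r.std(), np.abs(r).max(),
                         np.mean(r ** 2), np.sum(np.abs(np.diff(r)))])

    def train_classifier(signals, labels):
        """Train an SVM on residual-based features of labelled PQ events."""
        X = np.vstack([features(s) for s in signals])
        return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
    ```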

  4. Computational botany methods for automated species identification

    CERN Document Server

    Remagnino, Paolo; Wilkin, Paul; Cope, James; Kirkup, Don

    2017-01-01

    This book discusses innovative methods for mining information from images of plants, especially leaves, and highlights the diagnostic features that can be implemented in fully automatic systems for identifying plant species. Adopting a multidisciplinary approach, it explores the problem of plant species identification, covering both the concepts of taxonomy and morphology. It then provides an overview of morphometrics, including the historical background and the main steps in the morphometric analysis of leaves together with a number of applications. The core of the book focuses on novel diagnostic methods for plant species identification developed from a computer scientist’s perspective. It then concludes with a chapter on the characterization of botanists' visions, which highlights important cognitive aspects that can be implemented in a computer system to more accurately replicate the human expert’s fixation process. The book not only represents an authoritative guide to advanced computational tools fo...

  5. Using a computer simulation for teaching communication skills: A blinded multisite mixed methods randomized controlled trial

    Science.gov (United States)

    Kron, Frederick W.; Fetters, Michael D.; Scerbo, Mark W.; White, Casey B.; Lypson, Monica L.; Padilla, Miguel A.; Gliva-McConvey, Gayle A.; Belfore, Lee A.; West, Temple; Wallace, Amelia M.; Guetterman, Timothy C.; Schleicher, Lauren S.; Kennedy, Rebecca A.; Mangrulkar, Rajesh S.; Cleary, James F.; Marsella, Stacy C.; Becker, Daniel M.

    2016-01-01

    Objectives To assess advanced communication skills among second-year medical students exposed either to a computer simulation (MPathic-VR) featuring virtual humans, or to a multimedia computer-based learning module, and to understand each group’s experiences and learning preferences. Methods A single-blinded, mixed methods, randomized, multisite trial compared MPathic-VR (N=210) to computer-based learning (N=211). Primary outcomes: communication scores during repeat interactions with MPathic-VR’s intercultural and interprofessional communication scenarios and scores on a subsequent advanced communication skills objective structured clinical examination (OSCE). Multivariate analysis of variance was used to compare outcomes. Secondary outcomes: student attitude surveys and qualitative assessments of their experiences with MPathic-VR or computer-based learning. Results MPathic-VR-trained students improved their intercultural and interprofessional communication performance between their first and second interactions with each scenario. They also achieved significantly higher composite scores on the OSCE than computer-based learning-trained students. Attitudes and experiences were more positive among students trained with MPathic-VR, who valued its providing immediate feedback, teaching nonverbal communication skills, and preparing them for emotion-charged patient encounters. Conclusions MPathic-VR was effective in training advanced communication skills and in enabling knowledge transfer into a more realistic clinical situation. Practice Implications MPathic-VR’s virtual human simulation offers an effective and engaging means of advanced communication training. PMID:27939846

  6. Identification of natural images and computer-generated graphics based on statistical and textural features.

    Science.gov (United States)

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the view of statistics and texture, and 31 dimensions of feature are acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that it can achieve an identification accuracy of 97.89% for computer-generated graphics, and an identification accuracy of 97.75% for natural images. The analyses also demonstrate the proposed method has excellent performance, compared with some existing methods based only on statistical features or other features. The method has a great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.

  7. Computer Simulation of Nonuniform MTLs via Implicit Wendroff and State-Variable Methods

    Directory of Open Access Journals (Sweden)

    L. Brancik

    2011-04-01

    Full Text Available The paper deals with techniques for the computer simulation of nonuniform multiconductor transmission lines (MTLs) based on the implicit Wendroff and the state-variable methods. The techniques fall into the class of finite-difference time-domain (FDTD) methods useful for solving various electromagnetic systems. Their basic variants are extended and modified to enable solving both voltage and current distributions along nonuniform MTLs' wires and their sensitivities with respect to lumped and distributed parameters. An experimental error analysis is performed based on the Thomson cable, whose analytical solutions are known, and some examples of the simulation of both uniform and nonuniform MTLs are presented. Based on programs written in the Matlab language, CPU times are analyzed to compare the efficiency of the methods. Some results for nonlinear MTL simulation are presented as well.

  8. Practice Makes Perfect: Using a Computer-Based Business Simulation in Entrepreneurship Education

    Science.gov (United States)

    Armer, Gina R. M.

    2011-01-01

    This article explains the use of a specific computer-based simulation program as a successful experiential learning model and as a way to increase student motivation while augmenting conventional methods of business instruction. This model is based on established adult learning principles.

  9. Simulation-Based Planning of Optimal Conditions for Industrial Computed Tomography

    DEFF Research Database (Denmark)

    Reisinger, S.; Kasperl, S.; Franz, M.

    2011-01-01

    We present a method to optimise conditions for industrial computed tomography (CT). This optimisation is based on a deterministic simulation. Our algorithm finds task-specific CT equipment settings to achieve optimal exposure parameters by means of an STL-model of the specimen and a raytracing...

  10. Computational studies of physical properties of Nb-Si based alloys

    Energy Technology Data Exchange (ETDEWEB)

    Ouyang, Lizhi [Middle Tennessee State Univ., Murfreesboro, TN (United States)

    2015-04-16

    The overall goal is to provide physical properties data supplementing experiments for thermodynamic modeling and other simulations, such as phase field simulation for microstructure and continuum simulations for mechanical properties. These predictive computational models and simulations may yield insights that can be used to guide materials design, processing, and manufacture. Ultimately, they may lead to usable Nb-Si based alloys, which could play an important role in the current push towards greener energy. The main objectives of the proposed projects are: (1) developing a first principles method based supercell approach for calculating thermodynamic and mechanical properties of ordered crystals and disordered lattices including solid solutions; (2) application of the supercell approach to Nb-Si based alloys to compute physical properties data that can be used for thermodynamic modeling and other simulations to guide the optimal design of Nb-Si based alloys.

  11. Evaluation of computer-based ultrasonic inservice inspection systems

    International Nuclear Information System (INIS)

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T.

    1994-03-01

    This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

  12. Learners’ views about cloud computing-based group activities

    Directory of Open Access Journals (Sweden)

    Yildirim Serkan

    2017-01-01

    Full Text Available Because they can be used independently of time and place during software development and make it easier to access information with mobile technologies, cloud-based environments have attracted the attention of the education world, and this technology has started to be used in various activities. In this study of programming education, the effects of extracurricular group assignments in cloud-based environments on learners were evaluated in terms of group work satisfaction, ease of use and user satisfaction. Within the scope of a computer programming course lasting eight weeks, a total of 100 students participated in the study, including 34 men and 66 women. Participants were divided into groups of at least three people, considering the advantages of cooperative learning in programming education. In this study, carried out in both conventional and cloud-based environments, a between-groups factorial design was used. The data, collected through questionnaires on opinions about group work, were examined with quantitative analysis methods. According to the results, extracurricular learning activities carried out as group work created satisfaction. However, perceptions of the ease of use of the environment and user satisfaction were only partly positive. Despite otherwise similar views, male participants perceived cloud computing-based environments as easier to use. Variables such as class level, computer usage time and internet usage time had no effect on satisfaction or on perceptions of ease of use. Evening class students stated that they found cloud-based learning environments easier to use, were more satisfied with using these environments, and were happier with group work than daytime students.

  13. Property-Based Anonymous Attestation in Trusted Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhen-Hu Ning

    2014-01-01

    Full Text Available In the remote attestation of the Trusted Computer (TC) computing mode TCCP, the trusted computer TC carries an excessive burden, and the anonymity and platform configuration information security of computing nodes cannot be guaranteed. To overcome these defects, based on research on and analysis of current schemes, we propose an anonymous proof protocol based on property certificates. The platform configuration information is converted by a matrix algorithm into the property certificate, and the remote attestation is implemented by a trusted ring signature scheme based on the Strong RSA Assumption. By the trusted ring signature scheme based on property certificates, we achieve the anonymity of computing nodes and prevent the leakage of platform configuration information. By simulation, we obtain the computational efficiency of the scheme. We also expand the protocol and obtain an anonymous attestation based on ECC. By scenario comparison, we obtain the trusted ring signature scheme based on RSA, which has advantages as the number of ring members grows.

  14. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of number of processing elements and computing time. (author)
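
    The record does not spell out the elimination scheme, but fraction-free Gaussian elimination is usually identified with Bareiss' algorithm; the sketch below is a plain sequential version of it, shown only to make the arithmetic concrete (the systolic array-processor mapping that the paper is actually about is not reproduced).

    ```python
    from fractions import Fraction

    def fraction_free_solve(A, b):
        """Solve A x = b over the integers with Bareiss' fraction-free elimination.
        Every intermediate division is exact, so no rounding occurs before the
        final back-substitution."""
        n = len(A)
        M = [list(map(int, row)) + [int(bi)] for row, bi in zip(A, b)]  # [A | b]
        prev_pivot = 1
        for k in range(n - 1):
            if M[k][k] == 0:
                raise ValueError("zero pivot; pivoting is omitted in this sketch")
            for i in range(k + 1, n):
                for j in range(k + 1, n + 1):
                    # Exact division guaranteed by Sylvester's identity.
                    M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev_pivot
                M[i][k] = 0
            prev_pivot = M[k][k]
        # Back-substitution; rational arithmetic only appears in this last step.
        x = [Fraction(0)] * n
        for i in range(n - 1, -1, -1):
            s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
            x[i] = Fraction(s) / M[i][i]
        return x

    print(fraction_free_solve([[2, 1], [1, 3]], [3, 5]))  # [Fraction(4, 5), Fraction(7, 5)]
    ```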

  15. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation ... universities, and later did system analysis, ... personal computers (PC) and low cost software packages and tools. They can serve as useful learning experience through student projects. Models are .... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...

  16. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    Science.gov (United States)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, traditional computer-generated holograms (CGHs) often take a long time to compute, even without complex, photorealistic rendering. The backward ray-tracing technique is able to render photorealistic, high-quality images and, thanks to its high degree of parallelism, noticeably reduces the computation time. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the ray-tracing technique. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.
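
    The backward-ray-tracing hologram computation itself is not reproduced here; as a point of reference, the sketch below implements the traditional point-cloud CGH that the abstract compares against, in which each object point contributes a spherical wave to every hologram pixel (the wavelength and pixel pitch are assumed values).

    ```python
    import numpy as np

    WAVELENGTH = 532e-9        # metres (assumed green laser)
    K = 2 * np.pi / WAVELENGTH
    PIXEL_PITCH = 8e-6         # hologram pixel pitch in metres (assumed)

    def point_cloud_hologram(points, amplitudes, n=256):
        """Traditional point-cloud CGH: sum the spherical waves of all object
        points at every hologram pixel and keep the phase of the total field."""
        coords = (np.arange(n) - n / 2) * PIXEL_PITCH
        X, Y = np.meshgrid(coords, coords)
        field = np.zeros((n, n), dtype=complex)
        for (px, py, pz), a in zip(points, amplitudes):
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
            field += a * np.exp(1j * K * r) / r
        return np.angle(field)   # phase-only hologram

    # Two object points 5-6 cm behind the hologram plane (illustrative).
    points = [(0.0, 0.0, 0.05), (2e-4, -1e-4, 0.06)]
    hologram = point_cloud_hologram(points, amplitudes=[1.0, 0.8])
    ```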

  17. Self-guaranteed measurement-based quantum computation

    Science.gov (United States)

    Hayashi, Masahito; Hajdušek, Michal

    2018-05-01

    In order to guarantee the output of a quantum computation, we usually assume that the component devices are trusted. However, when the total computation process is large, it is not easy to guarantee the whole system when we have scaling effects, unexpected noise, or unaccounted for correlations between several subsystems. If we do not trust the measurement basis or the prepared entangled state, we do need to be worried about such uncertainties. To this end, we propose a self-guaranteed protocol for verification of quantum computation under the scheme of measurement-based quantum computation where no prior-trusted devices (measurement basis or entangled state) are needed. The approach we present enables the implementation of verifiable quantum computation using the measurement-based model in the context of a particular instance of delegated quantum computation where the server prepares the initial computational resource and sends it to the client, who drives the computation by single-qubit measurements. Applying self-testing procedures, we are able to verify the initial resource as well as the operation of the quantum devices and hence the computation itself. The overhead of our protocol scales with the size of the initial resource state to the power of 4 times the natural logarithm of the initial state's size.

  18. Design of Intelligent Robot as A Tool for Teaching Media Based on Computer Interactive Learning and Computer Assisted Learning to Improve the Skill of University Student

    Science.gov (United States)

    Zuhrie, M. S.; Basuki, I.; Asto B, I. G. P.; Anifah, L.

    2018-01-01

    The focus of the research is a teaching module which incorporates manufacturing, mechanical design planning, a control system based on microprocessor technology, and the maneuverability of the robot. Computer interactive learning and computer-assisted learning are strategies that emphasize the use of computers and learning aids in teaching and learning activities. This research applied the 4-D research and development model suggested by Thiagarajan et al. (1974), which consists of four stages: Define, Design, Develop, and Disseminate. The research was conducted using a development research design with the objective of producing a learning tool in the form of intelligent robot modules and kits based on Computer Interactive Learning and Computer Assisted Learning. Data from the Indonesia Robot Contest during the period 2009-2015 show that the developed modules confirm the fourth stage of the development method, the Disseminate stage. The modules guide students to produce an intelligent robot tool for teaching based on Computer Interactive Learning and Computer Assisted Learning. Students' responses also showed positive feedback on the robotics module and computer-based interactive learning.

  19. An Overview of Computer-Based Natural Language Processing.

    Science.gov (United States)

    Gevarter, William B.

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…

  20. Computational methods for protein identification from mass spectrometry data.

    Directory of Open Access Journals (Sweden)

    Leo McHugh

    2008-02-01

    Full Text Available Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification are also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology.

  1. Contributions of computational chemistry and biophysical techniques to fragment-based drug discovery.

    Science.gov (United States)

    Gozalbes, Rafael; Carbajo, Rodrigo J; Pineda-Lucena, Antonio

    2010-01-01

    In the last decade, fragment-based drug discovery (FBDD) has evolved from a novel approach in the search of new hits to a valuable alternative to the high-throughput screening (HTS) campaigns of many pharmaceutical companies. The increasing relevance of FBDD in the drug discovery universe has been concomitant with an implementation of the biophysical techniques used for the detection of weak inhibitors, e.g. NMR, X-ray crystallography or surface plasmon resonance (SPR). At the same time, computational approaches have also been progressively incorporated into the FBDD process and nowadays several computational tools are available. These stretch from the filtering of huge chemical databases in order to build fragment-focused libraries comprising compounds with adequate physicochemical properties, to more evolved models based on different in silico methods such as docking, pharmacophore modelling, QSAR and virtual screening. In this paper we will review the parallel evolution and complementarities of biophysical techniques and computational methods, providing some representative examples of drug discovery success stories by using FBDD.

  2. Application of CT-PSF-based computer-simulated lung nodules for evaluating the accuracy of computer-aided volumetry.

    Science.gov (United States)

    Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji

    2012-07-01

    With the wide dissemination of computed tomography (CT) screening for lung cancer, measuring the nodule volume accurately with computer-aided volumetry software is increasingly important. Many studies for determining the accuracy of volumetry software have been performed using a phantom with artificial nodules. These phantom studies are limited, however, in their ability to reproduce the nodules both accurately and in the variety of sizes and densities required. Therefore, we propose a new approach of using computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules regarding the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate within an error of 20 % for nodules >5 mm and with the difference between nodule density and background (lung) (CT value) being 400-600 HU. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We concluded that the proposed method is effective for evaluating the performance of computer-aided volumetry software.

  3. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling based reliability sensitivity analysis method is employed to reduce the computational effort further when design variables are distributional parameters of input random variables. The proposed methodology is applied on two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.

  4. Performance of machine learning methods for ligand-based virtual screening.

    Science.gov (United States)

    Plewczynski, Dariusz; Spieser, Stéphane A H; Koch, Uwe

    2009-05-01

    Computational screening of compound databases has become increasingly popular in pharmaceutical research. This review focuses on the evaluation of ligand-based virtual screening using active compounds as templates in the context of drug discovery. Ligand-based screening techniques are based on comparative molecular similarity analysis of compounds with known and unknown activity. We provide an overview of publications that have evaluated different machine learning methods, such as support vector machines, decision trees, ensemble methods such as boosting, bagging and random forests, clustering methods, neuronal networks, naïve Bayesian, data fusion methods and others.

  5. Method of generating a computer readable model

    DEFF Research Database (Denmark)

    2008-01-01

    A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...
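
    A minimal sketch of the data structures the abstract describes: construction elements carrying typed connection elements, and a test that pairs up connection elements of two elements lying within a predetermined proximity. The class names, the compatibility table and the threshold are illustrative assumptions, not details from the patent text.

    ```python
    from dataclasses import dataclass, field
    from math import dist

    @dataclass
    class ConnectionElement:
        position: tuple          # (x, y, z) in model coordinates
        connection_type: str     # e.g. "stud" or "tube" (assumed type names)

    @dataclass
    class ConstructionElement:
        name: str
        connections: list = field(default_factory=list)

    # Connection types allowed to mate (assumed connectivity information).
    COMPATIBLE = {("stud", "tube"), ("tube", "stud")}
    PROXIMITY = 0.5              # predetermined proximity threshold (model units)

    def find_connections(first, second):
        """Return pairs of connection elements, one from each construction element,
        that lie within the proximity threshold and have compatible types."""
        pairs = []
        for a in first.connections:
            for b in second.connections:
                close = dist(a.position, b.position) <= PROXIMITY
                compatible = (a.connection_type, b.connection_type) in COMPATIBLE
                if close and compatible:
                    pairs.append((a, b))
        return pairs

    brick = ConstructionElement("brick_2x2", [ConnectionElement((0, 0, 1.0), "stud")])
    plate = ConstructionElement("plate_2x2", [ConnectionElement((0, 0, 1.1), "tube")])
    print(len(find_connections(brick, plate)))   # 1
    ```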

  6. A computational method for sharp interface advection

    Science.gov (United States)

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  7. A computational method for sharp interface advection.

    Science.gov (United States)

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.

  8. A Compute Environment of ABC95 Array Computer Based on Multi-FPGA Chip

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The ABC95 array computer is a multi-function-network computer based on FPGA technology. The multi-function network allows processors to access data in memory without conflicts and to exchange data with other processors over an enhanced MESH network. The ABC95 instruction set includes control instructions, scalar instructions and vector instructions; the network instructions are mainly introduced here. A programming environment for the ABC95 array computer assembly language is designed, and a programming environment for the ABC95 array computer based on VC++ is put forward. It includes functions for loading ABC95 array computer programs and data, storing, running and so on. In particular, the data type for conflict-free access on the ABC95 array computer is defined. The results show that these technologies support effective programming of the ABC95 array computer.

  9. Improved Real-time Denoising Method Based on Lifting Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Liu Zhaohua

    2014-06-01

    Full Text Available Signal denoising can not only enhance the signal-to-noise ratio (SNR) but also reduce the effect of noise. In order to satisfy the requirements of real-time signal denoising, an improved semisoft shrinkage real-time denoising method based on the lifting wavelet transform is proposed. A moving data window realizes the real-time wavelet denoising, and the lifting-scheme wavelet transform reduces the computational complexity. In addition, a hyperbolic threshold function and recursive threshold computation preserve the dynamic characteristics of the system and further improve the real-time computational efficiency. The simulation results show that the semisoft shrinkage real-time denoising method performs considerably better than the traditional soft-thresholding and hard-thresholding methods. Therefore, this method can solve more practical engineering problems.
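
    The sketch below shows the underlying wavelet-shrinkage idea on a single data window using PyWavelets, with an ordinary DWT and a soft threshold; the paper's lifting-scheme transform, hyperbolic (semisoft) threshold function and recursive threshold update are not reproduced.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(window, wavelet="db4", level=3):
        """Denoise one data window by wavelet shrinkage:
        decompose, soft-threshold the detail coefficients, reconstruct."""
        coeffs = pywt.wavedec(window, wavelet, level=level)
        # Universal threshold from the noise level estimated on the finest details.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(window)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(window)]

    # Moving-window use: denoise the newest block of samples as it arrives.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 512)
    noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)
    clean = wavelet_denoise(noisy)
    ```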

  10. A Review of Computational Methods to Predict the Risk of Rupture of Abdominal Aortic Aneurysms

    Directory of Open Access Journals (Sweden)

    Tejas Canchi

    2015-01-01

    Full Text Available Computational methods have played an important role in health care in recent years, as determining parameters that affect a certain medical condition is not possible in experimental conditions in many cases. Computational fluid dynamics (CFD) methods have been used to accurately determine the nature of blood flow in the cardiovascular and nervous systems and air flow in the respiratory system, thereby giving the surgeon a diagnostic tool to plan treatment accordingly. Machine learning or data mining (MLD) methods are currently used to develop models that learn from retrospective data to make a prediction regarding factors affecting the progression of a disease. These models have also been successful in incorporating factors such as patient history and occupation. MLD models can be used as a predictive tool to determine rupture potential in patients with abdominal aortic aneurysms (AAA), along with CFD-based prediction of parameters like wall shear stress and pressure distributions. A combination of these computer methods can be pivotal in bridging the gap between translational and outcomes research in medicine. This paper reviews the use of computational methods in the diagnosis and treatment of AAA.

  11. Transforming bases to bytes: Molecular computing with DNA

    Indian Academy of Sciences (India)

    Despite the popular image of silicon-based computers for computation, an embryonic field of molecular computation is emerging, where molecules in solution perform computational ..... [4] Mao C, Sun W, Shen Z and Seeman N C 1999 A nanomechanical device based on the B-Z transition of DNA; Nature 397 144–146.

  12. Comparison of four computational methods for computing Q factors and resonance wavelengths in photonic crystal membrane cavities

    DEFF Research Database (Denmark)

    de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn; Burger, Sven

    2016-01-01

    We benchmark four state-of-the-art computational methods by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Special attention is paid to the influence of the size of the computational domain. Convergence is not obtained for some of the methods, indicating that some are more suitable than others for analyzing line defect cavities.

  13. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computer resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large scale hydrological simulations and model runs in an open and integrated environment.

  14. Iris features-based heart disease diagnosis by computer vision

    Science.gov (United States)

    Nguchu, Benedictor A.; Li, Li

    2017-07-01

    The study takes advantage of several recent breakthroughs in computer vision technology to develop a new mid-iris biomedical platform that processes iris images for the early detection of heart disease. Early detection of heart disease opens the possibility of non-surgical treatment, as suggested by biomedical researchers and associated institutions. However, our observations revealed that a clinically practicable solution that is both sensitive and specific for early detection is still lacking; as a result, mortality remains high. Delayed diagnostic procedures and the inefficiency and complications of available methods are further reasons for this situation. Therefore, this research proposes the novel IFB (Iris Features Based) method for the diagnosis of premature and early-stage heart disease. The method combines computer vision and iridology to obtain a robust, non-contact, non-radioactive, and cost-effective diagnostic tool. It analyzes inherent tissue weakness and changes in color and pattern in the specific region of the iris that responds to impulses of the heart, as per the Bernard Jensen iris chart. Changes in this region of the iris indicate the presence of degenerative abnormalities in the heart. These changes are detected and analyzed by the IFB method, which includes tensor-based gradients (TBG), multi-orientation Gabor filters (GF), textural oriented features (TOF), and speeded-up robust features (SURF). Kernel and multi-class support vector machine classifiers are used for classifying normal and pathological iris features. Experimental results demonstrated that the proposed method not only has better diagnostic performance, but also provides insight for the early detection of other diseases.

  15. The Direct Lighting Computation in Global Illumination Methods

    Science.gov (United States)

    Wang, Changyaw Allen

    1994-01-01

    Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection, on Monte Carlo sampling methods, and on light source simplification. Results include a new sample generation method, a framework for predicting the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
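
    As a minimal illustration of Monte Carlo direct-lighting estimation (one bounce only, no occlusion, Lambertian surface), the sketch below uses uniform area sampling of a rectangular light; it is a generic estimator with made-up scene values, not the thesis's sample-generation method.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def estimate_direct(point, normal, light_corner, edge_u, edge_v,
                        emitted, albedo, n_samples=64):
        """Monte Carlo estimate of direct lighting at a Lambertian surface point
        from a rectangular area light, assuming unobstructed visibility."""
        light_n = np.cross(edge_u, edge_v)
        area = np.linalg.norm(light_n)
        light_n = light_n / area                 # unit normal of the light
        total = 0.0
        for _ in range(n_samples):
            u, v = rng.random(), rng.random()    # uniform point on the light
            light_pt = light_corner + u * edge_u + v * edge_v
            d = light_pt - point
            r2 = d @ d
            wi = d / np.sqrt(r2)                 # direction towards the light sample
            cos_surf = max(wi @ normal, 0.0)
            cos_light = abs(wi @ light_n)        # light treated as two-sided here
            # Area-sampling estimator: Le * (albedo/pi) * G / pdf, with pdf = 1/area.
            total += emitted * (albedo / np.pi) * cos_surf * cos_light / r2 * area
        return total / n_samples

    # Unit-area light two metres above a horizontal surface point (illustrative).
    L = estimate_direct(point=np.array([0.0, 0.0, 0.0]),
                        normal=np.array([0.0, 0.0, 1.0]),
                        light_corner=np.array([-0.5, -0.5, 2.0]),
                        edge_u=np.array([1.0, 0.0, 0.0]),
                        edge_v=np.array([0.0, 1.0, 0.0]),
                        emitted=10.0, albedo=0.8)
    ```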

  16. Cloud computing methods and practical approaches

    CERN Document Server

    Mahmood, Zaigham

    2013-01-01

    This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an

  17. Computer-Based Learning in Chemistry Classes

    Science.gov (United States)

    Pietzner, Verena

    2014-01-01

    Currently not many people would doubt that computers play an essential role in both public and private life in many countries. However, somewhat surprisingly, evidence of computer use is difficult to find in German state schools although other countries have managed to implement computer-based teaching and learning in their schools. This paper…

  18. The derivative based variance sensitivity analysis for the distribution parameters and its computation

    International Nuclear Information System (INIS)

    Wang, Pan; Lu, Zhenzhou; Ren, Bo; Cheng, Lei

    2013-01-01

    The output variance is an important measure for the performance of a structural system, and it is always influenced by the distribution parameters of inputs. In order to identify the influential distribution parameters and make it clear that how those distribution parameters influence the output variance, this work presents the derivative based variance sensitivity decomposition according to Sobol′s variance decomposition, and proposes the derivative based main and total sensitivity indices. By transforming the derivatives of various orders variance contributions into the form of expectation via kernel function, the proposed main and total sensitivity indices can be seen as the “by-product” of Sobol′s variance based sensitivity analysis without any additional output evaluation. Since Sobol′s variance based sensitivity indices have been computed efficiently by the sparse grid integration method, this work also employs the sparse grid integration method to compute the derivative based main and total sensitivity indices. Several examples are used to demonstrate the rationality of the proposed sensitivity indices and the accuracy of the applied method

  19. Distinguishing Computer-Generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning

    Directory of Open Access Journals (Sweden)

    Ye Yao

    2018-04-01

    Full Text Available Computer-generated graphics (CGs) are images generated by computer software. The rapid development of computer graphics technologies has made it easier to generate photorealistic computer graphics, and these graphics are quite difficult to distinguish from natural images (NIs) with the naked eye. In this paper, we propose a method based on sensor pattern noise (SPN) and deep learning to distinguish CGs from NIs. Before being fed into our convolutional neural network (CNN)-based model, these images (CGs and NIs) are clipped into image patches. Furthermore, three high-pass filters (HPFs) are used to remove low-frequency signals, which represent the image content. These filters are also used to reveal the residual signal as well as SPN introduced by the digital camera device. Different from the traditional methods of distinguishing CGs from NIs, the proposed method utilizes a five-layer CNN to classify the input image patches. Based on the classification results of the image patches, we deploy a majority vote scheme to obtain the classification results for the full-size images. The experiments have demonstrated that (1) the proposed method with three HPFs can achieve better results than that with only one HPF or no HPF, and that (2) the proposed method with three HPFs achieves 100% accuracy, although the NIs undergo a JPEG compression with a quality factor of 75.
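
    The sketch below illustrates only the surrounding pipeline: one example high-pass residual filter, non-overlapping patch extraction, and the majority vote over per-patch labels. The five-layer CNN and the paper's specific three HPFs are not reproduced; the kernel and the placeholder classifier are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import convolve
    from collections import Counter

    # One example high-pass kernel used to suppress image content and expose the
    # residual/SPN signal (not one of the paper's three filters).
    HPF = np.array([[-1,  2, -1],
                    [ 2, -4,  2],
                    [-1,  2, -1]], dtype=float)

    def to_patches(image, size=64):
        """Clip a grayscale image into non-overlapping size x size patches."""
        h, w = image.shape
        return [image[i:i + size, j:j + size]
                for i in range(0, h - size + 1, size)
                for j in range(0, w - size + 1, size)]

    def classify_image(image, patch_classifier):
        """High-pass filter the image, classify each patch (0 = natural, 1 = CG),
        and majority-vote the patch labels to decide for the full-size image."""
        residual = convolve(image.astype(float), HPF, mode="reflect")
        votes = [patch_classifier(p) for p in to_patches(residual)]
        return Counter(votes).most_common(1)[0][0]

    # `patch_classifier` stands in for the trained five-layer CNN.
    dummy_classifier = lambda patch: int(patch.std() > 5.0)   # placeholder rule
    image = np.random.default_rng(0).integers(0, 256, size=(256, 256))
    print(classify_image(image, dummy_classifier))
    ```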

  20. A method of computer modelling the lithium-ion batteries aging process based on the experimental characteristics

    Science.gov (United States)

    Czerepicki, A.; Koniak, M.

    2017-06-01

    The paper presents a method of modelling the aging processes of lithium-ion batteries, its implementation as a computer application, and results for battery state estimation. The authors use a previously developed behavioural battery model built from operating characteristics obtained experimentally. This model was implemented as a computer program using a database to store the battery characteristics; the aging process is a new, extended functionality of the model. The simulation algorithm uses real measurements of battery capacity as a function of the number of charge and discharge cycles, and takes into account incomplete charge or discharge cycles, which are characteristic of transport powered by electricity. The developed model was used to simulate battery state estimation for different load profiles obtained by measuring the movement of selected means of transport.
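
    A minimal sketch of the aging mechanism described above: partial charge/discharge throughput is accumulated as equivalent full cycles, and the current capacity is interpolated from a measured capacity-versus-cycles characteristic. The fade numbers and the class layout are illustrative, not the paper's data or code.

    ```python
    import numpy as np

    # Measured capacity fade curve: equivalent full cycles -> capacity in Ah
    # (illustrative numbers only).
    CYCLES = np.array([0, 200, 400, 600, 800, 1000])
    CAP_AH = np.array([50.0, 48.5, 47.2, 45.8, 44.1, 42.0])

    class AgingModel:
        """Track battery aging from possibly incomplete charge/discharge cycles."""

        def __init__(self, nominal_ah=50.0):
            self.nominal_ah = nominal_ah
            self.equivalent_cycles = 0.0

        def log_throughput(self, discharged_ah):
            # A partial discharge counts as a fraction of a full cycle.
            self.equivalent_cycles += discharged_ah / self.nominal_ah

        def current_capacity(self):
            # Interpolate the measured fade characteristic at the accumulated cycles.
            return float(np.interp(self.equivalent_cycles, CYCLES, CAP_AH))

    model = AgingModel()
    for trip_ah in [12.0, 30.5, 7.3] * 100:   # e.g. repeated daily trips of a vehicle
        model.log_throughput(trip_ah)
    print(round(model.equivalent_cycles, 1), round(model.current_capacity(), 2))
    ```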

  1. Remote sensing image ship target detection method based on visual attention model

    Science.gov (United States)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    The traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. Considering this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, thereby improving the efficiency of ship target detection in remote sensing images.
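
    The abstract does not specify which bottom-up attention model is used; the sketch below uses spectral-residual saliency as one common bottom-up stand-in, only to illustrate how a saliency map can restrict the image regions handed to a detailed ship detector.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(gray):
        """Bottom-up saliency map via the spectral residual: regions whose
        log-spectrum deviates from its local average stand out as candidates."""
        f = np.fft.fft2(gray.astype(float))
        log_amp = np.log(np.abs(f) + 1e-9)
        phase = np.angle(f)
        residual = log_amp - uniform_filter(log_amp, size=3)
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        sal = gaussian_filter(sal, sigma=2.5)
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

    # Toy scene: a bright ship-like blob on a noisy sea background.
    rng = np.random.default_rng(0)
    sea = rng.normal(0.3, 0.02, (256, 256))
    sea[100:110, 120:160] = 0.9
    candidate_mask = spectral_residual_saliency(sea) > 0.5   # regions worth detailed search
    ```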

  2. A fast inverse consistent deformable image registration method based on symmetric optical flow computation

    International Nuclear Information System (INIS)

    Yang Deshan; Li Hua; Low, Daniel A; Deasy, Joseph O; Naqa, Issam El

    2008-01-01

    Deformable image registration is widely used in various radiation therapy applications including daily treatment planning adaptation to map planned tissue or dose to changing anatomy. In this work, a simple and efficient inverse consistency deformable registration method is proposed with aims of higher registration accuracy and faster convergence speed. Instead of registering image I to a second image J, the two images are symmetrically deformed toward one another in multiple passes, until both deformed images are matched and correct registration is therefore achieved. In each pass, a delta motion field is computed by minimizing a symmetric optical flow system cost function using modified optical flow algorithms. The images are then further deformed with the delta motion field in the positive and negative directions respectively, and then used for the next pass. The magnitude of the delta motion field is forced to be less than 0.4 voxel for every pass in order to guarantee smoothness and invertibility for the two overall motion fields that are accumulating the delta motion fields in both positive and negative directions, respectively. The final motion fields to register the original images I and J, in either direction, are calculated by inverting one overall motion field and combining the inversion result with the other overall motion field. The final motion fields are inversely consistent and this is ensured by the symmetric way that registration is carried out. The proposed method is demonstrated with phantom images, artificially deformed patient images and 4D-CT images. Our results suggest that the proposed method is able to improve the overall accuracy (reducing registration error by 30% or more, compared to the original and inversely inconsistent optical flow algorithms), reduce the inverse consistency error (by 95% or more) and increase the convergence rate (by 100% or more). The overall computation speed may slightly decrease, or increase in most cases

  3. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  4. Computer Vision Based Measurement of Wildfire Smoke Dynamics

    Directory of Open Access Journals (Sweden)

    BUGARIC, M.

    2015-02-01

    Full Text Available This article presents a novel method for measurement of wildfire smoke dynamics based on computer vision and augmented reality techniques. The aspect of smoke dynamics is an important feature in video smoke detection that could distinguish smoke from visually similar phenomena. However, most of the existing smoke detection systems are not capable of measuring the real-world size of the detected smoke regions. Using computer vision and GIS-based augmented reality, we measure the real dimensions of smoke plumes, and observe the change in size over time. The measurements are performed on offline video data with known camera parameters and location. The observed data is analyzed in order to create a classifier that could be used to eliminate certain categories of false alarms induced by phenomena with different dynamics than smoke. We carried out an offline evaluation where we measured the improvement in the detection process achieved using the proposed smoke dynamics characteristics. The results show a significant increase in algorithm performance, especially in terms of reducing false alarms rate. From this it follows that the proposed method for measurement of smoke dynamics could be used to improve existing smoke detection algorithms, or taken into account when designing new ones.

  5. A real-time spike sorting method based on the embedded GPU.

    Science.gov (United States)

    Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng

    2017-07-01

    Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. Graphic processing unit (GPU) with high parallel computing capability might provide an alternative solution for increasing real-time computational demands on spike sorting. This study reported a method of real-time spike sorting through computing unified device architecture (CUDA) which was implemented on an embedded GPU (NVIDIA JETSON Tegra K1, TK1). The sorting approach is based on the principal component analysis (PCA) and K-means. By analyzing the parallelism of each process, the method was further optimized in the thread memory model of GPU. Our results showed that the GPU-based classifier on TK1 is 37.92 times faster than the MATLAB-based classifier on PC while their accuracies were the same with each other. The high-performance computing features of embedded GPU demonstrated in our studies suggested that the embedded GPU provide a promising platform for the real-time neural signal processing.
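
    The paper's contribution is the CUDA implementation on the embedded GPU; the sketch below shows only the underlying PCA plus K-means sorting pipeline, run on the CPU with scikit-learn over synthetic waveforms (the number of units, the number of components and the spike shapes are assumptions).

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def sort_spikes(waveforms, n_components=3, n_units=3):
        """Sort detected spike waveforms (n_spikes x n_samples) into putative units:
        project onto the leading principal components, then cluster with K-means."""
        features = PCA(n_components=n_components).fit_transform(waveforms)
        return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(features)

    # Toy data: three synthetic spike shapes plus noise (stand-in for recordings).
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 48)
    templates = [a * np.exp(-((t - c) ** 2) / 0.005)
                 for c, a in [(0.3, 1.0), (0.5, 0.7), (0.4, 1.3)]]
    waveforms = np.vstack([tpl + 0.05 * rng.standard_normal(48)
                           for tpl in templates for _ in range(200)])
    print(np.bincount(sort_spikes(waveforms)))   # roughly 200 spikes per unit
    ```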

  6. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation.

    Science.gov (United States)

    Kang, Sung-Won; Lee, Woo-Jin; Choi, Soon-Chul; Lee, Sam-Sun; Heo, Min-Suk; Huh, Kyung-Hoe; Kim, Tae-Il; Yi, Won-Jin

    2015-03-01

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased. These results demonstrate that volumetric bone-implant contact and absorption of absorbable implants can be quantified with micro-CT analysis using a region-based segmentation method.
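
    A rough sketch of the kind of pipeline the abstract outlines: region labeling, morphological operations and voxel counting with SciPy's ndimage. The paper's actual region-based segmentation and its exact VBIC definition are not reproduced; the contact definition and the synthetic masks below are simplified assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def largest_region(mask):
        """Keep only the largest connected component (region-labeling step)."""
        labels, n = ndimage.label(mask)
        if n == 0:
            return mask
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        return labels == (np.argmax(sizes) + 1)

    def volumetric_contact(implant_mask, bone_mask, voxel_volume_mm3):
        """Toy volumetric bone-implant contact: bone voxels lying in the one-voxel
        shell around the implant surface, converted to a volume."""
        shell = ndimage.binary_dilation(implant_mask) & ~implant_mask
        return np.count_nonzero(shell & bone_mask) * voxel_volume_mm3

    # Synthetic 3-D example (stand-in for segmented micro-CT masks).
    vol = np.zeros((40, 40, 40), dtype=bool)
    vol[10:30, 10:30, 10:30] = True                     # implant candidate voxels
    implant = largest_region(ndimage.binary_closing(vol))
    bone = np.zeros_like(vol)
    bone[8:32, 8:32, 28:36] = True                      # surrounding bone voxels
    print(volumetric_contact(implant, bone, voxel_volume_mm3=0.01 ** 3))
    ```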

  7. Novel Schemes for Measurement-Based Quantum Computation

    International Nuclear Information System (INIS)

    Gross, D.; Eisert, J.

    2007-01-01

    We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics--based on finitely correlated or projected entangled pair states--to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems

  8. Novel schemes for measurement-based quantum computation.

    Science.gov (United States)

    Gross, D; Eisert, J

    2007-06-01

    We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics-based on finitely correlated or projected entangled pair states-to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.

  9. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which a local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines the advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  10. Estimation of the flow resistances exerted in coronary arteries using a vessel length-based method.

    Science.gov (United States)

    Lee, Kyung Eun; Kwon, Soon-Sung; Ji, Yoon Cheol; Shin, Eun-Seok; Choi, Jin-Ho; Kim, Sung Joon; Shim, Eun Bo

    2016-08-01

    Flow resistances exerted in the coronary arteries are the key parameters for the image-based computer simulation of coronary hemodynamics. The resistances depend on the anatomical characteristics of the coronary system. A simple and reliable estimation of the resistances is a compulsory procedure to compute the fractional flow reserve (FFR) of stenosed coronary arteries, an important clinical index of coronary artery disease. The cardiac muscle volume reconstructed from computed tomography (CT) images has been used to assess the resistance of the feeding coronary artery (muscle volume-based method). In this study, we estimate the flow resistances exerted in coronary arteries by using a novel method. Based on a physiological observation that longer coronary arteries have more daughter branches feeding a larger mass of cardiac muscle, the method measures the vessel lengths from coronary angiogram or CT images (vessel length-based method) and predicts the coronary flow resistances. The underlying equations are derived from the physiological relation among flow rate, resistance, and vessel length. To validate the present estimation method, we calculate the coronary flow division over coronary major arteries for 50 patients using the vessel length-based method as well as the muscle volume-based one. These results are compared with the direct measurements in a clinical study. Further proving the usefulness of the present method, we compute the coronary FFR from the images of optical coherence tomography.
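
    To make the idea above concrete, here is a rough, hypothetical sketch (not the paper's derived equations): distal flow is shared among branches in proportion to their measured lengths, and each outlet resistance then follows from the perfusion pressure and the branch flow; function name, proportionality, pressure and units are all assumptions.

    ```python
    import numpy as np

    # Hypothetical illustration only: flow share is proportional to vessel length,
    # and the outlet resistance of each branch follows from R_i = P / Q_i.
    def branch_resistances(lengths_mm, total_flow_ml_per_s, perfusion_pressure_mmHg=90.0):
        lengths = np.asarray(lengths_mm, dtype=float)
        flows = total_flow_ml_per_s * lengths / lengths.sum()  # flow share ~ vessel length
        return perfusion_pressure_mmHg / flows                 # outlet resistance per branch

    print(branch_resistances([120.0, 80.0, 60.0], total_flow_ml_per_s=4.0))
    ```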

  11. A qualitative and quantitative laser-based computer-aided flow visualization method. M.S. Thesis, 1992 Final Report

    Science.gov (United States)

    Canacci, Victor A.; Braun, M. Jack

    1994-01-01

    The experimental approach presented here offers a nonintrusive, qualitative and quantitative evaluation of full field flow patterns applicable in various geometries in a variety of fluids. This Full Flow Field Tracking (FFFT) Particle Image Velocimetry (PIV) technique, by means of particle tracers illuminated by a laser light sheet, offers an alternative to Laser Doppler Velocimetry (LDV), and intrusive systems such as Hot Wire/Film Anemometry. The method makes obtainable the flow patterns, and allows quantitative determination of the velocities, accelerations, and mass flows of an entire flow field. The method uses a computer based digitizing system attached through an imaging board to a low luminosity camera. A customized optical train allows the system to become a long distance microscope (LDM), allowing magnifications of areas of interest ranging up to 100 times. Presented in addition to the method itself, are studies in which the flow patterns and velocities were observed and evaluated in three distinct geometries, with three different working fluids. The first study involved pressure and flow analysis of a brush seal in oil. The next application involved studying the velocity and flow patterns in a cowl lip cooling passage of an air breathing aircraft engine using water as the working fluid. Finally, the method was extended to a study in air to examine the flows in a staggered pin arrangement located on one side of a branched duct.

  12. Accelerated H-LBP-based edge extraction method for digital radiography

    Energy Technology Data Exchange (ETDEWEB)

    Qiao, Shuang; Zhao, Chen-yi; Huang, Ji-peng [School of Physics, Northeast Normal University, Changchun 130024 (China); Sun, Jia-ning, E-mail: sunjn118@nenu.edu.cn [School of Mathematics and Statistics, Northeast Normal University, Changchun 130024 (China)

    2015-01-11

    With the goal of achieving real time and efficient edge extraction for digital radiography, an accelerated H-LBP-based edge extraction method (AH-LBP) is presented in this paper by improving the existing framework of local binary pattern with the H function (H-LBP). Since the proposed method avoids computationally expensive operations with no loss of quality, it possesses much lower computational complexity than H-LBP. Experimental results on real radiographies show desirable performance of our method. - Highlights: • An accelerated H-LBP method for edge extraction on digital radiography is proposed. • The novel AH-LBP relies on numerical analysis of the existing H-LBP method. • Aiming at accelerating, H-LBP is reformulated as a direct binary processing. • AH-LBP provides the same edge extraction result as H-LBP does. • AH-LBP has low computational complexity satisfying real time requirements.

  13. Computer aided method for colour calibration and analysis of digital rock photographs

    Directory of Open Access Journals (Sweden)

    Matic Potočnik

    2015-12-01

    Full Text Available The methods used in geology to determine colour and colour coverage are expensive, time consuming, and/or subjective. Estimates of colour coverage can only be approximate since they are based on rough comparison-based measuring etalons and subjective estimation, which is dependent upon the skill and experience of the person performing the estimation. We present a method which accelerates, simplifies, and objectifies these tasks using a computer application. It automatically calibrates the colours of a digital photo, and enables the user to read colour values and coverage, even after returning from field work. Colour identification is based on the Munsell colour system. For the purposes of colour calibration we use the X-Rite ColorChecker Passport colour chart placed onto the photographed scene. Our computer application detects the ColorChecker colour chart, and finds a colour space transformation to calibrate the colour in the photo. The user can then use the application to read colours within selected points or regions of the photo. The results of the computerised colour calibration were compared to the reference values of the ColorChecker chart. The values slightly deviate from the exact values, but the deviation is around the limit of human capability for visual comparison. We have devised an experiment which compares the precision of the computerised colour analysis and manual colour analysis performed on a variety of rock samples with the help of geology students using the Munsell Rock-color Chart. The analysis showed that the precision of manual comparative identification on multicoloured samples is somewhat problematic, since the choice of representative colours and observation points for a certain part of a sample is subjective. The computer-based method has the edge in verifiability and repeatability of the analysis, since the application allows the original photo to be saved with colour calibration, and tagging of colour-analysed points and regions.
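
    The colour-calibration step lends itself to a small worked example: fit a least-squares transformation between the patch colours measured in the photo and the ColorChecker reference values, then apply it to the whole image. This is a minimal sketch of the general idea, assuming a simple affine model; it is not the application's actual algorithm, and the function names are made up.

    ```python
    import numpy as np

    def fit_colour_transform(measured, reference):
        """Least-squares affine correction (4x3 matrix) mapping patch colours
        measured in the photo to the ColorChecker reference values.
        measured, reference: (N, 3) arrays of RGB values for the N chart patches."""
        ones = np.ones((measured.shape[0], 1))
        X = np.hstack([measured, ones])                    # (N, 4)
        M, *_ = np.linalg.lstsq(X, reference, rcond=None)  # (4, 3)
        return M

    def apply_colour_transform(image, M):
        """Apply the fitted correction to an (H, W, 3) float image in [0, 1]."""
        h, w, _ = image.shape
        pix = np.hstack([image.reshape(-1, 3), np.ones((h * w, 1))]) @ M
        return np.clip(pix, 0.0, 1.0).reshape(h, w, 3)
    ```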

  14. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    Science.gov (United States)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  15. A blending of computer-based assessment and performance-based assessment: Multimedia-Based Performance Assessment (MBPA). The introduction of a new method of assessment in Dutch Vocational Education and Training (VET)

    NARCIS (Netherlands)

    de Klerk, Sebastiaan; Eggen, Theodorus Johannes Hendrikus Maria; Veldkamp, Bernard P.

    2014-01-01

    Innovation in technology drives innovation in assessment. Since the introduction of computer-based assessment (CBA), a few decades ago, many formerly paper-and-pencil tests have transformed in a computer-based equivalent. CBAs are becoming more complex, including multimedia and simulative elements

  16. Modeling soft factors in computer-based wargames

    Science.gov (United States)

    Alexander, Steven M.; Ross, David O.; Vinarskai, Jonathan S.; Farr, Steven D.

    2002-07-01

    Computer-based wargames have seen much improvement in recent years due to rapid increases in computing power. Because these games have been developed for the entertainment industry, most of these advances have centered on the graphics, sound, and user interfaces integrated into these wargames with less attention paid to the game's fidelity. However, for a wargame to be useful to the military, it must closely approximate as many of the elements of war as possible. Among the elements that are typically not modeled or are poorly modeled in nearly all military computer-based wargames are systematic effects, command and control, intelligence, morale, training, and other human and political factors. These aspects of war, with the possible exception of systematic effects, are individually modeled quite well in many board-based commercial wargames. The work described in this paper focuses on incorporating these elements from the board-based games into a computer-based wargame. This paper will also address the modeling and simulation of the systemic paralysis of an adversary that is implied by the concept of Effects Based Operations (EBO). Combining the fidelity of current commercial board wargames with the speed, ease of use, and advanced visualization of the computer can significantly improve the effectiveness of military decision making and education. Once in place, the process of converting board wargames concepts to computer wargames will allow the infusion of soft factors into military training and planning.

  17. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    Full Text Available An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, where the vector extrapolation method is its accelerator. We show how to periodically combine the extrapolation method together with the multilevel aggregation method on the finest level for speeding up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
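
    As a minimal illustration of the "extrapolation as accelerator" idea, the sketch below applies a componentwise Aitken delta-squared step periodically to plain power iteration on the finest level; the multilevel aggregation itself is not reproduced, and the safeguards and parameters are illustrative assumptions.

    ```python
    import numpy as np

    def pagerank_power_aitken(P, alpha=0.85, tol=1e-10, extrap_every=10, max_iter=1000):
        """Power iteration for the PageRank vector of a column-stochastic matrix P,
        with a periodic componentwise Aitken delta-squared extrapolation step."""
        n = P.shape[0]
        x = np.full(n, 1.0 / n)
        prev = [x]
        for k in range(1, max_iter + 1):
            x_new = alpha * (P @ x) + (1.0 - alpha) / n
            prev.append(x_new)
            if len(prev) >= 3 and k % extrap_every == 0:
                x0, x1, x2 = prev[-3], prev[-2], prev[-1]
                denom = x2 - 2.0 * x1 + x0
                safe = np.abs(denom) > 1e-12        # guard against tiny denominators
                x_acc = x2.copy()
                x_acc[safe] -= (x2[safe] - x1[safe]) ** 2 / denom[safe]
                x_new = np.maximum(x_acc, 0.0)
                x_new /= x_new.sum()                # renormalize to a probability vector
            if np.abs(x_new - x).sum() < tol:
                return x_new
            x = x_new
            prev = prev[-3:]
        return x
    ```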

  18. A GPU-based mipmapping method for water surface visualization

    Science.gov (United States)

    Li, Hua; Quan, Wei; Xu, Chao; Wu, Yan

    2018-03-01

    Visualization of the water surface is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide range of water surface with good image quality both near and far from the viewpoint. This method utilizes a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, which adjusts the resolution with respect to the distance from the viewpoint and reduces the computing cost. Lighting effects are computed based on shadow mapping, Snell's law and the Fresnel term. The render pipeline utilizes a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes the water surface with good image quality at real-time frame rates.
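
    Two ingredients mentioned in the abstract can be sketched compactly: a Fresnel term via Schlick's approximation (a common substitute for the exact Fresnel equations; whether the paper uses it is an assumption) and a rough distance-driven mipmap level selection. The constants and the LOD rule below are illustrative only.

    ```python
    import numpy as np

    def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
        """Schlick's approximation of the Fresnel reflectance at an air-water
        interface (n2 = 1.33); cos_theta is the cosine of the view angle."""
        r0 = ((n1 - n2) / (n1 + n2)) ** 2
        return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

    def mip_level(distance, texel_world_size, pixel_footprint, max_level):
        """Rough distance-driven mipmap level: farther water fragments sample
        coarser texture levels, which cuts texture bandwidth."""
        lod = np.log2(np.maximum(distance * pixel_footprint / texel_world_size, 1.0))
        return np.clip(lod, 0, max_level)
    ```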

  19. Computational Methods for ChIP-seq Data Analysis and Applications

    KAUST Repository

    Ashoor, Haitham

    2017-04-25

    The development of Chromatin immunoprecipitation followed by sequencing (ChIP-seq) technology has enabled the construction of genome-wide maps of protein-DNA interaction. Such maps provide information about transcriptional regulation at the epigenetic level (histone modifications and histone variants) and at the level of transcription factor (TF) activity. This dissertation presents novel computational methods for ChIP-seq data analysis and applications. The work of this dissertation addresses four main challenges. First, I address the problem of detecting histone modifications from ChIP-seq cancer samples. The presence of copy number variations (CNVs) in cancer samples results in statistical biases that lead to inaccurate predictions when standard methods are used. To overcome this issue I developed HMCan, a specially designed algorithm to handle ChIP-seq cancer data by accounting for the presence of CNVs. When using ChIP-seq data from cancer cells, HMCan demonstrates unbiased and accurate predictions compared to the standard state of the art methods. Second, I address the problem of identifying changes in histone modifications between two ChIP-seq samples with different genetic backgrounds (for example cancer vs. normal). In addition to CNVs, different antibody efficiency between samples and presence of samples replicates are challenges for this problem. To overcome these issues, I developed the HMCan-diff algorithm as an extension to HMCan. HMCan-diff implements robust normalization methods to address the challenges listed above. HMCan-diff significantly outperforms another state of the art methods on data containing cancer samples. Third, I investigate and analyze predictions of different methods for enhancer prediction based on ChIP-seq data. The analysis shows that predictions generated by different methods are poorly overlapping. To overcome this issue, I developed DENdb, a database that integrates enhancer predictions from different methods. DENdb also

  20. Computational methods for industrial radiation measurement applications

    International Nuclear Information System (INIS)

    Gardner, R.P.; Guo, P.; Ao, Q.

    1996-01-01

    Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments.

  1. Classroom versus Computer-Based CPR Training: A Comparison of the Effectiveness of Two Instructional Methods

    Science.gov (United States)

    Rehberg, Robb S.; Gazzillo Diaz, Linda; Middlemas, David A.

    2009-01-01

    Objective: The objective of this study was to determine whether computer-based CPR training is comparable to traditional classroom training. Design and Setting: This study was quantitative in design. Data was gathered from a standardized examination and skill performance evaluation which yielded numerical scores. Subjects: The subjects were 64…

  2. Computationally efficient methods for digital control

    NARCIS (Netherlands)

    Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.

    2008-01-01

    The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these

  3. [Veneer computer aided design based on reverse engineering technology].

    Science.gov (United States)

    Liu, Ming-li; Chen, Xiao-dong; Wang, Yong

    2012-03-01

    To explore a computer aided design (CAD) method for veneer restoration, and to assess whether the solution can help the prosthesis meet morphological esthetic standards. A volunteer's upper right central incisor needed to be restored with a veneer. Super hard stone models of the patient's dentition (before and after tooth preparation) were scanned with a three-dimensional laser scanner. The veneer margin was designed as a butt-to-butt type. The veneer was constructed using reverse engineering (RE) software. A technique guideline for veneer CAD was explored based on the RE software, and the resulting veneer was smooth, continuous and symmetrical, which met the esthetic construction needs. It was a feasible method to reconstruct a veneer restoration based on RE technology.

  4. Gradient-based methods for production optimization of oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic in this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output constraint problems. These kinds of constraints are natural in production optimization. Limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraints gradient (Jacobian) which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. In order to speed up the convergence rate in the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of the output constraints. Two kinds of model order reduction techniques are applied to build surrogate models. These are proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM).
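
    The Hessian-times-vector idea can be illustrated generically: approximate H(x)v by a finite difference of the (adjoint) gradient and feed it to a conjugate gradient loop, as in truncated Newton methods. The gradient callable, step sizes and iteration counts below are placeholders, not the thesis implementation.

    ```python
    import numpy as np

    def hessian_vector_product(grad, x, v, eps=1e-6):
        """Approximate H(x) @ v with a central difference of the gradient; 'grad'
        stands in for the adjoint-based gradient of the production objective."""
        return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

    def newton_cg_direction(grad, x, cg_iters=20, tol=1e-8):
        """Approximately solve H p = -g with conjugate gradients, using only
        Hessian-times-vector products (truncated Newton)."""
        g = grad(x)
        p = np.zeros_like(x)
        r = -g.copy()                     # residual of H p = -g with p = 0
        d = r.copy()
        for _ in range(cg_iters):
            Hd = hessian_vector_product(grad, x, d)
            dHd = d @ Hd
            if dHd <= 0.0:                # negative curvature: keep current direction
                break
            step = (r @ r) / dHd
            p += step * d
            r_new = r - step * Hd
            if np.linalg.norm(r_new) < tol:
                break
            d = r_new + ((r_new @ r_new) / (r @ r)) * d
            r = r_new
        return p                          # search direction; combine with a line search
    ```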

  5. Semantic computing and language knowledge bases

    Science.gov (United States)

    Wang, Lei; Wang, Houfeng; Yu, Shiwen

    2017-09-01

    With the proposition of the next-generation Web, the Semantic Web, semantic computing has been drawing more and more attention in both academia and industry. A lot of research has been conducted on the theory and methodology of the subject, and potential applications have also been investigated and proposed in many fields. The progress of semantic computing made so far cannot be detached from its supporting pivot: language resources, for instance language knowledge bases. This paper proposes three perspectives of semantic computing from a macro view and describes the current status of the construction of language knowledge bases and the related research and applications that have been carried out on the basis of these resources, via a case study in the Institute of Computational Linguistics at Peking University.

  6. Constructing a Computer Model of the Human Eye Based on Tissue Slice Images

    OpenAIRE

    Dai, Peishan; Wang, Boliang; Bao, Chunbo; Ju, Ying

    2010-01-01

    Computer simulation of biomechanical and biological heat transfer in ophthalmology greatly relies on having a reliable computer model of the human eye. This paper proposes a novel method for the construction of a geometric model of the human eye based on tissue slice images. Slice images were obtained from an in vitro Chinese human eye through embryo specimen processing methods. A level set algorithm was used to extract contour points of eye tissues while a principal component analysi...

  7. Computer assisted analysis of research-based teaching method in English newspaper reading teaching

    Science.gov (United States)

    Jie, Zheng

    2017-06-01

    In recent years, the teaching of English newspaper reading has been developing rapidly. However, the teaching effect of the existing course is not ideal. This paper tries to apply the research-based teaching model to the teaching of English newspaper reading, investigates the current situation in higher vocational colleges, and analyzes the problems. It designs a teaching model for English newspaper reading and carries out computer-assisted empirical research. The results show that the teaching mode can use knowledge and ability to stimulate learners' interest and comprehensively improve their ability to read newspapers.

  8. Computational Fluid Dynamics Analysis Method Developed for Rocket-Based Combined Cycle Engine Inlet

    Science.gov (United States)

    1997-01-01

    Renewed interest in hypersonic propulsion systems has led to research programs investigating combined cycle engines that are designed to operate efficiently across the flight regime. The Rocket-Based Combined Cycle Engine is a propulsion system under development at the NASA Lewis Research Center. This engine integrates a high specific impulse, low thrust-to-weight, airbreathing engine with a low-impulse, high thrust-to-weight rocket. From takeoff to Mach 2.5, the engine operates as an air-augmented rocket. At Mach 2.5, the engine becomes a dual-mode ramjet; and beyond Mach 8, the rocket is turned back on. One Rocket-Based Combined Cycle Engine variation known as the "Strut-Jet" concept is being investigated jointly by NASA Lewis, the U.S. Air Force, Gencorp Aerojet, General Applied Science Labs (GASL), and Lockheed Martin Corporation. Work thus far has included wind tunnel experiments and computational fluid dynamics (CFD) investigations with the NPARC code. The CFD method was initiated by modeling the geometry of the Strut-Jet with the GRIDGEN structured grid generator. Grids representing a subscale inlet model and the full-scale demonstrator geometry were constructed. These grids modeled one-half of the symmetric inlet flow path, including the precompression plate, diverter, center duct, side duct, and combustor. After the grid generation, full Navier-Stokes flow simulations were conducted with the NPARC Navier-Stokes code. The Chien low-Reynolds-number k-e turbulence model was employed to simulate the high-speed turbulent flow. Finally, the CFD solutions were postprocessed with a Fortran code. This code provided wall static pressure distributions, pitot pressure distributions, mass flow rates, and internal drag. These results were compared with experimental data from a subscale inlet test for code validation; then they were used to help evaluate the demonstrator engine net thrust.

  9. A New Soft Computing Method for K-Harmonic Means Clustering.

    Science.gov (United States)

    Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe

    2016-01-01

    The K-harmonic means clustering algorithm (KHM) is a new clustering method used to group data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. From the computational results, the comparison appears to support the superiority of the proposed iSSO-KHM over previously developed algorithms for all experiments in the literature.
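
    For reference, the objective KHM minimizes and the classical centroid update that the iSSO metaheuristic is compared against can be sketched as follows; the exponent p and the numerical safeguards are conventional choices, not necessarily those used in the paper.

    ```python
    import numpy as np

    def khm_objective(X, centers, p=3.5):
        """KHM performance function: K divided by the sum of inverse p-th powers of
        the point-to-centroid distances, summed over all data points."""
        d = np.maximum(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), 1e-12)
        return float(np.sum(centers.shape[0] / np.sum(d ** (-p), axis=1)))

    def khm_center_update(X, centers, p=3.5):
        """One centroid update of the classical KHM iteration (harmonic-mean weights)."""
        d = np.maximum(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), 1e-12)
        w = d ** (-p - 2) / np.sum(d ** (-p), axis=1, keepdims=True) ** 2   # (n, K)
        return (w.T @ X) / np.sum(w, axis=0)[:, None]
    ```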

  10. General rigid motion correction for computed tomography imaging based on locally linear embedding

    Science.gov (United States)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can damage the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method based on the principle of locally linear embedding (LLE) from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.

  11. Metaheuristic Based Scheduling Meta-Tasks in Distributed Heterogeneous Computing Systems

    Directory of Open Access Journals (Sweden)

    Hesam Izakian

    2009-07-01

    Full Text Available Scheduling is a key problem in distributed heterogeneous computing systems in order to benefit from the large computing capacity of such systems, and it is an NP-complete problem. In this paper, we present a metaheuristic technique, namely the Particle Swarm Optimization (PSO) algorithm, for this problem. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly in the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing the makespan, which is the time at which the latest task finishes. Experimental studies show that the proposed method is more efficient and surpasses the reported PSO and GA approaches for this problem.
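
    The fitness such a scheduler minimizes is the makespan of a task-to-machine assignment. A minimal sketch under an assumed expected-time-to-compute (ETC) matrix is shown below; the PSO itself (particle positions and velocities) is omitted and would simply evaluate this function for every candidate assignment.

    ```python
    import numpy as np

    def makespan(assignment, etc):
        """Makespan of a meta-task schedule: assignment[i] is the machine chosen for
        task i, etc is the (n_tasks, n_machines) expected-time-to-compute matrix."""
        ready = np.zeros(etc.shape[1])
        for task, machine in enumerate(assignment):
            ready[machine] += etc[task, machine]
        return ready.max()

    # A PSO (or GA) scheduler treats an assignment vector as a particle position and
    # uses makespan(...) as the fitness to minimize.
    rng = np.random.default_rng(0)
    etc = rng.uniform(1.0, 10.0, size=(8, 3))
    print(makespan(rng.integers(0, 3, size=8), etc))
    ```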

  12. Computer Assisted Instructional Design for Computer-Based Instruction. Final Report. Working Papers.

    Science.gov (United States)

    Russell, Daniel M.; Pirolli, Peter

    Recent advances in artificial intelligence and the cognitive sciences have made it possible to develop successful intelligent computer-aided instructional systems for technical and scientific training. In addition, computer-aided design (CAD) environments that support the rapid development of such computer-based instruction have also been recently…

  13. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm for multivariate probability distribution random generation, rather than as a function optimizer. Finally, some relevant applications of genetic algorithm to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, design of experiments.

  14. An Entropy-Based Network Anomaly Detection Method

    Directory of Open Access Journals (Sweden)

    Przemysław Bereziński

    2015-04-01

    Full Text Available Data mining is an interdisciplinary subfield of computer science involving methods at the intersection of artificial intelligence, machine learning and statistics. One of the data mining tasks is anomaly detection, which is the analysis of large quantities of data to identify items, events or observations which do not conform to an expected pattern. Anomaly detection is applicable in a variety of domains, e.g., fraud detection, fault detection, and system health monitoring, but this article focuses on the application of anomaly detection in the field of network intrusion detection. The main goal of the article is to prove that an entropy-based approach is suitable to detect modern botnet-like malware based on anomalous patterns in the network. This aim is achieved by realization of the following points: (i) preparation of a concept of an original entropy-based network anomaly detection method, (ii) implementation of the method, (iii) preparation of an original dataset, and (iv) evaluation of the method.
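
    The core measurement is easy to sketch: compute the Shannon entropy of a traffic feature per time window and flag windows whose entropy deviates from a baseline. The thresholding rule and feature choice below are assumptions for illustration, not the article's detector.

    ```python
    import numpy as np
    from collections import Counter

    def shannon_entropy(values):
        """Empirical Shannon entropy (bits) of a categorical feature, e.g. the
        destination ports or source addresses observed in one time window."""
        counts = np.array(list(Counter(values).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def flag_anomalous_windows(windows, k=3.0):
        """Flag windows whose entropy deviates from the mean by more than k standard
        deviations; a simple thresholding rule, not the article's detector."""
        h = np.array([shannon_entropy(w) for w in windows])
        return np.abs(h - h.mean()) > k * (h.std() + 1e-12)
    ```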

  15. Fast subcellular localization by cascaded fusion of signal-based and homology-based methods

    Directory of Open Access Journals (Sweden)

    Wang Wei

    2011-10-01

    Full Text Available Abstract Background The functions of proteins are closely related to their subcellular locations. In the post-genomics era, the amount of gene and protein data grows exponentially, which necessitates the prediction of subcellular localization by computational means. Results This paper proposes mitigating the computation burden of alignment-based approaches to subcellular localization prediction by a cascaded fusion of cleavage site prediction and profile alignment. Specifically, the informative segments of protein sequences are identified by a cleavage site predictor using the information in their N-terminal sorting signals. Then, the sequences are truncated at the cleavage site positions, and the shortened sequences are passed to PSI-BLAST for computing their profiles. Subcellular localization is subsequently predicted by a profile-to-profile alignment support-vector-machine (SVM) classifier. To further reduce the training and recognition time of the classifier, the SVM classifier is replaced by a new kernel method based on perturbational discriminant analysis (PDA). Conclusions Experimental results on a new dataset based on Swiss-Prot Release 57.5 show that the method can make use of the best properties of signal- and homology-based approaches and can attain an accuracy comparable to that achieved by using full-length sequences. Analysis of profile-alignment score matrices suggests that both profile creation time and profile alignment time can be reduced without significant reduction in subcellular localization accuracy. It was found that PDA enjoys a short training time as compared to the conventional SVM. We advocate that the method will be important for biologists to conduct large-scale protein annotation or for bioinformaticians to perform preliminary investigations on new algorithms that involve pairwise alignments.

  16. Computer methods for transient fluid-structure analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Belytschko, T.; Liu, W.K.

    1985-01-01

    Fluid-structure interaction problems in nuclear engineering are categorized according to the dominant physical phenomena and the appropriate computational methods. Linear fluid models that are considered include acoustic fluids, incompressible fluids undergoing small disturbances, and small amplitude sloshing. Methods available in general-purpose codes for these linear fluid problems are described. For nonlinear fluid problems, the major features of alternative computational treatments are reviewed; some special-purpose and multipurpose computer codes applicable to these problems are then described. For illustration, some examples of nuclear reactor problems that entail coupled fluid-structure analysis are described along with computational results

  17. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  18. Introduction of a computer-based method for automated planning of reduction paths under consideration of simulated muscular forces.

    Science.gov (United States)

    Buschbaum, Jan; Fremd, Rainer; Pohlemann, Tim; Kristen, Alexander

    2017-08-01

    Reduction is a crucial step in the surgical treatment of bone fractures. Finding an optimal path for restoring anatomical alignment is considered technically demanding because collisions as well as high forces caused by surrounding soft tissues can prevent desired reduction movements. The repetition of reduction movements leads to a trial-and-error process which causes a prolonged duration of surgery. By planning an appropriate reduction path, an optimal sequence of target-directed movements, these problems should be overcome. For this purpose, a computer-based method has been developed. Using the example of simple femoral shaft fractures, 3D models are generated out of CT images. A reposition algorithm aligns both fragments by reconstructing their broken edges. According to the criteria of a deduced planning strategy, a modified A* algorithm searches for a collision-free route of minimal force from the dislocated position into the computed target position. Muscular forces are considered using a musculoskeletal reduction model (OpenSim model), and bone collisions are detected by an appropriate method. Five femoral SYNBONE models were broken into different fracture classification types and were automatically reduced from ten randomly selected displaced positions. The highest mean translational and rotational errors for achieving target alignment are [Formula: see text] and [Formula: see text]. The mean value and standard deviation of the occurring forces are [Formula: see text] for M. tensor fasciae latae and [Formula: see text] for M. semitendinosus over all trials. These pathways are precise and collision-free, the required forces are minimized, and they are thus regarded as optimal paths. A novel method for planning reduction paths under consideration of collisions and muscular forces is introduced. The results deliver additional knowledge for an appropriate tactical reduction procedure and can provide a basis for further navigated or robotic-assisted developments.
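
    The path search can be illustrated with a minimal A* skeleton over discretized fragment poses, where the edge cost stands for the simulated muscular force of a move and a collision test prunes infeasible poses. All callbacks below are hypothetical placeholders, not the paper's implementation.

    ```python
    import heapq
    import itertools

    def a_star_reduction_path(start, goal, neighbors, move_cost, heuristic, in_collision):
        """Minimal A* over a discretized set of fragment poses. 'neighbors',
        'move_cost', 'heuristic' and 'in_collision' are application-specific
        placeholders (e.g. force model, remaining pose distance, collision test)."""
        tie = itertools.count()                 # tie-breaker so poses never get compared
        frontier = [(heuristic(start, goal), 0.0, next(tie), start, [start])]
        visited = set()
        while frontier:
            _, g, _, pose, path = heapq.heappop(frontier)
            if pose == goal:
                return path
            if pose in visited:
                continue
            visited.add(pose)
            for nxt in neighbors(pose):
                if nxt in visited or in_collision(nxt):
                    continue
                g_next = g + move_cost(pose, nxt)
                heapq.heappush(frontier, (g_next + heuristic(nxt, goal), g_next,
                                          next(tie), nxt, [*path, nxt]))
        return None                             # no collision-free path found
    ```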

  19. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  20. Study on Parallel Processing for Efficient Flexible Multibody Analysis based on Subsystem Synthesis Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jong-Boo; Song, Hajun; Kim, Sung-Soo [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-06-15

    Flexible multibody simulations are widely used in the industry to design mechanical systems. In flexible multibody dynamics, deformation coordinates are described either relatively in the body reference frame that is floating in the space or in the inertial reference frame. Moreover, these deformation coordinates are generated based on the discretization of the body according to the finite element approach. Therefore, the formulation of the flexible multibody system always deals with a huge number of degrees of freedom and the numerical solution methods require a substantial amount of computational time. Parallel computational methods are a solution for efficient computation. However, most of the parallel computational methods are focused on the efficient solution of large-sized linear equations. For multibody analysis, we need to develop an efficient formulation that could be suitable for parallel computation. In this paper, we developed a subsystem synthesis method for a flexible multibody system and proposed efficient parallel computational schemes based on the OpenMP API in order to achieve efficient computation. Simulations of a rotating blade system, which consists of three identical blades, were carried out with two different parallel computational schemes. Actual CPU times were measured to investigate the efficiency of the proposed parallel schemes.

  1. Data analysis through interactive computer animation method (DATICAM)

    International Nuclear Information System (INIS)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process

  2. A computer-based anaglyphic system for the treatment of amblyopia

    Directory of Open Access Journals (Sweden)

    Rastegarpour A

    2011-09-01

    Full Text Available Ali Rastegarpour, Ophthalmic Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran. Purpose: Virtual reality (VR)-based treatment has been introduced as a potential option for amblyopia management, presumably without involving the problems of occlusion and penalization, including variable and unsatisfactory outcomes, long duration of treatment, poor compliance, psychological impact, and complications. However, VR-based treatment is costly and not accessible for most children. This paper introduces a method that encompasses the advantages of VR-based treatment at a lower cost. Methods: The presented system consists of a pair of glasses with two color filters and software for use on a personal computer. The software is designed such that some active graphic components can only be seen by the amblyopic eye and are filtered out for the other eye. Some components would be seen by both to encourage fusion. The result is that the patient must use both eyes, and specifically the amblyopic eye, to play the games. Results: A prototype of the system, the ABG InSight, was found capable of successfully filtering out elements of a certain color and, therefore, could prove to be a viable alternative to VR-based treatment for amblyopia. Conclusion: The anaglyphic system maintains most of the advantages of VR-based systems, but is less costly and highly accessible. It fulfills the aims that VR-based systems are designed to achieve, and warrants further investigation. Keywords: amblyopia, computer-based, open source, virtual reality, color filters, 3-D

  3. A Novel Query Method for Spatial Data in Mobile Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Guangsheng Chen

    2018-01-01

    Full Text Available With the development of network communication and a 1000-fold increase in traffic demand from 4G to 5G, it is critical to provide an efficient and fast spatial data access interface for applications in the mobile environment. In view of the low I/O efficiency and high latency of existing methods, this paper presents a memory-based spatial data query method that uses the distributed memory file system Alluxio to store data and builds a two-level index based on the Alluxio key-value structure; it thereby aims to solve the efficiency problem of the traditional method. According to the characteristics of the Spark computing framework, a data input format for spatial data query is proposed, which can selectively read the file data and reduce the data I/O. The comparative experiments show that the memory-based file system Alluxio has better I/O performance than the disk file system; compared with the traditional distributed query method, the proposed method greatly reduces the retrieval time.
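
    The two-level index can be sketched with an in-memory toy: the first level maps a grid cell key to a bucket, the second level maps object ids to coordinates inside the bucket, and a range query touches only candidate cells. A plain dictionary stands in for the Alluxio key-value store; the cell size and class names are assumptions.

    ```python
    from collections import defaultdict

    CELL = 1.0  # grid cell size of the first-level index (assumed)

    class TwoLevelIndex:
        """Toy two-level spatial index: level 1 maps a grid cell key to a bucket,
        level 2 maps object ids to coordinates inside that bucket."""

        def __init__(self):
            self.cells = defaultdict(dict)          # (cx, cy) -> {obj_id: (x, y)}

        @staticmethod
        def _cell(x, y):
            return (int(x // CELL), int(y // CELL))

        def insert(self, obj_id, x, y):
            self.cells[self._cell(x, y)][obj_id] = (x, y)

        def range_query(self, xmin, ymin, xmax, ymax):
            cx0, cy0 = self._cell(xmin, ymin)
            cx1, cy1 = self._cell(xmax, ymax)
            hits = []
            for cx in range(cx0, cx1 + 1):          # only candidate cells are touched
                for cy in range(cy0, cy1 + 1):
                    for obj_id, (x, y) in self.cells.get((cx, cy), {}).items():
                        if xmin <= x <= xmax and ymin <= y <= ymax:
                            hits.append(obj_id)
            return hits
    ```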

  4. An Augmented Fast Marching Method for Computing Skeletons and Centerlines

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2002-01-01

    We present a simple and robust method for computing skeletons for arbitrary planar objects and centerlines for 3D objects. We augment the Fast Marching Method (FMM) widely used in level set applications by computing the parameterized boundary location every pixel came from during the boundary

  5. Numerical computer methods part E

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.

  6. A combined vector potential-scalar potential method for FE computation of 3D magnetic fields in electrical devices with iron cores

    Science.gov (United States)

    Wang, R.; Demerdash, N. A.

    1991-01-01

    A method of combined use of magnetic vector potential based finite-element (FE) formulations and magnetic scalar potential (MSP) based formulations for computation of three-dimensional magnetostatic fields is introduced. In this method, the curl-component of the magnetic field intensity is computed by a reduced magnetic vector potential. This field intensity forms the basis of a forcing function for a global magnetic scalar potential solution over the entire volume of the region. This method allows one to include iron portions sandwiched in between conductors within partitioned current-carrying subregions. The method is most suited for large-scale global-type 3-D magnetostatic field computations in electrical devices, and in particular rotating electric machinery.

  7. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development in a single personal computer environment is proposed. The characteristic of the proposed method is constructing a full prototype Manufacturing Grid application system hosted on a single personal computer with virtual machine technology. Firstly, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Secondly, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. Then, we can obtain a prototype Manufacturing Grid application system running on a single personal computer, and can carry out experiments on this foundation. Compared with the known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost and simple operation, and can obtain convincing experimental results easily. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability. It can be migrated to the real application environment rapidly.

  8. A novel computer-aided method to fabricate a custom one-piece glass fiber dowel-and-core based on digitized impression and crown preparation data.

    Science.gov (United States)

    Chen, Zhiyu; Li, Ya; Deng, Xuliang; Wang, Xinzhi

    2014-06-01

    Fiber-reinforced composite dowels have been widely used for their superior biomechanical properties; however, their preformed shape cannot fit irregularly shaped root canals. This study aimed to describe a novel computer-aided method to create a custom-made one-piece dowel-and-core based on the digitization of impressions and clinical standard crown preparations. A standard maxillary die stone model containing three prepared teeth (maxillary lateral incisor, canine, premolar), each requiring a dowel restoration, was made. It was then mounted on an average value articulator with the mandibular stone model to simulate natural occlusion. Impressions for each tooth were obtained using vinylpolysiloxane with a sectional dual-arch tray and digitized with an optical scanner. The dowel-and-core virtual model was created by slicing 3D dowel data from impression digitization with core data selected from a standard crown preparation database of 107 records collected from clinics and digitized. The position of the chosen digital core was manually regulated to coordinate with the adjacent teeth to fulfill the crown restorative requirements. Based on virtual models, one-piece custom dowel-and-cores for three experimental teeth were milled from a glass fiber block with computer-aided manufacturing techniques. Furthermore, two patients were treated to evaluate the practicality of this new method. The one-piece glass fiber dowel-and-core made for the experimental teeth fulfilled the clinical requirements for dowel restorations. Moreover, two patients were treated to validate the technique. This novel computer-aided method to create a custom one-piece glass fiber dowel-and-core proved to be practical and efficient. © 2013 by the American College of Prosthodontists.

  9. Classification Method to Define Synchronization Capability Limits of Line-Start Permanent-Magnet Motor Using Mesh-Based Magnetic Equivalent Circuit Computation Results

    Directory of Open Access Journals (Sweden)

    Bart Wymeersch

    2018-04-01

    Full Text Available Line-start permanent magnet synchronous motors (LS-PMSM) are energy-efficient synchronous motors that can start asynchronously due to a squirrel cage in the rotor. The drawback with this motor type, however, is the chance of failure to synchronize after start-up. To identify the problem, and the stable operation limits, the synchronization at various parameter combinations is investigated. For accurate knowledge of the operation limits to assure synchronization with the utility grid, an accurate classification of parameter combinations is needed. Since many simulations have to be executed for this, a rapid evaluation method is indispensable. Several modeling methods exist to simulate the dynamic behavior in the time domain. In this paper, a discussion is held with respect to different modeling methods. In order to include spatial factors and magnetic nonlinearities on the one hand, and to restrict the computation time on the other hand, a magnetic equivalent circuit (MEC) modeling method is developed. In order to accelerate numerical convergence, a mesh-based analysis method is applied. The novelty in this paper is the implementation of a support vector machine (SVM) to classify the results of simulations at various parameter combinations into successful or unsuccessful synchronization, in order to define the synchronization capability limits. It is explained how these techniques can benefit the simulation time and the evaluation process. The results of the MEC modeling correspond to those obtained with finite element analysis (FEA), despite the reduced computation time. In addition, simulation results obtained with MEC modeling are experimentally validated.
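
    The classification step can be sketched with an off-the-shelf SVM: each simulated parameter combination becomes a feature vector labeled by whether the motor synchronized, and the fitted decision boundary approximates the synchronization capability limit. The features and labels below are synthetic stand-ins, not the paper's MEC results.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical example: each row is one simulated parameter combination (here
    # load torque and load inertia; the actual parameters are assumptions), and y
    # records whether that simulation reached synchronism (1) or not (0).
    rng = np.random.default_rng(0)
    X = rng.uniform([0.0, 0.0], [10.0, 0.05], size=(200, 2))
    y = (X[:, 0] + 150.0 * X[:, 1] < 9.0).astype(int)      # toy stand-in for MEC results

    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X, y)

    # The learned decision boundary approximates the synchronization capability
    # limit in the parameter plane; new combinations are classified in one call.
    print(clf.predict([[4.0, 0.02]]))
    ```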

  10. Ammonia-based quantum computer

    International Nuclear Information System (INIS)

    Ferguson, Andrew J.; Cain, Paul A.; Williams, David A.; Briggs, G. Andrew D.

    2002-01-01

    We propose a scheme for quantum computation using two eigenstates of ammonia or similar molecules. Individual ammonia molecules are confined inside fullerenes and used as two-level qubit systems. Interaction between these ammonia qubits takes place via the electric dipole moments, and in particular we show how a controlled-NOT gate could be implemented. After computation the qubit is measured with a single-electron electrometer sensitive enough to differentiate between the dipole moments of different states. We also discuss a possible implementation based on a quantum cellular automaton

  11. EEG-Based Computer Aided Diagnosis of Autism Spectrum Disorder Using Wavelet, Entropy, and ANN

    Directory of Open Access Journals (Sweden)

    Ridha Djemal

    2017-01-01

    Full Text Available Autism spectrum disorder (ASD) is a type of neurodevelopmental disorder with core impairments in social relationships, communication, imagination, or flexibility of thought, and a restricted repertoire of activities and interests. In this work, a new computer aided diagnosis (CAD) of autism based on electroencephalography (EEG) signal analysis is investigated. The proposed method is based on the discrete wavelet transform (DWT), entropy (En), and an artificial neural network (ANN). DWT is used to decompose EEG signals into approximation and detail coefficients to obtain EEG subbands. The feature vector is constructed by computing Shannon entropy values from each EEG subband. The ANN classifies the corresponding EEG signal as normal or autistic based on the extracted features. The experimental results show the effectiveness of the proposed method for assisting autism diagnosis. A receiver operating characteristic (ROC) curve metric is used to quantify the performance of the proposed method. The proposed method obtained promising results when tested using a real dataset provided by King Abdulaziz Hospital, Jeddah, Saudi Arabia.
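
    The feature-extraction stage can be sketched with PyWavelets: decompose each EEG channel into subbands and compute one Shannon entropy value per subband. The wavelet family, decomposition level and entropy normalization below are assumptions, not the paper's exact settings.

    ```python
    import numpy as np
    import pywt

    def dwt_entropy_features(eeg_channel, wavelet="db4", level=4):
        """Decompose one EEG channel into wavelet subbands and compute a Shannon
        entropy value per subband."""
        coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
        features = []
        for c in coeffs:
            energy = np.asarray(c) ** 2
            p = energy / (energy.sum() + 1e-12)                    # normalized energy distribution
            features.append(float(-(p * np.log2(p + 1e-12)).sum()))
        return np.array(features)

    # The per-subband entropies form the feature vector that a small feed-forward
    # ANN (e.g. sklearn.neural_network.MLPClassifier) would classify as normal or autistic.
    ```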

  12. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  13. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, time of computation and the required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended.

  14. Computer based workstation for development of software for high energy physics experiments

    International Nuclear Information System (INIS)

    Ivanchenko, I.M.; Sedykh, Yu.V.

    1987-01-01

    Methodical principles and results of a successful attempt to create, on the basis of the IBM-PC/AT personal computer, effective means for the development of programs for high energy physics experiments are analysed. The obtained results make it possible to combine the best properties and the positive experience accumulated on existing time-sharing collective systems with the high quality of data representation, reliability and convenience of personal computer applications.

  15. Incompressible Navier-Stokes inverse design method based on adaptive unstructured meshes

    International Nuclear Information System (INIS)

    Rahmati, M.T.; Charlesworth, D.; Zangeneh, M.

    2005-01-01

    An inverse method for blade design based on Navier-Stokes equations on adaptive unstructured meshes has been developed. In the method, unlike the method based on inviscid equations, the effect of viscosity is directly taken into account. In the method, the pressure (or pressure loading) is prescribed. The design method then computes the blade shape that would accomplish the target prescribed pressure distribution. The method is implemented using a cell-centered finite volume method, which solves the incompressible Navier-Stokes equations on unstructured meshes. An adaptive unstructured mesh method based on grid subdivision and local adaptive mesh method is utilized for increasing the accuracy. (author)

  16. 3-D computer graphics based on integral photography.

    Science.gov (United States)

    Naemura, T; Yoshida, T; Harashima, H

    2001-02-12

    Integral photography (IP), which is one of the ideal 3-D photographic technologies, can be regarded as a method of capturing and displaying light rays passing through a plane. The NHK Science and Technical Research Laboratories have developed a real-time IP system using an HDTV camera and an optical fiber array. In this paper, the authors propose a method of synthesizing arbitrary views from IP images captured by the HDTV camera. This is a kind of image-based rendering system, founded on the 4-D data space representation of light rays. Experimental results show the potential to improve the quality of images rendered by computer graphics techniques.

  17. A hybrid method for the parallel computation of Green's functions

    International Nuclear Information System (INIS)

    Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
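
    As a concrete illustration of the kind of serial recurrence the authors set out to parallelize, the classical sweep that yields the diagonal blocks of the inverse of a block tridiagonal matrix can be sketched as follows (a minimal NumPy sketch under an assumed data layout; this is the standard serial algorithm, not the authors' parallel scheme):

        import numpy as np

        def diagonal_blocks_of_inverse(A_diag, A_lower, A_upper):
            # Classical serial recurrence for the diagonal blocks of the inverse of a
            # block tridiagonal matrix. Assumed layout (illustrative, not the paper's):
            #   A_diag[i] = A[i, i],  A_lower[i] = A[i+1, i],  A_upper[i] = A[i, i+1]
            n = len(A_diag)
            # Forward ("left-connected") sweep.
            gL = [np.linalg.inv(A_diag[0])]
            for i in range(1, n):
                gL.append(np.linalg.inv(A_diag[i] - A_lower[i - 1] @ gL[i - 1] @ A_upper[i - 1]))
            # Backward sweep produces the true diagonal blocks of the inverse.
            G = [None] * n
            G[n - 1] = gL[n - 1]
            for i in range(n - 2, -1, -1):
                G[i] = gL[i] + gL[i] @ A_upper[i] @ G[i + 1] @ A_lower[i] @ gL[i]
            return G

    The point of the paper is precisely that this sweep is inherently sequential in the block index, which is what the proposed Schur-complement and cyclic-reduction strategy addresses.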

  18. Effects of feedback in a computer-based learning environment on students’ learning outcomes: a meta-analysis

    NARCIS (Netherlands)

    van der Kleij, Fabienne; Feskens, Remco C.W.; Eggen, Theodorus Johannes Hendrikus Maria

    2015-01-01

    In this meta-analysis, we investigated the effects of methods for providing item-based feedback in a computer-based environment on students’ learning outcomes. From 40 studies, 70 effect sizes were computed, which ranged from −0.78 to 2.29. A mixed model was used for the data analysis. The results

  19. Rediscovering the Economics of Keynes in an Agent-Based Computational Setting

    DEFF Research Database (Denmark)

    Bruun, Charlotte

    The aim of this paper is to use agent-based computational economics to explore the economic thinking of Keynes. Taking his starting point at the macroeconomic level, Keynes argued that economic systems are characterized by fundamental uncertainty - an uncertainty that makes rule-based behaviour...... and reliance on monetary magnitudes more optimal to the economic agent than profit and utility optimization in the traditional sense. Unfortunately, more systematic studies of the properties of such a system were not possible at the time of Keynes. The system envisioned by Keynes holds a lot of properties...... in common with what we today call complex dynamic systems, and today we may apply the method of agent-based computational economics to the ideas of Keynes. The presented agent-based Keynesian model demonstrates, as argued by Keynes, that the economy can self-organize without relying on price movement...

  20. Method and apparatus for managing transactions with connected computers

    Science.gov (United States)

    Goldsmith, Steven Y.; Phillips, Laurence R.; Spires, Shannon V.

    2003-01-01

    The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.

  1. Description of a method for computing fluid-structure interaction

    International Nuclear Information System (INIS)

    Gantenbein, F.

    1982-02-01

    A general formulation allowing computation of structure vibrations in a dense fluid is described. It is based on modelling the fluid with fluid finite elements. Two variables are associated with each fluid node: the pressure p and a variable π defined by p = d²π/dt². Coupling between structure and fluid is introduced by surface elements. This method is easy to introduce in a general finite element code. Validation was obtained by analytical calculations and tests. It is widely used for vibrational and seismic studies of pipes and internals of nuclear reactors; some applications are presented. [fr]

  2. Poster — Thur Eve — 69: Computational Study of DVH-guided Cancer Treatment Planning Optimization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Ghomi, Pooyan Shirvani; Zinchenko, Yuriy [University of Calgary, Department of Mathematics and Statistics (Canada)

    2014-08-15

    Purpose: To compare methods to incorporate the Dose Volume Histogram (DVH) curves into the treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one Tumor were involved in the treatment planning. The OARs and Tumor were discretized into a total of 50,221 voxels. The number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches to replicate the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds a promise of substantial computational speed up.
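
    To make the unconstrained moment-based penalty idea concrete, a toy sketch of penalizing deviations of low-order dose moments from target values is given below (the dose-influence matrix, moment orders and solver choice are illustrative assumptions, not the authors' formulation):

        import numpy as np
        from scipy.optimize import minimize

        def moment_penalty(weights, dose_matrix, target_moments, orders=(1, 2, 3)):
            # Penalize deviations of the low-order moments of the achieved dose in a
            # structure from target moments derived from the prescribed DVH.
            dose = dose_matrix @ weights
            return sum((np.mean(dose ** k) - target_moments[k]) ** 2 for k in orders)

        # Hypothetical toy problem: 200 voxels, 30 beamlets, assumed target moments.
        rng = np.random.default_rng(0)
        D = rng.random((200, 30))
        targets = {1: 2.0, 2: 4.5, 3: 11.0}
        result = minimize(moment_penalty, x0=np.ones(30), args=(D, targets),
                          bounds=[(0.0, None)] * 30, method="L-BFGS-B")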

  3. Computer-based feedback in formative assessment

    NARCIS (Netherlands)

    van der Kleij, Fabienne

    2013-01-01

    Formative assessment concerns any assessment that provides feedback that is intended to support learning and can be used by teachers and/or students. Computers could offer a solution to overcoming obstacles encountered in implementing formative assessment. For example, computer-based assessments

  4. Research on Linux Trusted Boot Method Based on Reverse Integrity Verification

    Directory of Open Access Journals (Sweden)

    Chenlin Huang

    2016-01-01

    Full Text Available Trusted computing aims to build a trusted computing environment for information systems with the help of the secure hardware TPM, which has been proved to be an effective way against network security threats. However, TPM chips are not yet widely deployed in most computing devices, thus limiting the applied scope of trusted computing technology. To solve the problem of lacking trusted hardware in existing computing platforms, an alternative security hardware, USBKey, is introduced in this paper to simulate the basic functions of TPM, and a new reverse USBKey-based integrity verification model is proposed to implement reverse integrity verification of the operating system boot process, which can achieve the effect of a trusted boot of the operating system in end systems without TPMs. A Linux operating system booting method based on reverse integrity verification is designed and implemented in this paper, with which the integrity of data and executable files in the operating system is verified and protected phase by phase during the trusted boot process. It implements the trusted boot of the operating system without a TPM and supports remote attestation of the platform. Enhanced by our method, the flexibility of trusted computing technology is greatly improved and it becomes possible to apply trusted computing in large-scale computing environments.
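
    The integrity checks at the heart of such a boot chain amount to hashing each stage's files and comparing them against trusted reference digests; a minimal sketch (with a plain dictionary standing in for the USBKey-backed reference store, and no attempt to reproduce the paper's reverse-verification protocol) is:

        import hashlib
        from pathlib import Path

        def verify_stage(files, reference_digests):
            # Hash every file belonging to one boot stage and compare with the
            # trusted reference values; any mismatch means the boot chain must stop.
            for path in files:
                digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
                if reference_digests.get(path) != digest:
                    return False
            return True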

  5. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes depending on the current load of each machine in the heterogeneous computing system.
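
    The core idea - sizing each machine's subtask according to its current load - can be sketched as follows (the inverse-load weighting is an illustrative heuristic, not necessarily the paper's exact policy):

        def decompose_workload(total_units, machine_loads):
            # Split a homogeneous workload into one chunk per machine, sized in
            # proportion to the machine's free capacity, here taken as 1 / (1 + load).
            weights = [1.0 / (1.0 + load) for load in machine_loads]
            total_weight = sum(weights)
            chunks = [int(round(total_units * w / total_weight)) for w in weights]
            chunks[-1] += total_units - sum(chunks)   # fix rounding so nothing is lost
            return chunks

        # Example: 1000 work units over three machines with current loads 0.1, 0.5, 0.9.
        print(decompose_workload(1000, [0.1, 0.5, 0.9]))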

  6. Computer-Aided Modelling Methods and Tools

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...

  7. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  8. Tomographs based on non-conventional radiation sources and methods

    International Nuclear Information System (INIS)

    Barbuzza, R.; Fresno, M. del; Venere, Marcelo J.; Clausse, Alejandro; Moreno, C.

    2000-01-01

    Computer techniques for tomographic reconstruction of objects X-rayed with a compact plasma focus (PF) are presented. The implemented reconstruction algorithms are based on a stochastic search for solutions of the Radon equation, using Genetic Algorithms and Monte Carlo methods. Numerical experiments using actual projections were performed, confirming the feasibility of both methods for the tomographic reconstruction problem. (author)

  9. Permeability computation on a REV with an immersed finite element method

    International Nuclear Information System (INIS)

    Laure, P.; Puaux, G.; Silva, L.; Vincent, M.

    2011-01-01

    An efficient method to compute the permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale, and the flow motion is computed with a stabilized mixed finite element method. The Stokes equations are thus solved on the whole domain (including the solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations found in the literature.
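
    Once the microscopic flow through the REV has been computed, the homogenised permeability follows from Darcy's law, <u> = (K/μ)(ΔP/L); a minimal sketch with assumed SI units and illustrative numbers:

        def rev_permeability(mean_velocity, viscosity, pressure_drop, length):
            # Darcy's law:  <u> = (K / mu) * (dP / L)  =>  K = mu * <u> * L / dP,
            # with <u> the volume-averaged velocity over the REV (SI units assumed).
            return viscosity * mean_velocity * length / pressure_drop

        # Hypothetical numbers: water-like viscosity, 1 mm REV, 10 Pa pressure drop.
        K = rev_permeability(mean_velocity=1.0e-4, viscosity=1.0e-3,
                             pressure_drop=10.0, length=1.0e-3)
        print(f"K = {K:.3e} m^2")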

  10. Computational Intelligence based techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Karimi, M.; Bakar, A.H.A.; Mohamad, Hasmaini

    2014-01-01

    Highlights: • Unintentional and intentional islanding, their causes, and solutions are presented. • Remote, passive, active and hybrid islanding detection techniques are discussed. • The limitations of these techniques in accurately detecting islanding are discussed. • The ability of computational intelligence techniques to detect islanding is discussed. • A review of ANN, fuzzy logic control, ANFIS and decision tree techniques is provided. - Abstract: Accurate and fast islanding detection of distributed generation is highly important for its successful operation in distribution networks. Up to now, various islanding detection techniques based on communication, passive, active and hybrid methods have been proposed. However, each technique suffers from certain demerits that cause inaccuracies in islanding detection. Computational intelligence based techniques, due to their robustness and flexibility in dealing with complex nonlinear systems, are an option that might solve this problem. This paper aims to provide a comprehensive review of computational intelligence based techniques applied for islanding detection of distributed generation. Moreover, the paper compares the accuracy of computational intelligence based techniques with that of existing techniques to provide useful information for industry and utility researchers to determine the best method for their respective systems.
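
    As a point of reference for the passive techniques the review covers, the simplest such detector just thresholds the rate of change of frequency (ROCOF); a minimal sketch with an illustrative threshold is shown below (the computational intelligence methods reviewed in the paper replace this kind of fixed rule with a learned classifier):

        import numpy as np

        def rocof_islanding_flag(freq_samples, dt, threshold_hz_per_s=1.0):
            # Passive detection baseline: flag islanding when the rate of change of
            # frequency between consecutive samples exceeds a fixed threshold.
            rocof = np.abs(np.diff(freq_samples)) / dt
            return bool(np.any(rocof > threshold_hz_per_s))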

  11. Rudder Based Roll Control via Host-Computer of a Robotic Boat

    Directory of Open Access Journals (Sweden)

    Xinping Bao

    2009-03-01

    Full Text Available Rudder based roll control of a small-sized robotic boat is a key technique for the devices on board to achieve good performance. This paper introduces a host-based robotic boat capable of performing basic movement operations. The course keeping and roll reduction are studied via a rudder-based method in simulations and sea trials. The boat dynamic model is built with the combination of mathematical analysis and system identification technique. A mixed sensitivity H∞ control method design is selected since yaw and roll motion are posed in different frequency domains. Computer simulations and experiments carried out show that successful results are achieved.

  13. The soft computing-based approach to investigate allergic diseases: a systematic review.

    Science.gov (United States)

    Tartarisco, Gennaro; Tonacci, Alessandro; Minciullo, Paola Lucia; Billeci, Lucia; Pioggia, Giovanni; Incorvaia, Cristoforo; Gangemi, Sebastiano

    2017-01-01

    Early recognition of inflammatory markers and their relation to asthma, adverse drug reactions, allergic rhinitis, atopic dermatitis and other allergic diseases is an important goal in allergy. The vast majority of studies in the literature are based on classic statistical methods; however, developments in computational techniques such as soft computing-based approaches hold new promise in this field. The aim of this manuscript is to systematically review the main soft computing-based techniques such as artificial neural networks, support vector machines, bayesian networks and fuzzy logic to investigate their performances in the field of allergic diseases. The review was conducted following PRISMA guidelines and the protocol was registered within PROSPERO database (CRD42016038894). The research was performed on PubMed and ScienceDirect, covering the period starting from September 1, 1990 through April 19, 2016. The review included 27 studies related to allergic diseases and soft computing performances. We observed promising results with an overall accuracy of 86.5%, mainly focused on asthmatic disease. The review reveals that soft computing-based approaches are suitable for big data analysis and can be very powerful, especially when dealing with uncertainty and poorly characterized parameters. Furthermore, they can provide valuable support in case of lack of data and entangled cause-effect relationships, which make it difficult to assess the evolution of disease. Although most works deal with asthma, we believe the soft computing approach could be a real breakthrough and foster new insights into other allergic diseases as well.

  14. A computer aided measurement method for unstable pelvic fractures based on standardized radiographs

    International Nuclear Information System (INIS)

    Zhao, Jing-xin; Zhao, Zhe; Zhang, Li-cheng; Su, Xiu-yun; Du, Hai-long; Zhang, Li-ning; Zhang, Li-hai; Tang, Pei-fu

    2015-01-01

    To set up a method for measuring radiographic displacement of unstable pelvic ring fractures based on standardized X-ray images and then test its reliability and validity using a software-based measurement technique. Twenty-five patients that were diagnosed as AO/OTA type B or C pelvic fractures with unilateral pelvis fractured and dislocated were eligible for inclusion by a review of medical records in our clinical centre. Based on the input pelvic preoperative CT data, the standardized X-ray images, including inlet, outlet, and anterior-posterior (AP) radiographs, were simulated using Armira software (Visage Imaging GmbH, Berlin, Germany). After representative anatomic landmarks were marked on the standardized X-ray images, the 2-dimensional (2D) coordinates of these points could be revealed in Digimizer software (Model: Mitutoyo Corp., Tokyo, Japan). Subsequently, we developed a formula that indicated the translational and rotational displacement patterns of the injured hemipelvis. Five separate observers calculated the displacement outcomes using the established formula and determined the rotational patterns using a 3D-CT model based on their overall impression. We performed 3D reconstruction of all the fractured pelvises using Mimics (Materialise, Haasrode, Belgium) and determined the translational and rotational displacement using 3-matic suite. The interobserver reliability of the new method was assessed by comparing the continuous measure and categorical outcomes using intraclass correlation coefficient (ICC) and kappa statistic, respectively. The interobserver reliability of the new method for translational and rotational measurement was high, with both ICCs above 0.9. Rotational outcome assessed by the new method was the same as that concluded by 3-matic software. The agreement for rotational outcome among orthopaedic surgeons based on overall impression was poor (kappa statistic, 0.250 to 0.426). Compared with the 3D reconstruction outcome, the

  15. A fast iterative method for computing particle beams penetrating matter

    International Nuclear Information System (INIS)

    Boergers, C.

    1997-01-01

    Beams of microscopic particles penetrating matter are important in several fields. The application motivating our parameter choices in this paper is electron beam cancer therapy. Mathematically, a steady particle beam penetrating matter, or a configuration of several such beams, is modeled by a boundary value problem for a Boltzmann equation. Grid-based discretization of this problem leads to a system of algebraic equations. This system is typically very large because of the large number of independent variables in the Boltzmann equation (six if time independence is the only dimension-reducing assumption). If grid-based methods are to be practical at all, it is therefore necessary to develop fast solvers for the discretized problems. This is the subject of the present paper. For two-dimensional, mono-energetic, linear particle beam problems, we describe an iterative domain decomposition algorithm based on overlapping decompositions of the set of particle directions and computationally demonstrate its rapid, grid independent convergence. There appears to be no fundamental obstacle to generalizing the method to three-dimensional, energy dependent problems. 34 refs., 15 figs., 6 tabs

  16. Quartet-net: a quartet-based method to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Wan, Xiu-Feng

    2013-05-01

    Phylogenetic networks can model reticulate evolutionary events such as hybridization, recombination, and horizontal gene transfer. However, reconstructing such networks is not trivial. Popular character-based methods are computationally inefficient, whereas distance-based methods cannot guarantee reconstruction accuracy because pairwise genetic distances only reflect partial information about a reticulate phylogeny. To balance accuracy and computational efficiency, here we introduce a quartet-based method to construct a phylogenetic network from a multiple sequence alignment. Unlike distances that only reflect the relationship between a pair of taxa, quartets contain information on the relationships among four taxa; these quartets provide adequate capacity to infer a more accurate phylogenetic network. In applications to simulated and biological data sets, we demonstrate that this novel method is robust and effective in reconstructing reticulate evolutionary events and it has the potential to infer more accurate phylogenetic distances than other conventional phylogenetic network construction methods such as Neighbor-Joining, Neighbor-Net, and Split Decomposition. This method can be used in constructing phylogenetic networks from simple evolutionary events involving a few reticulate events to complex evolutionary histories involving a large number of reticulate events. A software called "Quartet-Net" is implemented and available at http://sysbio.cvm.msstate.edu/QuartetNet/.
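
    A quartet encodes which of the three possible pairings of four taxa forms the split; one common, distance-based way to decide this (the four-point condition, shown only as an illustration - Quartet-Net itself derives its quartets from the multiple sequence alignment) is:

        def quartet_topology(d, a, b, c, e):
            # Four-point condition: for a tree-like metric, the pairing with the
            # smallest summed distance is taken as the split of the quartet {a,b,c,e}.
            sums = {"ab|ce": d[a][b] + d[c][e],
                    "ac|be": d[a][c] + d[b][e],
                    "ae|bc": d[a][e] + d[b][c]}
            return min(sums, key=sums.get)

        # Hypothetical symmetric distance table for four taxa.
        taxa = ["A", "B", "C", "D"]
        matrix = [[0, 2, 6, 6],
                  [2, 0, 6, 6],
                  [6, 6, 0, 2],
                  [6, 6, 2, 0]]
        d = {t: dict(zip(taxa, row)) for t, row in zip(taxa, matrix)}
        print(quartet_topology(d, "A", "B", "C", "D"))   # -> "ab|ce", i.e. split AB | CD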

  17. A Cloud Theory-Based Trust Computing Model in Social Networks

    Directory of Open Access Journals (Sweden)

    Fengming Liu

    2016-12-01

    Full Text Available How to develop a trust management model and then efficiently control and manage nodes is an important issue for social network security. In this paper, a trust management model based on a cloud model is proposed. The cloud model uses a specific computation operator to achieve the transformation from qualitative concepts to quantitative computation; it can also effectively express the fuzziness and randomness of subjective trust, as well as the relationship between them. Node trust is divided into reputation trust and transaction trust, and evaluation methods are designed for each. First, a two-dimensional trust cloud evaluation model is designed, based on a node's overall and trading experience, to determine the reputation trust. The expected value reflects the average trust status of nodes, while entropy and hyper-entropy are used to describe the uncertainty of trust. Second, the proposed direct transaction trust and recommendation transaction trust are combined to comprehensively compute the transaction trust of each node. Strategies for choosing trading partners based on the trust cloud are then designed. Finally, the results of a simulation experiment on P2P network file sharing on an experimental platform reflect the objectivity, accuracy and robustness of the proposed model, which can effectively identify malicious or unreliable service nodes in the system and promote the service reliability of nodes with high credibility, thereby improving the stability of the whole network.
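
    The cloud model referred to above is usually realized with the one-dimensional forward normal cloud generator, characterized by the expected value Ex, entropy En and hyper-entropy He; the following sketch shows that standard generator (the trust-specific parameters below are illustrative, not taken from the paper):

        import numpy as np

        def normal_cloud_drops(Ex, En, He, n_drops=1000, seed=0):
            # Forward normal cloud generator: each drop is a pair (x, membership).
            # En' ~ N(En, He^2) models the uncertainty of the entropy itself.
            rng = np.random.default_rng(seed)
            En_prime = rng.normal(En, He, n_drops)
            x = rng.normal(Ex, np.abs(En_prime), n_drops)
            membership = np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2))
            return x, membership

        # Example: a trust cloud centred at 0.8 with entropy 0.05 and hyper-entropy 0.01.
        drops, memberships = normal_cloud_drops(0.8, 0.05, 0.01)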

  18. An adaptive angle-doppler compensation method for airborne bistatic radar based on PAST

    Science.gov (United States)

    Hang, Xu; Jun, Zhao

    2018-05-01

    Adaptive angle-Doppler compensation methods extract the requisite information adaptively from the data itself, thus avoiding the performance degradation caused by inertial system errors. However, such methods require estimation and eigendecomposition of the sample covariance matrix, which has a high computational complexity and limits real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the spectral centre of the dominant eigenvector of each range cell, avoiding the computational burden of matrix estimation and eigendecomposition; the spectral centres of all range cells are then aligned by two-dimensional compensation. Simulation results show that the proposed method can effectively reduce the non-homogeneity of airborne bistatic radar; its performance is similar to that of eigendecomposition algorithms, while the computational load is clearly reduced and the method is easy to implement.
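
    PAST itself is a standard recursive subspace tracker; a minimal sketch of the classic RLS-style PAST update (with the radar-specific compensation steps omitted and array shapes assumed) looks like this:

        import numpy as np

        def past_update(W, P, x, beta=0.97):
            # One PAST iteration: track the r-dimensional dominant subspace of a data
            # stream without forming or eigendecomposing the covariance matrix.
            #   W : (n, r) current subspace estimate
            #   P : (r, r) inverse-correlation-like matrix, beta : forgetting factor
            y = W.conj().T @ x                        # project the snapshot onto the subspace
            h = P @ y
            g = h / (beta + y.conj() @ h)             # gain vector
            P = (P - np.outer(g, h.conj())) / beta    # update the inverse correlation
            e = x - W @ y                             # projection residual
            W = W + np.outer(e, g.conj())             # update the subspace basis
            return W, P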

  19. Reconceptualizing Pedagogical Usability of and Teachers' Roles in Computer Game-Based Learning in School

    Science.gov (United States)

    Tzuo, Pei-Wen; Ling, Jennifer Isabelle Ong Pei; Yang, Chien-Hui; Chen, Vivian Hsueh-Hua

    2012-01-01

    At present, methods for the optimal use of two approaches to computer game-based learning in school to enhance students' learning, namely, computer game play and game design, are obscure because past research has been devoted more to designing rather than evaluating the implementation of these approaches in school. In addition, most studies…

  20. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

    Full Text Available The article is devoted to the issues of statistical analysis of the results of computer-based testing for evaluation of educational achievements of students. The issues are relevant due to the fact that computer-based testing in Russian universities has become an important method for evaluation of educational achievements of students and of the quality of the modern educational process. Usage of modern methods and programs for statistical analysis of the results of computer-based testing and assessment of the quality of developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program "StatInfo". For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, formation of queries, and generation of reports, lists, and matrices of answers for statistical analysis of the quality of test items. The methodology, experience and some results of its usage by university teachers are described in the article. Related topics, such as test development, models, algorithms, technologies, and software for large-scale computer-based testing, have been discussed by the authors in their previous publications, which are presented in the reference list.

  1. Computer prediction of subsurface radionuclide transport: an adaptive numerical method

    International Nuclear Information System (INIS)

    Neuman, S.P.

    1983-01-01

    Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is a need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1.
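
    The "single-step reverse particle tracking" used for the advective part is, in modern terms, a semi-Lagrangian update: each grid node samples the old concentration field at the foot of its characteristic. A one-dimensional sketch under a uniform velocity and linear interpolation (grid, interpolation order and boundary handling are illustrative assumptions) is:

        import numpy as np

        def reverse_track_advection(c, x, v, dt):
            # New concentration at each node = old concentration at the foot of the
            # characteristic x - v*dt, obtained here by linear interpolation.
            feet = x - v * dt
            return np.interp(feet, x, c)

        # Example: advect a Gaussian pulse with a Courant number of 2 (v*dt/dx = 2).
        x = np.linspace(0.0, 1.0, 201)
        c = np.exp(-((x - 0.3) / 0.05) ** 2)
        c_new = reverse_track_advection(c, x, v=0.5, dt=0.02)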

  2. Computer Based Expert Systems.

    Science.gov (United States)

    Parry, James D.; Ferrara, Joseph M.

    1985-01-01

    Claims knowledge-based expert computer systems can meet needs of rural schools for affordable expert advice and support and will play an important role in the future of rural education. Describes potential applications in prediction, interpretation, diagnosis, remediation, planning, monitoring, and instruction. (NEC)

  3. Budget-based power consumption for application execution on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J; Inglett, Todd A; Ratterman, Joseph D

    2012-10-23

    Methods, apparatus, and products are disclosed for budget-based power consumption for application execution on a plurality of compute nodes that include: assigning an execution priority to each of one or more applications; executing, on the plurality of compute nodes, the applications according to the execution priorities assigned to the applications at an initial power level provided to the compute nodes until a predetermined power consumption threshold is reached; and applying, upon reaching the predetermined power consumption threshold, one or more power conservation actions to reduce power consumption of the plurality of compute nodes during execution of the applications.

  4. Development of a Computer-Based Visualised Quantitative Learning System for Playing Violin Vibrato

    Science.gov (United States)

    Ho, Tracy Kwei-Liang; Lin, Huann-shyang; Chen, Ching-Kong; Tsai, Jih-Long

    2015-01-01

    Traditional methods of teaching music are largely subjective, with the lack of objectivity being particularly challenging for violin students learning vibrato because of the existence of conflicting theories. By using a computer-based analysis method, this study found that maintaining temporal coincidence between the intensity peak and the target…

  5. A guidance law for UAV autonomous aerial refueling based on the iterative computation method

    Directory of Open Access Journals (Sweden)

    Luo Delin

    2014-08-01

    Full Text Available The rendezvous and formation problem is a significant part of the unmanned aerial vehicle (UAV) autonomous aerial refueling (AAR) technique. It can be divided into two major phases: the long-range guidance phase and the formation phase. In this paper, an iterative computation guidance law (ICGL) is proposed to compute a series of state variables to obtain the solution of a control variable for a UAV conducting rendezvous with a tanker in AAR. The proposed method makes the control variable converge to zero as the tanker and the receiver UAV eventually reach formation flight. The ICGL divides the long-range guidance phase into two sub-phases: the correction sub-phase and the guidance sub-phase, which share the same iterative process. For the formation phase, a velocity coordinate system is created in which control accelerations are designed to make the speed of the UAV consistent with that of the tanker. The simulation results demonstrate that the proposed ICGL is effective and robust against wind disturbance.

  6. Spiral and Project-Based Learning with Peer Assessment in a Computer Science Project Management Course

    Science.gov (United States)

    Jaime, Arturo; Blanco, José Miguel; Domínguez, César; Sánchez, Ana; Heras, Jónathan; Usandizaga, Imanol

    2016-01-01

    Different learning methods such as project-based learning, spiral learning and peer assessment have been implemented in science disciplines with different outcomes. This paper presents a proposal for a project management course in the context of a computer science degree. Our proposal combines three well-known methods: project-based learning,…

  7. New evaluation methods for conceptual design selection using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai [University of Electronic Science and Technology of China, Chengdu (China); Xue, Lihua [Higher Education Press, Beijing (China)

    2013-03-15

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  8. New evaluation methods for conceptual design selection using computational intelligence techniques

    International Nuclear Information System (INIS)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai; Xue, Lihua

    2013-01-01

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  9. Secure information transfer based on computing reservoir

    Energy Technology Data Exchange (ETDEWEB)

    Szmoski, R.M.; Ferrari, F.A.S. [Department of Physics, Universidade Estadual de Ponta Grossa, 84030-900, Ponta Grossa (Brazil); Pinto, S.E. de S, E-mail: desouzapinto@pq.cnpq.br [Department of Physics, Universidade Estadual de Ponta Grossa, 84030-900, Ponta Grossa (Brazil); Baptista, M.S. [Institute for Complex Systems and Mathematical Biology, SUPA, University of Aberdeen, Aberdeen (United Kingdom); Viana, R.L. [Department of Physics, Universidade Federal do Parana, 81531-990, Curitiba, Parana (Brazil)

    2013-04-01

    There is a broad area of research to ensure that information is transmitted securely. Within this scope, chaos-based cryptography takes a prominent role due to its nonlinear properties. Using these properties, we propose a secure mechanism for transmitting data that relies on chaotic networks. We use a nonlinear on–off device to cipher the message, and the transfer entropy to retrieve it. We analyze the system capability for sending messages, and we obtain expressions for the operating time. We demonstrate the system efficiency for a wide range of parameters. We find similarities between our method and the reservoir computing.

  10. CSNS computing environment Based on OpenStack

    Science.gov (United States)

    Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu

    2017-10-01

    Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization; it can also provide computing services according to actual needs. We are applying this computing mode to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and practice of a cloud computing platform based on OpenStack are demonstrated from the aspects of the cloud computing system framework, network, storage and so on. Thirdly, some improvements we made to OpenStack are discussed further. Finally, the current status of the CSNS cloud computing environment is summarized at the end of this paper.

  11. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  12. Genomic signal processing methods for computation of alignment-free distances from DNA sequences.

    Science.gov (United States)

    Borrayo, Ernesto; Mendizabal-Ruiz, E Gerardo; Vélez-Pérez, Hugo; Romo-Vázquez, Rebeca; Mendizabal, Adriana P; Morales, J Alejandro

    2014-01-01

    Genomic signal processing (GSP) refers to the use of digital signal processing (DSP) tools for analyzing genomic data such as DNA sequences. A possible application of GSP that has not been fully explored is the computation of the distance between a pair of sequences. In this work we present GAFD, a novel GSP alignment-free distance computation method. We introduce a DNA sequence-to-signal mapping function based on the employment of doublet values, which increases the number of possible amplitude values for the generated signal. Additionally, we explore the use of three DSP distance metrics as descriptors for categorizing DNA signal fragments. Our results indicate the feasibility of employing GAFD for computing sequence distances and the use of descriptors for characterizing DNA fragments.
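
    The pipeline described above - map the sequence to a numeric signal via doublet values, then compare signals with a DSP metric - can be sketched as follows (the doublet mapping and the spectral metric here are illustrative stand-ins; the actual doublet values and the three metrics evaluated are defined in the paper):

        import numpy as np

        BASES = "ACGT"
        # Hypothetical doublet mapping: each dinucleotide -> an integer amplitude 0..15.
        DOUBLET = {a + b: 4 * i + j for i, a in enumerate(BASES) for j, b in enumerate(BASES)}

        def dna_to_signal(seq):
            # Map a DNA string to a numeric signal using overlapping doublets.
            return np.array([DOUBLET[seq[k:k + 2]] for k in range(len(seq) - 1)], float)

        def spectral_distance(seq_a, seq_b, n_bins=64):
            # One possible DSP-style, alignment-free distance: compare the normalized
            # power spectra of the two signals (zero-padded or truncated to n_bins).
            def spectrum(seq):
                power = np.abs(np.fft.rfft(dna_to_signal(seq), n=n_bins)) ** 2
                return power / power.sum()
            return float(np.linalg.norm(spectrum(seq_a) - spectrum(seq_b)))

        print(spectral_distance("ACGTACGTACGA", "ACGTTCGTACGA"))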

  13. Methods for simulation-based analysis of fluid-structure interaction.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Payne, Jeffrey L.

    2005-10-01

    Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations points to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
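
    The POD step of the POD/Galerkin approach amounts to an SVD of a snapshot matrix; a minimal sketch (the snapshot layout and mean subtraction are the usual conventions, assumed here rather than taken from the report):

        import numpy as np

        def pod_modes(snapshots, r):
            # snapshots: (n_dof, n_snapshots) matrix whose columns are flow-field
            # snapshots. Returns the first r POD modes and all singular values.
            mean = snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
            return U[:, :r], s

        # A reduced order model is then obtained by Galerkin projection of the
        # governing equations onto the span of these modes.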

  14. A Computer-Based Program to Teach Braille Reading to Sighted Individuals

    Science.gov (United States)

    Scheithauer, Mindy C.; Tiger, Jeffrey H.

    2012-01-01

    Instructors of the visually impaired need efficient braille-training methods. This study conducted a preliminary evaluation of a computer-based program intended to teach the relation between braille characters and English letters using a matching-to-sample format with 4 sighted college students. Each participant mastered matching visual depictions…

  15. Geometric optical transfer function and its computation method

    International Nuclear Information System (INIS)

    Wang Qi

    1992-01-01

    The geometric optical transfer function formula is derived after clarifying some points that are easily overlooked, and the computation method is given using the Bessel function of order zero, numerical integration and spline interpolation. The method has the advantage of ensuring accuracy while saving computation.
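
    For a rotationally symmetric geometric point-spread function, the OTF reduces to a zero-order Hankel transform, which is where the Bessel function of order zero and the numerical integration enter; a sketch under those assumptions (the Gaussian spot and the units are illustrative):

        import numpy as np
        from scipy.special import j0

        def geometric_otf(psf_r, r, frequencies):
            # Zero-order Hankel transform of a rotationally symmetric PSF, evaluated
            # by simple trapezoidal integration and normalized so that OTF(0) = 1.
            norm = np.trapz(psf_r * 2.0 * np.pi * r, r)
            return np.array([np.trapz(psf_r * j0(2.0 * np.pi * f * r) * 2.0 * np.pi * r, r)
                             for f in frequencies]) / norm

        # Example: a hypothetical Gaussian blur spot (r in mm, frequency in cycles/mm).
        r = np.linspace(0.0, 0.1, 500)
        psf = np.exp(-(r / 0.02) ** 2)
        otf = geometric_otf(psf, r, frequencies=np.linspace(0.0, 50.0, 6))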

  16. Industrial applications of formal methods to model, design and analyze computer systems

    CERN Document Server

    Craigen, Dan

    1995-01-01

    Formal methods are mathematically-based techniques, often supported by reasoning tools, that can offer a rigorous and effective way to model, design and analyze computer systems. The purpose of this study is to evaluate international industrial experience in using formal methods. The cases selected are representative of industrial-grade projects and span a variety of application domains. The study had three main objectives: · To better inform deliberations within industry and government on standards and regulations; · To provide an authoritative record on the practical experience of formal m

  17. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n²). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
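
    One simple way to see how the quadratic memory cost can be avoided is to bin pairwise distances on the fly instead of materializing the full n x n distance matrix; the sketch below illustrates that idea only and is not the authors' algorithm (which also shortens the running time):

        import numpy as np

        def pairwise_distance_histogram(points, bin_edges):
            # Histogram of all pairwise distances with a fixed-size accumulator:
            # distances are binned row by row, so the n x n matrix is never stored.
            counts = np.zeros(len(bin_edges) - 1, dtype=np.int64)
            for i in range(len(points) - 1):
                d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
                counts += np.histogram(d, bins=bin_edges)[0]
            return counts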

  18. Computational intelligence-based polymerase chain reaction primer selection based on a novel teaching-learning-based optimisation.

    Science.gov (United States)

    Cheng, Yu-Huei

    2014-12-01

    Specific primers play an important role in polymerase chain reaction (PCR) experiments, and therefore it is essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, Teaching-Learning-Based Optimisation, to select specific and feasible primers. Experiments were performed for specified PCR product lengths of 150-300 bp and 500-800 bp with three melting temperature formulae: Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula. The authors calculate the optimal frequency to estimate the quality of primer selection based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then fairly compared with the genetic algorithm (GA) and memetic algorithm (MA) for primer selection in the literature. The results show that the method easily found suitable primers satisfying the specified primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the commonly used tool Primer3 in terms of method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In the future, the use of the primers in the wet lab needs to be validated carefully to increase the reliability of the method.
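
    Teaching-Learning-Based Optimisation itself is a population method with a teacher phase and a learner phase; a generic continuous-variable sketch of the standard TLBO loop (not the paper's primer-specific encoding, constraints or fitness function) is given below:

        import numpy as np

        def tlbo(objective, bounds, pop_size=20, iters=100, seed=0):
            # Generic Teaching-Learning-Based Optimisation (minimization) over a box.
            rng = np.random.default_rng(seed)
            low, high = map(np.asarray, bounds)
            dim = low.size
            pop = rng.uniform(low, high, (pop_size, dim))
            fit = np.array([objective(x) for x in pop])
            for _ in range(iters):
                teacher = pop[fit.argmin()]
                mean = pop.mean(axis=0)
                for i in range(pop_size):
                    # Teacher phase: move towards the teacher, away from the class mean.
                    TF = rng.integers(1, 3)                      # teaching factor, 1 or 2
                    new = np.clip(pop[i] + rng.random(dim) * (teacher - TF * mean), low, high)
                    f_new = objective(new)
                    if f_new < fit[i]:
                        pop[i], fit[i] = new, f_new
                    # Learner phase: learn from a randomly chosen classmate.
                    j = rng.integers(pop_size)
                    if j != i:
                        step = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
                        new = np.clip(pop[i] + rng.random(dim) * step, low, high)
                        f_new = objective(new)
                        if f_new < fit[i]:
                            pop[i], fit[i] = new, f_new
            return pop[fit.argmin()], fit.min()

        # Example: minimize the sphere function in 5 dimensions.
        best_x, best_f = tlbo(lambda x: float(np.sum(x ** 2)),
                              (np.full(5, -5.0), np.full(5, 5.0)))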

  19. Overdetermined shooting methods for computing standing water waves with spectral accuracy

    International Nuclear Information System (INIS)

    Wilkening, Jon; Yu Jia

    2012-01-01

    A high-performance shooting algorithm is developed to compute time-periodic solutions of the free-surface Euler equations with spectral accuracy in double and quadruple precision. The method is used to study resonance and its effect on standing water waves. We identify new nucleation mechanisms in which isolated large-amplitude solutions, and closed loops of such solutions, suddenly exist for depths below a critical threshold. We also study degenerate and secondary bifurcations related to Wilton's ripples in the traveling case, and explore the breakdown of self-similarity at the crests of extreme standing waves. In shallow water, we find that standing waves take the form of counter-propagating solitary waves that repeatedly collide quasi-elastically. In deep water with surface tension, we find that standing waves resemble counter-propagating depression waves. We also discuss the existence and non-uniqueness of solutions, and smooth versus erratic dependence of Fourier modes on wave amplitude and fluid depth. In the numerical method, robustness is achieved by posing the problem as an overdetermined nonlinear system and using either adjoint-based minimization techniques or a quadratically convergent trust-region method to minimize the objective function. Efficiency is achieved in the trust-region approach by parallelizing the Jacobian computation, so the setup cost of computing the Dirichlet-to-Neumann operator in the variational equation is not repeated for each column. Updates of the Jacobian are also delayed until the previous Jacobian ceases to be useful. Accuracy is maintained using spectral collocation with optional mesh refinement in space, a high-order Runge–Kutta or spectral deferred correction method in time and quadruple precision for improved navigation of delicate regions of parameter space as well as validation of double-precision results. Implementation issues for transferring much of the computation to graphics processing units are briefly

  20. Information Security Scheme Based on Computational Temporal Ghost Imaging.

    Science.gov (United States)

    Jiang, Shan; Wang, Yurong; Long, Tao; Meng, Xiangfeng; Yang, Xiulun; Shu, Rong; Sun, Baoqing

    2017-08-09

    An information security scheme based on computational temporal ghost imaging is proposed. A sequence of independent 2D random binary patterns is used as the encryption key and multiplied with the 1D data stream. The cipher text is obtained by summing the weighted key patterns. The decryption process can be realized by a correlation measurement between the encrypted information and the encryption key. Due to the intrinsic high-level randomness of the key, the security of this method is strongly guaranteed. The feasibility of this method and its robustness against both occlusion and additional noise attacks are discussed with simulations.
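
    The encrypt/decrypt cycle described above can be sketched directly: each 2D key pattern is weighted by one data sample and the patterns are summed into a single ciphertext image, and each sample is then recovered by correlating the ciphertext with the corresponding pattern (the sizes and random data below are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.random(64)                        # 1-D data stream to protect
        keys = rng.integers(0, 2, (64, 32, 32))      # independent 2-D random binary patterns

        # Encryption: ciphertext = sum of key patterns weighted by the data samples.
        cipher = np.tensordot(data, keys, axes=1)    # a single (32, 32) image

        # Decryption: ghost-imaging style correlation of the ciphertext with each key.
        recovered = np.array([(cipher * k).mean() - cipher.mean() * k.mean() for k in keys])

        # 'recovered' reproduces 'data' up to a scale factor and residual cross-talk
        # noise; larger key patterns reduce the cross-talk between samples.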

  1. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sung Won; Lee, Woo Jin; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il; Yi, Won Ji [Dental Research Institute, School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2015-03-15

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method.

  2. Volumetric quantification of bone-implant contact using micro-computed tomography analysis based on region-based segmentation

    International Nuclear Information System (INIS)

    Kang, Sung Won; Lee, Woo Jin; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il; Yi, Won Ji

    2015-01-01

    We have developed a new method of segmenting the areas of absorbable implants and bone using region-based segmentation of micro-computed tomography (micro-CT) images, which allowed us to quantify volumetric bone-implant contact (VBIC) and volumetric absorption (VA). The simple threshold technique generally used in micro-CT analysis cannot be used to segment the areas of absorbable implants and bone. Instead, a region-based segmentation method, a region-labeling method, and subsequent morphological operations were successively applied to micro-CT images. The three-dimensional VBIC and VA of the absorbable implant were then calculated over the entire volume of the implant. Two-dimensional (2D) bone-implant contact (BIC) and bone area (BA) were also measured based on the conventional histomorphometric method. VA and VBIC increased significantly as the healing period increased (p<0.05). VBIC values were significantly correlated with VA values (p<0.05) and with 2D BIC values (p<0.05). It is possible to quantify VBIC and VA for absorbable implants using micro-CT analysis with a region-based segmentation method.
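
    The region-labeling and morphological steps described in these records can be sketched with standard tools (scipy.ndimage here; the intensity window, connectivity and structuring element are illustrative assumptions, not the study's parameters):

        import numpy as np
        from scipy import ndimage as ndi

        def largest_region_mask(volume, lower, upper):
            # Keep voxels inside an intensity window, label the connected regions,
            # retain the largest region and clean it up morphologically.
            seed = (volume >= lower) & (volume <= upper)
            labels, n_regions = ndi.label(seed)
            if n_regions == 0:
                return seed
            sizes = ndi.sum(seed, labels, index=np.arange(1, n_regions + 1))
            largest = labels == (np.argmax(sizes) + 1)
            return ndi.binary_closing(largest, structure=np.ones((3, 3, 3)))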

  3. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    Science.gov (United States)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

    This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method was a mixed method with a sequential explanatory design. The subjects of this research were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities for each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects was also good for each indicator. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with the computer program, was less precise in writing the final conclusion and could not reflect on the problem-solving process using Polya's steps. The W-BPS (Weak Bottom Problem Solving) subject failed to meet almost all of the problem-solving indicators; this subject could not precisely construct the initial solution table, so the completion phase following Polya's steps was constrained.

  4. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs, and CT scans are more detailed than standard X-rays. CT scans may be done with or without contrast; contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology, and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing computed tomography images. Another
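
    The watershed step named in the abstract can be sketched with standard open-source tools (scikit-image and SciPy below, purely as an illustration; the study's own parallel implementation, pre-processing and Hounsfield thresholds are not reproduced):

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def watershed_organ_mask(ct_slice, low_hu, high_hu):
            # Marker-controlled watershed on one CT slice: voxels outside the assumed
            # intensity window seed the background, voxels well inside it seed the organ.
            gradient = sobel(ct_slice.astype(float))
            markers = np.zeros(ct_slice.shape, dtype=int)
            markers[(ct_slice < low_hu) | (ct_slice > high_hu)] = 1            # background seed
            markers[(ct_slice > low_hu + 20) & (ct_slice < high_hu - 20)] = 2  # organ seed
            labels = watershed(gradient, markers)
            return ndi.binary_fill_holes(labels == 2)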

  5. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  6. Computation of integral bases

    NARCIS (Netherlands)

    Bauch, J.H.P.

    2015-01-01

    Let $A$ be a Dedekind domain, $K$ the fraction field of $A$, and $f\in A[x]$ a monic irreducible separable polynomial. For a given non-zero prime ideal $\mathfrak{p}$ of $A$ we present in this paper a new method to compute a $\mathfrak{p}$-integral basis of the extension of $K$ determined by $f$.

  7. A Short Review of FDTD-Based Methods for Uncertainty Quantification in Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    Theodoros T. Zygiridis

    2017-01-01

    Full Text Available We provide a review of selected computational methodologies that are based on the deterministic finite-difference time-domain algorithm and are suitable for the investigation of electromagnetic problems involving uncertainties. As will become apparent, several alternatives capable of performing uncertainty quantification in a variety of cases exist, each one exhibiting different qualities and ranges of applicability, which we intend to point out here. Given the numerous available approaches, the purpose of this paper is to clarify the main strengths and weaknesses of the described methodologies and to help potential readers safely select the most suitable approach for the problem under consideration.
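    For readers unfamiliar with the deterministic kernel that these uncertainty-quantification techniques wrap, the following is a minimal, hedged sketch of a 1D FDTD update loop; the grid size, time step and Gaussian source are illustrative assumptions and no uncertainty propagation is included.

```python
# Minimal 1D FDTD (Yee) sketch: E/H leapfrog updates with a soft source.
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi
c0 = 1.0 / np.sqrt(eps0 * mu0)
nz, nt = 200, 500
dz = 1e-3
dt = 0.5 * dz / c0                              # within the 1D Courant limit

ez = np.zeros(nz)
hy = np.zeros(nz - 1)
src = nz // 4

for n in range(nt):
    hy += dt / (mu0 * dz) * np.diff(ez)         # update H from the curl of E
    ez[1:-1] += dt / (eps0 * dz) * np.diff(hy)  # update E from the curl of H
    ez[src] += np.exp(-((n - 60) / 15.0) ** 2)  # soft Gaussian source (assumed)

print("peak |Ez| at final step:", np.abs(ez).max())
```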

  8. Free surface profiles in river flows: Can standard energy-based gradually-varied flow computations be pursued?

    Science.gov (United States)

    Cantero, Francisco; Castro-Orgaz, Oscar; Garcia-Marín, Amanda; Ayuso, José Luis; Dey, Subhasish

    2015-10-01

    Is the energy equation for gradually-varied flow the best approximation for free surface profile computations in river flows? Determination of flood inundation in rivers and natural waterways is based on the hydraulic computation of flow profiles. This is usually done using energy-based gradually-varied flow models, like HEC-RAS, which adopt a vertical division method for discharge prediction in compound channel sections. However, this discharge prediction method is not particularly accurate in light of the advances made over the last three decades. This paper first presents a study of the impact of discharge prediction on gradually-varied flow computations by comparing thirteen different methods for compound channels, in which both the energy and the momentum equations are applied. The discharge, velocity distribution coefficients, specific energy, momentum and flow profiles are determined. After the study of gradually-varied flow predictions, a new theory is developed to produce higher-order energy and momentum equations for rapidly-varied flow in compound channels. These generalized equations make it possible to describe the flow profiles with more generality than the gradually-varied flow computations. As an outcome, the results for gradually-varied flow provide realistic conclusions for computations of flow in compound channels, showing that momentum-based models are in general more accurate, whereas the new theory developed for rapidly-varied flow opens a new research direction, so far not investigated in flows through compound channels.
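    As a concrete, hedged illustration of what a standard energy-based gradually-varied flow computation involves, the sketch below integrates the classical backwater equation dy/dx = (S0 - Sf)/(1 - Fr^2) for a wide rectangular channel with Manning friction. All channel parameters are illustrative assumptions, and the scheme is a plain Euler march, not the HEC-RAS standard-step procedure or the compound-channel methods compared in the paper.

```python
# Hedged sketch: backwater (M1) profile for a wide rectangular channel.
import numpy as np

g = 9.81
q = 3.0          # unit discharge, m^2/s (assumption)
n = 0.03         # Manning roughness (assumption)
S0 = 5e-4        # bed slope (assumption)

def dydx(y):
    Fr2 = q**2 / (g * y**3)              # Froude number squared (wide channel)
    Sf = (n * q) ** 2 / y ** (10.0 / 3.0)  # Manning friction slope, R ~ y
    return (S0 - Sf) / (1.0 - Fr2)

y, dx = 3.0, -10.0                       # deep downstream control, march upstream
profile = [y]
for _ in range(500):
    y += dydx(y) * dx
    profile.append(y)

print("depth 5 km upstream of the control: %.3f m" % profile[-1])
```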

  9. Computer-Based Career Interventions.

    Science.gov (United States)

    Mau, Wei-Cheng

    The possible utilities and limitations of computer-assisted career guidance systems (CACG) have been widely discussed although the effectiveness of CACG has not been systematically considered. This paper investigates the effectiveness of a theory-based CACG program, integrating Sequential Elimination and Expected Utility strategies. Three types of…

  10. Home-Based Computer Gaming in Vestibular Rehabilitation of Gaze and Balance Impairment.

    Science.gov (United States)

    Szturm, Tony; Reimer, Karen M; Hochman, Jordan

    2015-06-01

    Disease or damage of the vestibular sense organs causes a range of distressing symptoms and functional problems that can include loss of balance, gaze instability, disorientation, and dizziness. A novel computer-based rehabilitation system with a therapeutic gaming application has been developed. This method allows different gaze and head movement exercises to be coupled to a wide range of inexpensive, commercial computer games. It can be used in standing, and thus graded balance demands using a sponge pad can be incorporated into the program. A case series pre- and post-intervention study was conducted of nine adults diagnosed with peripheral vestibular dysfunction who received a 12-week home rehabilitation program. The feasibility and usability of the home computer-based therapeutic program were established. Study findings revealed that using head rotation to interact with computer games, when coupled to demanding balance conditions, resulted in significant improvements in standing balance, dynamic visual acuity, gaze control, and walking performance. Perception of dizziness as measured by the Dizziness Handicap Inventory also decreased significantly. These preliminary findings provide support that a low-cost home game-based exercise program is well suited to training standing balance and gaze control (with active and passive head motion).

  11. A Krylov Subspace Method for Unstructured Mesh SN Transport Computation

    International Nuclear Information System (INIS)

    Yoo, Han Jong; Cho, Nam Zin; Kim, Jong Woon; Hong, Ser Gi; Lee, Young Ouk

    2010-01-01

    Hong et al. have developed a computer code MUST (Multi-group Unstructured geometry SN Transport) for neutral particle transport calculations in three-dimensional unstructured geometry. In this code, the discrete ordinates transport equation is solved by using the discontinuous finite element method (DFEM) or the subcell balance methods with linear discontinuous expansion. In this paper, the conventional source iteration in the MUST code is replaced by a Krylov subspace method to reduce computing time, and the numerical test results are given.
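    The substitution described above, replacing a fixed-point source iteration by a Krylov solve of the equivalent linear system, can be sketched in a hedged, toy form. The operator `S` below is a random contraction standing in for the transport sweep applied to the scattering source; it is not the MUST sweep.

```python
# Hedged sketch: source iteration phi = S*phi + q recast as (I - S) phi = q
# and solved with GMRES via a matrix-free LinearOperator.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 200
S = 1.6 * rng.random((n, n)) / n          # contraction stand-in, spectral radius ~0.8
q = rng.random(n)

A = LinearOperator((n, n), matvec=lambda phi: phi - S @ phi)
phi_krylov, info = gmres(A, q)            # Krylov solve of (I - S) phi = q

phi_si = np.zeros(n)
for _ in range(200):                      # plain source iteration for reference
    phi_si = S @ phi_si + q

print("GMRES flag:", info,
      " max |difference| vs source iteration:", float(np.abs(phi_krylov - phi_si).max()))
```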

  12. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.

  13. An online computer method for the potentiometric titration of mixtures of a strong and a weak acid

    NARCIS (Netherlands)

    Bos, M.

    1977-01-01

    A PDP-11 online computer method for the titration of mixtures of a strong and a weak acid is described. The method is based on multiparametric curve-fitting. One of the parameters found from the calculations is the dissociation constant of the weak acid, hence the method can be applied even when this
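    The multiparametric curve-fitting idea can be illustrated with a hedged sketch: the two acid concentrations and the weak-acid pKa are recovered by least squares from a pH-versus-volume curve computed from the charge balance. All concentrations, volumes and noise levels are illustrative assumptions, and this is not the original PDP-11 algorithm.

```python
# Hedged sketch: fit (Cs, Cw, pKa) of a strong/weak acid mixture to a titration curve.
import numpy as np
from scipy.optimize import brentq, curve_fit

Kw, V0, Cb = 1e-14, 50.0, 0.1          # water ion product, initial mL, titrant mol/L

def ph_curve(v_base, Cs, Cw, pKa):
    Ka = 10.0 ** (-pKa)
    ph = []
    for v in v_base:
        na = Cb * v / (V0 + v)                 # Na+ from added base
        cl = Cs * V0 / (V0 + v)                # Cl- from the strong acid
        ca = Cw * V0 / (V0 + v)                # total weak acid
        # charge balance: [H+] + [Na+] = [OH-] + [Cl-] + [A-]
        f = lambda h: h + na - Kw / h - cl - ca * Ka / (Ka + h)
        ph.append(-np.log10(brentq(f, 1e-14, 1.0)))
    return np.array(ph)

v = np.linspace(0.0, 40.0, 60)
true_params = (0.02, 0.03, 4.76)               # assumed HCl, acetic acid, pKa
data = ph_curve(v, *true_params) + np.random.default_rng(1).normal(0.0, 0.02, v.size)

fit, _ = curve_fit(ph_curve, v, data, p0=(0.01, 0.01, 5.0),
                   bounds=([0, 0, 2], [0.1, 0.1, 10]))
print("fitted [strong acid], [weak acid], pKa:", np.round(fit, 3))
```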

  14. Simulation of Si:P spin-based quantum computer architecture

    International Nuclear Information System (INIS)

    Chang Yiachung; Fang Angbo

    2008-01-01

    We present realistic simulations for single and double phosphorus donors in a silicon-based quantum computer design by solving a valley-orbit coupled effective-mass equation describing phosphorus donors in a strained silicon quantum well (QW). Using a generalized unrestricted Hartree-Fock method, we solve the two-electron effective-mass equation with quantum well confinement and realistic gate potentials. The effects of QW width, gate voltages, donor separation, and donor position shift on the lowest singlet and triplet energies and their charge distributions for a neighboring donor pair in the quantum computer (QC) architecture are analyzed. The gate tunability is defined and evaluated for a typical QC design. Estimates are obtained for the duration of the spin half-swap gate operation.

  15. Reliability Assessment of Cloud Computing Platform Based on Semiquantitative Information and Evidential Reasoning

    Directory of Open Access Journals (Sweden)

    Hang Wei

    2016-01-01

    Full Text Available A reliability assessment method based on the evidential reasoning (ER) rule and semiquantitative information is proposed in this paper, where a new reliability assessment architecture including four aspects with both quantitative data and qualitative knowledge is established. The assessment architecture is more objective in describing the complex, dynamic cloud computing environment than that of traditional methods. In addition, the ER rule, which performs well on multiple attribute decision making problems, is employed to integrate the different types of attributes in the assessment architecture, which yields more accurate assessment results. The assessment results of the case study on an actual cloud computing platform verify the effectiveness and the advantages of the proposed method.

  16. Computer-based control systems of nuclear power plants

    International Nuclear Information System (INIS)

    Kalashnikov, V.K.; Shugam, R.A.; Ol'shevsky, Yu.N.

    1975-01-01

    Computer-based control systems of nuclear power plants may be classified into those using computers for data acquisition only, those using computers for data acquisition and data processing, and those using computers for process control. In the present paper a brief review is given of the functions the above-mentioned systems perform, their applications in different nuclear power plants, and some of their characteristics. The trend towards hierarchic systems using control computers with reserves is already becoming clear when one considers the control systems applied in the Canadian nuclear power plants, which were among the first equipped with process computers. The control system now under development for the large Soviet reactors of WWER type will also be based on the use of control computers. That part of the system concerned with controlling the reactor assembly is described in detail.

  17. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, a short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or a directory for the tapes. The final section of the report presents listings for the programs used to convert the data bases between machine floating-point and EBCDIC.

  18. High performance computing and quantum trajectory method in CPU and GPU systems

    International Nuclear Information System (INIS)

    Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław

    2015-01-01

    Nowadays, dynamic progress in computational techniques allows for the development of various methods which offer significant speed-up of computations, especially those related to problems of quantum optics and quantum computing. In this work, we propose computational solutions which re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments in which multi-core CPUs and modern many-core GPUs can be used. In consequence, new computational routines are developed in a more effective way than those applied in other commonly used packages, such as the Quantum Optics Toolbox (QOT) for Matlab or QuTiP for Python.
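    The abstract does not give the authors' CPU/GPU routines, but the quantum trajectory method itself can be illustrated with the QuTiP package it is benchmarked against. Below is a minimal, hedged sketch of a Monte Carlo trajectory simulation of a damped harmonic oscillator; the Hilbert-space size, decay rate and trajectory count are chosen purely for illustration.

```python
# Hedged sketch: quantum trajectory (Monte Carlo) simulation with QuTiP's mcsolve.
import numpy as np
from qutip import destroy, basis, mcsolve

N = 10                          # Fock-space truncation (assumption)
a = destroy(N)
H = a.dag() * a                 # oscillator Hamiltonian (hbar*omega = 1)
psi0 = basis(N, 5)              # start with 5 photons
kappa = 0.2                     # photon loss rate (assumption)
times = np.linspace(0.0, 20.0, 100)

result = mcsolve(H, psi0, times, c_ops=[np.sqrt(kappa) * a],
                 e_ops=[a.dag() * a], ntraj=200)
print("mean photon number at t=20:", result.expect[0][-1])
```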

  19. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...

  20. Basicities of Strong Bases in Water: A Computational Study

    OpenAIRE

    Kaupmees, Karl; Trummal, Aleksander; Leito, Ivo

    2014-01-01

    Aqueous pKa values of strong organic bases – DBU, TBD, MTBD, different phosphazene bases, etc. – were computed with CPCM, SMD and COSMO-RS approaches. Explicit solvent molecules were not used. Direct computations and computations with reference pKa values were used. The latter were of two types: (1) a reliable experimental aqueous pKa value of a reference base with structure similar to the investigated base or (2) a reliable experimental pKa value in acetonitrile of the investigated base itself. ...

  1. Performance Analysis and Application of Three Different Computational Methods for Solar Heating System with Seasonal Water Tank Heat Storage

    Directory of Open Access Journals (Sweden)

    Dongliang Sun

    2013-01-01

    Full Text Available We analyze and compare three different computational methods for a solar heating system with seasonal water tank heat storage (SHS-SWTHS). These methods are the accurate numerical method, the temperature stratification method, and the uniform temperature method. The accurate numerical method can accurately predict the performance of the system, but it takes about 4 to 5 weeks, which is too long for routine performance analysis of this system. The temperature stratification method obtains relatively accurate computational results and takes a relatively short computation time, about 2 to 3 hours. Therefore, this method is the most suitable for the performance analysis of this system. The deviation of the computational results of the uniform temperature method is large, and the time consumed is similar to that of the temperature stratification method; therefore, this method is not recommended herein. Based on the above analyses, the temperature stratification method is applied to analyze the influence of the embedded depth of the water tank, the thickness of the thermal insulation material, and the collection area on the performance of this system. The results will provide a design basis for the related demonstration projects.
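    To make the temperature stratification idea concrete, the following is a toy, hedged sketch of a layered-tank energy balance: hot collector water enters the top layer, the flow cascades downwards layer by layer, and each layer loses heat to the surrounding ground. The tank volume, flow rate, loss coefficient and temperatures are illustrative assumptions, not parameters from the study.

```python
# Hedged toy sketch: multi-layer (stratified) tank charging with ground losses.
import numpy as np

n_layers = 10
V = 300.0 / n_layers                 # m^3 per layer (300 m^3 tank, assumption)
rho, cp = 1000.0, 4186.0             # water density and heat capacity
UA = 15.0                            # loss coefficient per layer, W/K (assumption)
T = np.full(n_layers, 20.0)          # initial tank temperature, degC
T_ground, dt = 10.0, 3600.0          # ground temperature, 1 h time step

m_dot = 1.5                          # charging mass flow, kg/s (assumption)
T_collector = 60.0                   # collector outlet temperature (assumption)

for hour in range(24 * 30):          # one month of charging
    inflow = np.empty(n_layers)
    inflow[0] = T_collector          # hot water enters the top layer
    inflow[1:] = T[:-1]              # each layer feeds the one below it
    dT_flow = m_dot * cp * (inflow - T) * dt / (rho * V * cp)
    dT_loss = UA * (T_ground - T) * dt / (rho * V * cp)
    T += dT_flow + dT_loss

print("layer temperatures, top to bottom (degC):", np.round(T, 1))
```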

  2. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    Science.gov (United States)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method to solving complex multi-scale problems on Exascale machines.
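    The effect of relaxed synchronization can be illustrated with a toy, hedged sketch: a 1D explicit finite-difference heat equation split into two blocks, where each block reads a possibly stale copy of its neighbour's interface value (a random delay of up to two steps) in place of a synchronized exchange. The grid size, diffusion number and delay model are illustrative assumptions, and the sketch is serial, not a real multi-PE implementation of the asynchrony-tolerant schemes above.

```python
# Hedged toy sketch: asynchronous (stale-ghost-value) finite-difference diffusion.
import numpy as np

rng = np.random.default_rng(0)
nx, steps, alpha = 64, 400, 0.25           # alpha = diffusivity * dt / dx^2
u = np.sin(np.linspace(0, np.pi, nx))      # initial condition, u = 0 at both ends
half = nx // 2
history_left, history_right = [u[half - 1]], [u[half]]   # interface value buffers

for _ in range(steps):
    delay = rng.integers(0, 3)             # random message delay of 0-2 steps
    ghost_for_right = history_left[max(0, len(history_left) - 1 - delay)]
    ghost_for_left = history_right[max(0, len(history_right) - 1 - delay)]

    left, right = u[:half].copy(), u[half:].copy()
    # interface points use the possibly stale ghost value from the other block
    left[-1] += alpha * (ghost_for_left - 2 * u[half - 1] + u[half - 2])
    right[0] += alpha * (u[half + 1] - 2 * u[half] + ghost_for_right)
    # interior points are standard synchronous updates within each block
    left[1:-1] += alpha * (u[2:half] - 2 * u[1:half - 1] + u[:half - 2])
    right[1:-1] += alpha * (u[half + 2:] - 2 * u[half + 1:-1] + u[half:-2])

    u = np.concatenate([left, right])
    history_left.append(u[half - 1]); history_right.append(u[half])

print("max temperature after %d asynchronous steps: %.4f" % (steps, u.max()))
```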

  3. Emerging Computational Methods for the Rational Discovery of Allosteric Drugs.

    Science.gov (United States)

    Wagner, Jeffrey R; Lee, Christopher T; Durrant, Jacob D; Malmstrom, Robert D; Feher, Victoria A; Amaro, Rommie E

    2016-06-08

    Allosteric drug development holds promise for delivering medicines that are more selective and less toxic than those that target orthosteric sites. To date, the discovery of allosteric binding sites and lead compounds has been mostly serendipitous, achieved through high-throughput screening. Over the past decade, structural data has become more readily available for larger protein systems and more membrane protein classes (e.g., GPCRs and ion channels), which are common allosteric drug targets. In parallel, improved simulation methods now provide better atomistic understanding of the protein dynamics and cooperative motions that are critical to allosteric mechanisms. As a result of these advances, the field of predictive allosteric drug development is now on the cusp of a new era of rational structure-based computational methods. Here, we review algorithms that predict allosteric sites based on sequence data and molecular dynamics simulations, describe tools that assess the druggability of these pockets, and discuss how Markov state models and topology analyses provide insight into the relationship between protein dynamics and allosteric drug binding. In each section, we first provide an overview of the various method classes before describing relevant algorithms and software packages.

  4. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  5. Computational methods for 2D materials: discovery, property characterization, and application design.

    Science.gov (United States)

    Paul, J T; Singh, A K; Dong, Z; Zhuang, H; Revard, B C; Rijal, B; Ashton, M; Linscheid, A; Blonsky, M; Gluhovic, D; Guo, J; Hennig, R G

    2017-11-29

    The discovery of two-dimensional (2D) materials comes at a time when computational methods are mature and can predict novel 2D materials, characterize their properties, and guide the design of 2D materials for applications. This article reviews the recent progress in computational approaches for 2D materials research. We discuss the computational techniques and provide an overview of the ongoing research in the field. We begin with an overview of known 2D materials, common computational methods, and available cyber infrastructures. We then move onto the discovery of novel 2D materials, discussing the stability criteria for 2D materials, computational methods for structure prediction, and interactions of monolayers with electrochemical and gaseous environments. Next, we describe the computational characterization of the 2D materials' electronic, optical, magnetic, and superconducting properties and the response of the properties under applied mechanical strain and electrical fields. From there, we move on to discuss the structure and properties of defects in 2D materials, and describe methods for 2D materials device simulations. We conclude by providing an outlook on the needs and challenges for future developments in the field of computational research for 2D materials.

  6. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template-based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
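    A minimal, hedged sketch of the template-comparison idea follows: a node's checkpoint data is split into fixed-size blocks, each block's checksum is compared against the corresponding block of a previously produced template, and only the blocks that differ are retained for storage. The block size, hash choice and sample data are illustrative assumptions, not details of the patented system.

```python
# Hedged sketch: keep only the checkpoint blocks that differ from a template.
import hashlib

BLOCK = 4096

def block_checksums(data: bytes):
    return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta_checkpoint(node_data: bytes, template: bytes):
    tmpl_sums = block_checksums(template)
    delta = {}
    for idx, start in enumerate(range(0, len(node_data), BLOCK)):
        block = node_data[start:start + BLOCK]
        digest = hashlib.sha1(block).hexdigest()
        if idx >= len(tmpl_sums) or digest != tmpl_sums[idx]:
            delta[idx] = block          # only changed blocks are stored
    return delta

template = bytes(1024 * 64)                        # previously produced template
node = bytearray(template)
node[5000:5004] = b"\xde\xad\xbe\xef"              # this node diverges slightly
changed = delta_checkpoint(bytes(node), template)
print("blocks to store:", sorted(changed), "of", len(block_checksums(template)))
```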

  7. Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses

    Science.gov (United States)

    Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…

  8. Un Cours de composition francaise par ordinateur (A Computer-Based Course in French Composition).

    Science.gov (United States)

    Landes, Anne; Kaplan, Alice

    1988-01-01

    The origins, organization, and methods of a Columbia University course offering computer-based instruction in French composition are outlined, and the progress of four individual students is described. (MSE)

  9. A gradient-based method for segmenting FDG-PET images: methodology and validation

    International Nuclear Information System (INIS)

    Geets, Xavier; Lee, John A.; Gregoire, Vincent; Bol, Anne; Lonneux, Max

    2007-01-01

    A new gradient-based method for segmenting FDG-PET images is described and validated. The proposed method relies on the watershed transform and hierarchical cluster analysis. To allow a better estimation of the gradient intensity, iteratively reconstructed images were first denoised and deblurred with an edge-preserving filter and a constrained iterative deconvolution algorithm. Validation was first performed on computer-generated 3D phantoms containing spheres, then on a real cylindrical Lucite phantom containing spheres of different volumes ranging from 2.1 to 92.9 ml. Moreover, laryngeal tumours from seven patients were segmented on PET images acquired before laryngectomy by the gradient-based method and the thresholding method based on the source-to-background ratio developed by Daisne (Radiother Oncol 2003;69:247-50). For the spheres, the calculated volumes and radii were compared with the known values; for laryngeal tumours, the volumes were compared with the macroscopic specimens. Volume mismatches were also analysed. On computer-generated phantoms, the deconvolution algorithm decreased the mis-estimate of volumes and radii. For the Lucite phantom, the gradient-based method led to a slight underestimation of sphere volumes (by 10-20%), corresponding to negligible radius differences (0.5-1.1 mm); for laryngeal tumours, the segmented volumes by the gradient-based method agreed with those delineated on the macroscopic specimens, whereas the threshold-based method overestimated the true volume by 68% (p = 0.014). Lastly, macroscopic laryngeal specimens were totally encompassed by neither the threshold-based nor the gradient-based volumes. The gradient-based segmentation method applied on denoised and deblurred images proved to be more accurate than the source-to-background ratio method. (orig.)

  10. Soft computing based on hierarchical evaluation approach and criteria interdependencies for energy decision-making problems: A case study

    International Nuclear Information System (INIS)

    Gitinavard, Hossein; Mousavi, S. Meysam; Vahdani, Behnam

    2017-01-01

    In numerous real-world energy decision problems, decision makers often encounter complex environments in which imprecise data and uncertain information must be handled to reach an appropriate decision. In this paper, a new soft computing group decision-making approach is introduced based on a novel compromise ranking method and interval-valued hesitant fuzzy sets (IVHFSs) for energy decision-making problems under multiple criteria. In the proposed approach, the assessment information is provided by energy experts or decision makers based on interval-valued hesitant fuzzy elements under incomplete criteria weights. In this respect, a new ranking index is presented with respect to an interval-valued hesitant fuzzy Hamming distance measure to prioritize energy candidates, and criteria weights are computed based on an extended maximizing deviation method by considering the experts' preference judgments about the relative importance of each criterion. Also, a decision making trial and evaluation laboratory (DEMATEL) method is extended under an IVHF environment to compute the interdependencies between and within the selected criteria in the hierarchical structure. Accordingly, to demonstrate the applicability of the presented approach, a case study and a practical example are provided regarding the hierarchical structure and criteria interdependency relations for renewable energy and energy policy selection problems. The obtained computational results are compared with a fuzzy decision-making method from the recent literature, based on several comparison parameters, to show the advantages and constraints of the proposed approach. Finally, a sensitivity analysis is prepared to indicate the effects of different criteria weights on the ranking results and to present the robustness or sensitivity of the proposed soft computing approach versus the relative importance of criteria. - Highlights: • Introducing a novel interval-valued hesitant fuzzy compromise ranking method. • Presenting

  11. Computational methods assuring nuclear power plant structural integrity and safety: an overview of the recent activities at VTT

    International Nuclear Information System (INIS)

    Keinaenen, H.; Talja, H.; Rintamaa, R.

    1998-01-01

    Numerical, simplified engineering and standardised methods are applied in the safety analyses of primary circuit components and reactor pressure vessels. The integrity assessment procedures require input relating both to the steady state and transient loading, actual material properties data, and precise knowledge of the size and geometry of defects. Current procedures hold extensive information regarding these aspects. It is important to verify the accuracy of the different assessment methods, especially in the case of complex structures and loading. The focus of this paper is on the recent results and development of computational fracture assessment methods at VTT Manufacturing Technology. The methods include effective engineering-type tools for rapid structural integrity assessments and more sophisticated finite-element based methods. An integrated PC-based program system MASI for engineering fracture analysis is described. A summary of the verification of the methods in computational benchmark analyses and against the results of large scale experiments is presented. (orig.)

  12. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    Science.gov (United States)

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. The Earth Mover's Distance in particular is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large-scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution, which we coin the Modified Russell's Method.
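    The underlying transportation problem, and hence the Earth Mover's Distance between two histograms, can be stated as a small linear program. The hedged sketch below solves one with SciPy's generic LP solver rather than the revised-simplex/Shortlist machinery of the paper; the supplies, demands and ground-distance matrix are illustrative.

```python
# Hedged sketch: a tiny transportation problem / EMD solved as a linear program.
import numpy as np
from scipy.optimize import linprog

supply = np.array([0.4, 0.3, 0.3])          # source histogram
demand = np.array([0.2, 0.5, 0.3])          # target histogram
cost = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))  # |i - j| ground cost

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                           # each source ships exactly supply[i]
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                           # each sink receives exactly demand[j]
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
print("EMD (total transport cost):", round(res.fun, 4))
print("optimal flow matrix:\n", res.x.reshape(m, n).round(3))
```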

  13. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
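    The free-energy-minimization idea can be illustrated with a hedged sketch: equilibrium is found by minimizing the total Gibbs energy subject to a material (element) balance constraint, here for 1 mol of N2O4 dissociating into NO2 at 1 bar and 298 K. The standard Gibbs energies of formation are approximate literature values, and the solver settings are illustrative; this is not the program described in the article.

```python
# Hedged sketch: equilibrium by constrained minimization of the Gibbs energy.
import numpy as np
from scipy.optimize import minimize

R, T, P = 8.314, 298.15, 1.0
g0 = np.array([97890.0, 51310.0]) / (R * T)      # N2O4, NO2 (J/mol, approximate)

def gibbs(n):
    n = np.clip(n, 1e-12, None)
    # G/RT = sum n_i * (g0_i + ln(y_i * P)) for an ideal gas mixture
    return float(np.sum(n * (g0 + np.log(n * P / n.sum()))))

# material balance: 2*n_N2O4 + n_NO2 = 2 (all nitrogen comes from 1 mol N2O4)
cons = {"type": "eq", "fun": lambda n: 2 * n[0] + n[1] - 2.0}
res = minimize(gibbs, x0=[0.5, 1.0], bounds=[(1e-12, None)] * 2,
               constraints=[cons], method="SLSQP")

n_eq = res.x
alpha = n_eq[1] / 2.0                            # extent of dissociation
print("equilibrium moles (N2O4, NO2):", n_eq.round(4), " alpha =", round(alpha, 3))
```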

  14. Computational methods for ab initio detection of microRNAs

    Directory of Open Access Journals (Sweden)

    Malik eYousef

    2012-10-01

    Full Text Available MicroRNAs are small RNA sequences of 18-24 nucleotides in length, which serve as templates to drive post-transcriptional gene silencing. The canonical microRNA pathway starts with transcription from DNA and is followed by processing via the Microprocessor complex, yielding a hairpin structure, which is then exported into the cytosol where it is processed by Dicer and incorporated into the RNA-induced silencing complex. All of these biogenesis steps add to the overall specificity of miRNA production and effect. Unfortunately, their modes of action are just beginning to be elucidated, and therefore computational prediction algorithms cannot model the process but are usually forced to employ machine learning approaches. This work focuses on ab initio prediction methods throughout; therefore, homology-based miRNA detection methods are not discussed. Current ab initio prediction algorithms, their ties to data mining, and their prediction accuracy are detailed.

  15. Secure Data Access Control for Fog Computing Based on Multi-Authority Attribute-Based Signcryption with Computation Outsourcing and Attribute Revocation.

    Science.gov (United States)

    Xu, Qian; Tan, Chengxiang; Fan, Zhijie; Zhu, Wenye; Xiao, Ya; Cheng, Fujia

    2018-05-17

    Nowadays, fog computing provides computation, storage, and application services to end users in the Internet of Things. One of the major concerns in fog computing systems is how fine-grained access control can be imposed. As a logical combination of attribute-based encryption and attribute-based signature, Attribute-based Signcryption (ABSC) can provide confidentiality and anonymous authentication for sensitive data and is more efficient than the traditional "encrypt-then-sign" or "sign-then-encrypt" strategies. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and has been gaining more and more attention recently. However, in many existing ABSC systems, the computation cost required of the end users in signcryption and designcryption is linear in the complexity of the signing and encryption access policy. Moreover, only a single authority responsible for attribute management and key generation exists in previously proposed ABSC schemes, whereas in reality different authorities usually monitor different attributes of the user. In this paper, we propose OMDAC-ABSC, a novel data access control scheme based on Ciphertext-Policy ABSC, to provide data confidentiality, fine-grained control, and anonymous authentication in a multi-authority fog computing system. The signcryption and designcryption overhead for the user is significantly reduced by outsourcing the undesirable computation operations to fog nodes. The proposed scheme is proven to be secure in the standard model and can provide attribute revocation and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction balances the security goals with practical efficiency in computation.

  16. Risk-based analysis methods applied to nuclear power plant technical specifications

    International Nuclear Information System (INIS)

    Wagner, D.P.; Minton, L.A.; Gaertner, J.P.

    1989-01-01

    A computer-aided methodology and practical applications of risk-based evaluation of technical specifications are described. The methodology, developed for use by the utility industry, is a part of the overall process of improving nuclear power plant technical specifications. The SOCRATES computer program uses the results of a probabilistic risk assessment or a system-level risk analysis to calculate changes in risk due to changes in the surveillance test interval and/or the allowed outage time stated in the technical specification. The computer program can accommodate various testing strategies (such as staggered or simultaneous testing) to allow modeling of component testing as it is carried out at the plant. The methods and computer program are an integral part of a larger decision process aimed at determining benefits from technical specification changes. These benefits can include cost savings to the utilities by reducing forced shutdowns and decreasing labor requirements for test and maintenance activities, with no adverse impacts on risk. The methodology and the SOCRATES computer program have been used extensively to evaluate several actual technical specifications in case studies demonstrating the methods. Summaries of these applications demonstrate the types of results achieved and the usefulness of the risk-based evaluation in improving the technical specifications.

  17. Constant Jacobian Matrix-Based Stochastic Galerkin Method for Probabilistic Load Flow

    Directory of Open Access Journals (Sweden)

    Yingyun Sun

    2016-03-01

    Full Text Available An intrusive spectral method for probabilistic load flow (PLF) is proposed in this paper, which can handle the uncertainties arising from renewable energy integration. Generalized polynomial chaos (gPC) expansions of the dependent random variables are utilized to build a spectral stochastic representation of the PLF model. Instead of solving the coupled PLF model with a traditional, cumbersome method, a modified stochastic Galerkin (SG) method is proposed based on the P-Q decoupling properties of load flow in power systems. By introducing two pre-calculated constant sparse Jacobian matrices, the computational burden of the SG method is significantly reduced. Two cases, the IEEE 14-bus and IEEE 118-bus systems, are used to verify the computation speed and efficiency of the proposed method.
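    The gPC building block behind such methods can be sketched in a hedged form: a nonlinear response of a standard-normal input is projected onto probabilists' Hermite polynomials, giving the expansion coefficients and hence the mean and variance. Note that this is a non-intrusive projection, not the intrusive stochastic Galerkin system of the paper, and the response function is an illustrative stand-in for a PLF quantity such as a bus voltage.

```python
# Hedged sketch: gPC (Hermite) expansion of a response of a Gaussian input.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def response(x):
    return np.exp(0.3 * x) + 0.1 * x**2          # assumed nonlinear mapping

order, nquad = 6, 40
nodes, weights = hermegauss(nquad)               # weight exp(-x^2/2)
weights = weights / sqrt(2.0 * pi)               # normalize to the Gaussian pdf

coeffs = np.zeros(order + 1)
for k in range(order + 1):
    basis = np.zeros(k + 1); basis[k] = 1.0
    He_k = hermeval(nodes, basis)                # k-th probabilists' Hermite polynomial
    coeffs[k] = np.sum(weights * response(nodes) * He_k) / factorial(k)

mean = coeffs[0]
var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print("gPC mean ~ %.4f, variance ~ %.4f" % (mean, var))
```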

  18. Computer-aided method for recognition of proton track in nuclear emulsion

    International Nuclear Information System (INIS)

    Ruan Jinlu; Li Hongyun; Song Jiwen; Zhang Jianfu; Chen Liang; Zhang Zhongbing; Liu Jinliang

    2014-01-01

    In order to overcome the shortcomings of the manual method for proton-recoil track recognition in nuclear emulsions, a computer-aided track recognition method was studied. In this method, image sequences captured by a microscope system were processed through image convolution with composite filters, binarization with multiple thresholds, clustering of track grains and removal of redundant grains to recognize the track grains in the image sequences. The proton-recoil tracks were then reconstructed from the recognized track grains through track reconstruction. The proton-recoil tracks in a nuclear emulsion irradiated by a neutron beam at an energy of 14.9 MeV were recognized by the computer-aided method. The results show that the proton-recoil tracks reconstructed by this method agree well with those reconstructed by the manual method. This computer-aided track recognition method lays an important technical foundation for the development of an automatic proton-recoil track recognition system and for applications of nuclear emulsions in pulsed neutron spectrum measurement. (authors)
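    The grain-recognition pipeline described above (convolution, thresholding, clustering) can be sketched in a hedged, toy form: a synthetic microscope frame is convolved with a small averaging filter standing in for the composite filter, binarized with a single threshold, and the surviving pixels are clustered into candidate grains by connected-component labelling. The frame, filter and threshold are illustrative assumptions.

```python
# Hedged sketch: filter -> threshold -> cluster candidate track grains.
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(2)
frame = rng.normal(0.1, 0.02, (256, 256))                  # background noise
for x, y in [(40, 50), (80, 90), (120, 130), (160, 170)]:  # grains along a track
    frame[y - 2:y + 3, x - 2:x + 3] += 0.5

kernel = np.ones((3, 3)) / 9.0                 # simple stand-in for a composite filter
smoothed = ndi.convolve(frame, kernel)

binary = smoothed > 0.3                        # binarization threshold (assumption)
labels, n_grains = ndi.label(binary)           # cluster pixels into grains
centroids = ndi.center_of_mass(binary, labels, range(1, n_grains + 1))

print("candidate track grains found:", n_grains)
print("grain centroids (row, col):", [tuple(np.round(c, 1)) for c in centroids])
```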

  19. GPU-based acceleration of computations in nonlinear finite element deformation analysis.

    Science.gov (United States)

    Mafi, Ramin; Sirouspour, Shahin

    2014-03-01

    The physics of deformation for biological soft tissue is best described by nonlinear continuum mechanics-based models, which can then be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphics processing unit-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis: it is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make them particularly suitable for implementation on a parallel computing platform such as a graphics processing unit. In this work, we present and compare two different designs, based on the matrix-free and conventional preconditioned conjugate gradients algorithms, for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.

  20. A comparative analysis on computational methods for fitting an ERGM to biological network data

    Directory of Open Access Journals (Sweden)

    Sudipta Saha

    2015-03-01

    Full Text Available Exponential random graph models (ERGMs) based on graph theory are useful in studying global biological network structure using its local properties. However, computational methods for fitting such models are sensitive to the type, structure and number of the local features of a network under study. In this paper, we compared computational methods for fitting an ERGM with local features of different types and structures. Two commonly used methods, Markov Chain Monte Carlo Maximum Likelihood Estimation and Maximum Pseudo-Likelihood Estimation, are considered for estimating the coefficients of network attributes. We compared the estimates for an observed network to those for a randomly simulated network using both methods under the ERGM. The motivation was to ascertain the extent to which an observed network would deviate from a randomly simulated network if the physical numbers of attributes were approximately the same. Cut-off points for some common attributes of interest, for different orders of nodes, were determined through simulations. We applied our method to a known regulatory network database of Escherichia coli (E. coli).
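    Of the two fitting strategies compared above, the pseudo-likelihood one is the easier to sketch: for a two-statistic ERGM (edges and triangles), each dyad contributes one observation whose predictors are the change statistics obtained by toggling that dyad, and the ERGM coefficients are recovered by logistic regression. The random graph below is an illustrative stand-in for a real regulatory network, and this hedged sketch omits all of the MCMC-MLE machinery.

```python
# Hedged sketch: Maximum Pseudo-Likelihood Estimation for an edges+triangles ERGM.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 40
A = (rng.random((n, n)) < 0.1).astype(int)
A = np.triu(A, 1); A = A + A.T                     # symmetric adjacency, no self-loops

X, y = [], []
for i in range(n):
    for j in range(i + 1, n):
        d_edges = 1.0                              # adding (i, j) adds one edge
        d_tri = float(np.sum(A[i] * A[j]))         # ...and one triangle per common neighbour
        X.append([d_edges, d_tri])
        y.append(A[i, j])

mple = LogisticRegression(fit_intercept=False, C=1e6).fit(np.array(X), np.array(y))
theta_edges, theta_triangles = mple.coef_[0]
print("MPLE estimates  theta_edges=%.3f  theta_triangles=%.3f"
      % (theta_edges, theta_triangles))
```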