WorldWideScience

Sample records for previously developed algorithm

  1. Technical Note: A novel leaf sequencing optimization algorithm which considers previous underdose and overdose events for MLC tracking radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, Eric, E-mail: eric.wisotzky@charite.de, E-mail: eric.wisotzky@ipk.fraunhofer.de; O’Brien, Ricky; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006 (Australia)

    2016-01-15

    Purpose: Multileaf collimator (MLC) tracking radiotherapy is complex as the beam pattern needs to be modified due to the planned intensity modulation as well as the real-time target motion. The target motion cannot be planned; therefore, the modified beam pattern differs from the original plan and the MLC sequence needs to be recomputed online. Current MLC tracking algorithms use a greedy heuristic in that they optimize for a given time, but ignore past errors. To overcome this problem, the authors have developed and improved an algorithm that minimizes large underdose and overdose regions. Additionally, previous underdose and overdose events are taken into account to avoid regions with a high number of dose events. Methods: The authors improved the existing MLC motion control algorithm by introducing a cumulative underdose/overdose map. This map represents the actual projection of the planned tumor shape and logs dose events occurring at specific regions. These events have an impact on the dose cost calculation and reduce the recurrence of dose events at each region. The authors studied the improvement of the new temporal optimization algorithm in terms of the L1-norm minimization of the sum of overdose and underdose compared to not accounting for previous dose events. For evaluation, the authors simulated the delivery of 5 conformal and 14 intensity-modulated radiotherapy (IMRT) plans with 7 patient-measured 3D tumor motion traces. Results: Simulations with conformal shapes showed an improvement of the L1-norm of up to 8.5% after 100 MLC modification steps. Experiments showed comparable improvements with the same type of treatment plans. Conclusions: A novel leaf sequencing optimization algorithm which considers previous dose events for MLC tracking radiotherapy has been developed and investigated. Reductions in underdose/overdose are observed for conformal and IMRT delivery.
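
    The core bookkeeping idea described above can be illustrated with a small, hedged sketch: keep a cumulative map of underdose/overdose events over the fluence grid and let it bias the cost used to pick each leaf-pair opening. The row-wise leaf model, the cost weighting, and names such as choose_leaf_positions are illustrative assumptions, not the authors' clinical implementation.

      import numpy as np

      def choose_leaf_positions(target_row, cum_row, weight=1.0):
          """Pick the open interval [a, b) for one MLC leaf pair that minimises
          the summed under/overdose cost, biased by the cumulative event map."""
          n = len(target_row)
          best, best_cost = (0, 0), np.inf
          for a in range(n + 1):
              for b in range(a, n + 1):
                  open_mask = np.zeros(n, dtype=bool)
                  open_mask[a:b] = True
                  under = target_row & ~open_mask      # planned open, leaf closed
                  over = ~target_row & open_mask       # planned closed, leaf open
                  cost = np.sum((1.0 + weight * cum_row) * (under | over))
                  if cost < best_cost:
                      best_cost, best = cost, (a, b)
          return best

      def track_step(target, cum_map, weight=1.0):
          """One MLC modification step: fit each leaf row, then log dose events."""
          apertures = []
          for r in range(target.shape[0]):
              a, b = choose_leaf_positions(target[r], cum_map[r], weight)
              open_mask = np.zeros(target.shape[1], dtype=bool)
              open_mask[a:b] = True
              cum_map[r] += (target[r] != open_mask)   # accumulate under/overdose events
              apertures.append((a, b))
          return apertures, cum_map

      # toy example: a circular target projection on an 8 x 8 fluence grid
      yy, xx = np.mgrid[0:8, 0:8]
      target = (xx - 4) ** 2 + (yy - 3) ** 2 <= 6
      cum_map = np.zeros_like(target, dtype=float)
      apertures, cum_map = track_step(target, cum_map)
      print(apertures)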

  2. Developing an Enhanced Lightning Jump Algorithm for Operational Use

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Overall Goals: 1. Build on the lightning jump framework set through previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). Four lightning jump algorithm configurations were developed (2σ, 3σ, Threshold 10 and Threshold 8). Five algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2σ algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59% and an HSS of 75%.
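
    The verification statistics quoted above (POD, FAR, CSI, HSS) are standard 2x2 contingency-table skill scores. A minimal sketch of how they are computed follows; the hit/miss/false-alarm/correct-null counts are made up for illustration and are not the study's data.

      def skill_scores(hits, misses, false_alarms, correct_nulls):
          """Standard 2x2 contingency-table verification scores."""
          pod = hits / (hits + misses)                    # probability of detection
          far = false_alarms / (hits + false_alarms)      # false alarm ratio
          csi = hits / (hits + misses + false_alarms)     # critical success index
          n = hits + misses + false_alarms + correct_nulls
          expected = ((hits + misses) * (hits + false_alarms)
                      + (correct_nulls + misses) * (correct_nulls + false_alarms)) / n
          hss = (hits + correct_nulls - expected) / (n - expected)  # Heidke skill score
          return pod, far, csi, hss

      # made-up counts for illustration only
      print(skill_scores(hits=33, misses=5, false_alarms=18, correct_nulls=40))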

  3. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  4. To develop a universal gamut mapping algorithm

    International Nuclear Information System (INIS)

    Morovic, J.

    1998-10-01

    When a colour image from one colour reproduction medium (e.g. nature, a monitor) needs to be reproduced on another (e.g. on a monitor or in print) and these media have different colour ranges (gamuts), it is necessary to have a method for mapping between them. If such a gamut mapping algorithm can be used under a wide range of conditions, it can also be incorporated in an automated colour reproduction system and considered to be in some sense universal. In terms of preliminary work, a colour reproduction system was implemented, for which a new printer characterisation model (including grey-scale correction) was developed. Methods were also developed for calculating gamut boundary descriptors and for calculating gamut boundaries along given lines from them. The gamut mapping solution proposed in this thesis is a gamut compression algorithm developed with the aim of being accurate and universally applicable. It was arrived at by way of an evolutionary gamut mapping development strategy for the purposes of which five test images were reproduced between a CRT and printed media obtained using an inkjet printer. Initially, a number of previously published algorithms were chosen and psychophysically evaluated whereby an important characteristic of this evaluation was that it also considered the performance of algorithms for individual colour regions within the test images used. New algorithms were then developed on their basis, subsequently evaluated and this process was repeated once more. In this series of experiments the new GCUSP algorithm, which consists of a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp on the lightness axis, gave the most accurate and stable performance overall. The results of these experiments were also useful for improving the understanding of some gamut mapping factors - in particular gamut difference. In addition to looking at accuracy, the pleasantness of reproductions obtained

  5. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    Science.gov (United States)

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-04-01

    Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for the determination of breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone.
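
    The dilution-compensation idea can be sketched as follows: scale each measured alcohol sample by the ratio of a nominal end-expiratory CO2 level to the measured CO2, then average over samples. The nominal CO2 constant and the sensor readings below are illustrative assumptions, not the paper's calibration.

      import numpy as np

      NOMINAL_CO2 = 4.8  # assumed end-expiratory CO2 concentration in percent (illustrative)

      def estimate_brac(alcohol_signal, co2_signal):
          """Estimate breath alcohol concentration from a diluted breath sample
          by compensating with the simultaneously measured CO2 signal."""
          alcohol = np.asarray(alcohol_signal, dtype=float)
          co2 = np.asarray(co2_signal, dtype=float)
          per_sample = alcohol * (NOMINAL_CO2 / co2)   # undo dilution sample by sample
          return per_sample.mean()                     # signal averaging reduces random error

      # illustrative readings from a highly diluted sample
      print(estimate_brac([0.021, 0.019, 0.022], [0.9, 0.8, 1.0]))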

  6. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    Science.gov (United States)

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background: Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods: We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis, and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results: After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement, and the comparison favored the machine learning algorithm (p=0.047). Conclusion: Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high risk for developing HCC. PMID:24169273
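
    A minimal sketch of the kind of comparison described (conventional logistic regression versus a machine-learning classifier, scored by the c-statistic, i.e. the ROC AUC) is shown below using scikit-learn on synthetic data; the features and outcomes are placeholders, not the UM or HALT-C cohorts.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # synthetic stand-in for baseline clinical features and an HCC outcome
      X, y = make_classification(n_samples=442, n_features=10, weights=[0.9], random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

      for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                          ("random forest", RandomForestClassifier(random_state=0))]:
          model.fit(X_train, y_train)
          auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
          print(f"{name}: c-statistic = {auc:.2f}")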

  7. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
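
    A minimal sketch of discrete-wavelet-transform image fusion in the spirit described: keep the approximation coefficients of the (upsampled) multispectral band and inject the detail coefficients of the panchromatic image. PyWavelets is used here as an assumed stand-in for the report's implementation, and the toy arrays stand in for co-registered imagery.

      import numpy as np
      import pywt

      def dwt_fuse(ms_band, pan):
          """Fuse an upsampled multispectral band with a panchromatic image:
          keep the MS approximation, inject the PAN detail coefficients."""
          ca_ms, _ = pywt.dwt2(ms_band, "haar")
          _, details_pan = pywt.dwt2(pan, "haar")
          return pywt.idwt2((ca_ms, details_pan), "haar")

      # toy 8x8 images standing in for co-registered MS and PAN data
      rng = np.random.default_rng(0)
      ms = rng.random((8, 8))
      pan = rng.random((8, 8))
      print(dwt_fuse(ms, pan).shape)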

  8. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    Science.gov (United States)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to be able to access. The current code works in the Unity game engine which does have cross platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  9. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    The Blowfish algorithm is a block cipher: a strong, simple algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the new generation process requires relatively little time and reasonably low memory, which enhances the algorithm and gives it the possibility of different usage.

  10. Development of real-time plasma analysis and control algorithms for the TCV tokamak using SIMULINK

    International Nuclear Information System (INIS)

    Felici, F.; Le, H.B.; Paley, J.I.; Duval, B.P.; Coda, S.; Moret, J.-M.; Bortolon, A.; Federspiel, L.; Goodman, T.P.; Hommen, G.; Karpushov, A.; Piras, F.; Pitzschke, A.; Romero, J.; Sevillano, G.; Sauter, O.; Vijvers, W.

    2014-01-01

    Highlights: • A new digital control system for the TCV tokamak has been commissioned. • The system is entirely programmable by SIMULINK, allowing rapid algorithm development. • Different control system nodes can run different algorithms at varying sampling times. • The previous control system functions have been emulated and improved. • New capabilities include MHD control, profile control, equilibrium reconstruction. - Abstract: One of the key features of the new digital plasma control system installed on the TCV tokamak is the possibility to rapidly design, test and deploy real-time algorithms. With this flexibility the new control system has been used for a large number of new experiments which exploit TCV's powerful actuators consisting of 16 individually controllable poloidal field coils and 7 real-time steerable electron cyclotron (EC) launchers. The system has been used for various applications, ranging from event-based real-time MHD control to real-time current diffusion simulations. These advances have propelled real-time control to one of the cornerstones of the TCV experimental program. Use of the SIMULINK graphical programming language to directly program the control system has greatly facilitated algorithm development and allowed a multitude of different algorithms to be deployed in a short time. This paper will give an overview of the developed algorithms and their application in physics experiments

  11. DIDACTIC TOOLS FOR THE STUDENTS’ ALGORITHMIC THINKING DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. P. Pushkaryeva

    2017-01-01

    Introduction. Modern engineers must possess a high potential of cognitive abilities, in particular algorithmic thinking (AT). In this regard, the training of future experts (university graduates of technical specialities) has to provide knowledge of the principles and ways of designing various algorithms, and the ability to analyze them and to choose the most optimal variants for implementing engineering activity. For the full formation of AT skills it is necessary to consider all channels of psychological perception and cognitive processing of educational information: visual, auditory, and kinesthetic. The aim of the present research is to provide the theoretical basis for the design, development and use of resources for the successful development of AT during the educational process of training in programming. Methodology and research methods. The methodology of the research involves the basic theses of cognitive psychology and the information approach to organizing the educational process. The research used the following methods: analysis; modeling of cognitive processes; designing training tools that take into account the mentality and peculiarities of information perception; and diagnostics of the efficiency of the didactic tools. Results. A three-level model for training future engineers in programming, aimed at the development of AT skills, was developed. The model includes three components: aesthetic, simulative, and conceptual. Stages of mastering a new discipline are allocated. It is proved that for the development of AT skills when training in programming it is necessary to use kinesthetic tools at the stage of mental algorithmic map formation, and algorithmic animation and algorithmic mental maps at the stage of algorithmic model and conceptual image formation. Kinesthetic tools for the development of students’ AT skills when training in algorithmization and programming were designed. The use of kinesthetic training simulators in the educational process provides effective development of an algorithmic style of

  12. An ATR architecture for algorithm development and testing

    Science.gov (United States)

    Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym

    2013-05-01

    A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.
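
    The module-based design can be sketched in a few lines: every stage implements a common interface, so a segmentation algorithm can be swapped without touching the rest of the pipeline. The class names and the two threshold-based segmenters below are illustrative assumptions, not FFI's C++ framework.

      import numpy as np

      class Module:
          """Common interface for exchangeable pipeline stages."""
          def process(self, frame):
              raise NotImplementedError

      class GlobalThresholdSegmenter(Module):
          def process(self, frame):
              return frame > frame.mean()

      class AdaptiveThresholdSegmenter(Module):
          def process(self, frame):
              return frame > (frame.mean() + frame.std())

      class Pipeline(Module):
          def __init__(self, *stages):
              self.stages = stages
          def process(self, frame):
              for stage in self.stages:
                  frame = stage.process(frame)
              return frame

      # the same pipeline runs with either segmentation module plugged in
      frame = np.random.default_rng(1).random((4, 4))
      for segmenter in (GlobalThresholdSegmenter(), AdaptiveThresholdSegmenter()):
          print(type(segmenter).__name__, Pipeline(segmenter).process(frame).sum())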

  13. A formal analysis of a dynamic distributed spanning tree algorithm

    NARCIS (Netherlands)

    Mooij, A.J.; Wesselink, J.W.

    2003-01-01

    Abstract. We analyze the spanning tree algorithm in the IEEE 1394.1 draft standard, whose correctness has not previously been proved. This algorithm is a fully-dynamic distributed graph algorithm, which, in general, is hard to develop. The approach we use is to formally develop an algorithm that is

  14. Critical function monitoring system algorithm development

    International Nuclear Information System (INIS)

    Harmon, D.L.

    1984-01-01

    Accurate critical function status information is a key to operator decision-making during events threatening nuclear power plant safety. The Critical Function Monitoring System provides continuous critical function status monitoring by use of algorithms which mathematically represent the processes by which an operating staff would determine critical function status. This paper discusses in detail the systematic design methodology employed to develop adequate Critical Function Monitoring System algorithms

  15. Applied economic model development algorithm for electronics company

    Directory of Open Access Journals (Sweden)

    Mikhailov I.

    2017-01-01

    The purpose of this paper is to report on experience gained in creating methods and algorithms that help to simplify the development of applied decision support systems. It reports on an algorithm that is the result of two years of research and more than one year of practical verification. When testing electronic components, the time of contract conclusion is the crucial point at which the greatest managerial mistakes can be made. At this stage, it is difficult to achieve a realistic assessment of the time limit and wage fund for future work. Creating an estimating model is a possible way to solve this problem. The article presents an algorithm for the creation of such models, based on the example of developing an analytical model that serves for estimating the amount of work. The paper lists the algorithm’s stages and explains their meaning together with the participants’ goals. Implementation of the algorithm has made possible a twofold acceleration of the development of these models and fulfilment of management’s requirements. The resulting models have had a significant economic effect. A new set of tasks was identified for further theoretical study.

  16. Development of Educational Support System for Algorithm using Flowchart

    Science.gov (United States)

    Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta

    Recently, information technology has become indispensable for business and industrial development. However, it has been a social problem that the number of software developers is insufficient. To solve this problem, it is necessary to develop and implement an environment for learning algorithms and programming languages. In this paper, we describe an algorithm study support system for programmers using flowcharts. Since the proposed system uses a Graphical User Interface (GUI), it becomes easy for a programmer to understand the algorithms in programs.

  17. Development of a Novel Locomotion Algorithm for Snake Robot

    International Nuclear Information System (INIS)

    Khan, Raisuddin; Billah, Md Masum; Watanabe, Mitsuru; Shafie, A A

    2013-01-01

    A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine locomotion is one of the best-known gaits for snake robots in disaster recovery missions that require navigating narrow spaces. Several gaits for snake navigation, such as concertina or rectilinear, may be suitable for narrow spaces but are highly inefficient if the same type of locomotion is used in open spaces, where the resulting friction reduction makes movement difficult for the snake. A novel locomotion algorithm has been proposed based on modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic scales on the underside of the snake body. The snake robot is able to navigate in narrow spaces using this developed locomotion algorithm. The developed algorithm overcomes the limitations of the other gaits in narrow-space navigation.

  18. Development of a parallel genetic algorithm using MPI and its application in a nuclear reactor core. Design optimization

    International Nuclear Information System (INIS)

    Waintraub, Marcel; Pereira, Claudio M.N.A.; Baptista, Rafael P.

    2005-01-01

    This work presents the development of a distributed parallel genetic algorithm applied to a nuclear reactor core design optimization. In the implementation of the parallelism, a 'Message Passing Interface' (MPI) library, standard for parallel computation in distributed memory platforms, has been used. Another important characteristic of MPI is its portability for various architectures. The main objectives of this paper are: validation of the results obtained by the application of this algorithm in a nuclear reactor core optimization problem, through comparisons with previous results presented by Pereira et al.; and performance test of the Brazilian Nuclear Engineering Institute (IEN) cluster in reactor physics optimization problems. The experiments demonstrated that the developed parallel genetic algorithm using the MPI library presented significant gains in the obtained results and an accentuated reduction of the processing time. Such results ratify the use of the parallel genetic algorithms for the solution of nuclear reactor core optimization problems. (author)
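
    A minimal master/worker sketch of distributing genetic-algorithm fitness evaluation with MPI (here via mpi4py) is shown below; the toy objective function and real-valued chromosomes are placeholders, not the reactor-core loading problem or the authors' operators.

      # run with, e.g.: mpiexec -n 4 python ga_mpi_sketch.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      def fitness(candidate):
          # placeholder objective; the real problem would score a reactor core loading
          return -np.sum((candidate - 0.5) ** 2)

      POP, GENES = 32, 8
      rng = np.random.default_rng(0)
      population = rng.random((POP, GENES)) if rank == 0 else None

      for generation in range(10):
          # master scatters equal slices of the population; all ranks evaluate locally
          chunk = comm.scatter(np.array_split(population, size) if rank == 0 else None, root=0)
          scores = [fitness(ind) for ind in chunk]
          all_scores = comm.gather(scores, root=0)
          if rank == 0:
              flat = np.concatenate(all_scores)
              parents = population[np.argsort(flat)[::-1][: POP // 2]]   # keep the best half
              pairs = rng.integers(0, len(parents), size=(POP, 2))
              cut = GENES // 2
              children = np.hstack([parents[pairs[:, 0], :cut],          # one-point crossover
                                    parents[pairs[:, 1], cut:]])
              population = children + rng.normal(0.0, 0.02, children.shape)  # mutation
              print(f"generation {generation}: best fitness {flat.max():.4f}")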

  19. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

    Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of the lack of sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put in a lot of effort to construct it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. Allowing users to select eight existing performance indices and 15

  20. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed

  1. B&W PWR advanced control system algorithm development

    International Nuclear Information System (INIS)

    Winks, R.W.; Wilson, T.L.; Amick, M.

    1992-01-01

    This paper discusses algorithm development of an Advanced Control System for the B&W Pressurized Water Reactor (PWR) nuclear power plant. The paper summarizes the history of the project, describes the operation of the algorithm, and presents transient results from a simulation of the plant and control system. The history discusses the steps in the development process and the roles played by the utility owners, B&W Nuclear Service Company (BWNS), Oak Ridge National Laboratory (ORNL), and the Foxboro Company. The algorithm description is a brief overview of the features of the control system. The transient results show the operation of the algorithm in a normal power maneuvering mode and in a moderately large upset following a feedwater pump trip.

  2. Column Reduction of Polynomial Matrices; Some Remarks on the Algorithm of Wolovich

    NARCIS (Netherlands)

    Praagman, C.

    1996-01-01

    Recently an algorithm has been developed for column reduction of polynomial matrices. In a previous report the authors described a Fortran implementation of this algorithm. In this paper we compare the results of that implementation with an implementation of the algorithm originally developed by Wolovich.

  3. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    In fields which require finding the most appropriate value, optimization has become a vital approach for employing effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to form an idea of the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus, 11th Edition, and the obtained results have been compared with classical optimization solutions.

  4. Derivation of a regional active-optical reflectance sensor corn algorithm

    Science.gov (United States)

    Active-optical reflectance sensor (AORS) algorithms developed for in-season corn (Zea mays L.) N management have traditionally been derived using sub-regional scale information. However, studies have shown these previously developed AORS algorithms are not consistently accurate when used on a region...

  5. Development and Application of a Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.

  6. Battery algorithm verification and development using hardware-in-the-loop testing

    Science.gov (United States)

    He, Yongsheng; Liu, Wei; Koch, Brain J.

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.

  7. Battery algorithm verification and development using hardware-in-the-loop testing

    Energy Technology Data Exchange (ETDEWEB)

    He, Yongsheng [General Motors Global Research and Development, 30500 Mound Road, MC 480-106-252, Warren, MI 48090 (United States); Liu, Wei; Koch, Brain J. [General Motors Global Vehicle Engineering, Warren, MI 48090 (United States)

    2010-05-01

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs. (author)

  8. Development of radio frequency interference detection algorithms for passive microwave remote sensing

    Science.gov (United States)

    Misra, Sidharth

    Radio Frequency Interference (RFI) signals are man-made sources that are increasingly plaguing passive microwave remote sensing measurements. RFI is of insidious nature, with some signals low power enough to go undetected but large enough to impact science measurements and their results. With the launch of the European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite in November 2009 and the upcoming launches of the new NASA sea-surface salinity measuring Aquarius mission in June 2011 and soil-moisture measuring Soil Moisture Active Passive (SMAP) mission around 2015, active steps are being taken to detect and mitigate RFI at L-band. An RFI detection algorithm was designed for the Aquarius mission. The algorithm performance was analyzed using kurtosis-based RFI ground-truth. The algorithm has been developed with several adjustable location-dependent parameters to control the detection statistics (false-alarm rate and probability of detection). The kurtosis statistical detection algorithm has been compared with the Aquarius pulse detection method. The comparative study determines the feasibility of the kurtosis detector for the SMAP radiometer, as a primary RFI detection algorithm in terms of detectability and data bandwidth. The kurtosis algorithm has superior detection capabilities for low duty-cycle radar-like pulses, which are more prevalent according to analysis of field campaign data. Most RFI algorithms developed have generally been optimized for performance with individual pulsed-sinusoidal RFI sources. A new RFI detection model is developed that takes into account multiple RFI sources within an antenna footprint. The performance of the kurtosis detection algorithm under such central-limit conditions is evaluated. The SMOS mission has a unique hardware system, and conventional RFI detection techniques cannot be applied. Instead, an RFI detection algorithm for SMOS is developed and applied in the angular domain. This algorithm compares
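
    The kurtosis test rests on the fact that thermal noise is Gaussian, with a kurtosis of 3, while pulsed RFI pushes the kurtosis away from 3. A small sketch of flagging blocks whose sample kurtosis deviates beyond a threshold follows; the block length, threshold, and synthetic data are illustrative, not the Aquarius or SMAP detector settings.

      import numpy as np

      def kurtosis(x):
          """Fourth standardized moment; equals 3 for Gaussian (RFI-free) noise."""
          x = np.asarray(x, dtype=float)
          m = x.mean()
          s2 = np.mean((x - m) ** 2)
          return np.mean((x - m) ** 4) / s2 ** 2

      def flag_rfi(samples, block=256, threshold=1.0):
          """Flag blocks whose kurtosis deviates from 3 by more than `threshold`."""
          flags = []
          for i in range(0, len(samples) - block + 1, block):
              flags.append(abs(kurtosis(samples[i:i + block]) - 3.0) > threshold)
          return np.array(flags)

      rng = np.random.default_rng(0)
      clean = rng.normal(size=1024)                    # thermal-noise-like voltages
      rfi = clean.copy()
      rfi[500:520] += 6.0 * np.sin(np.arange(20))      # low duty-cycle pulsed interference
      print(flag_rfi(clean).sum(), flag_rfi(rfi).sum())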

  9. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP-algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.

  10. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
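
    A toy simulation in the spirit of the one-bit broadcast model can make the idea concrete: in each round every remaining contender broadcasts a single bit with some probability, nodes only sense silence versus the arrival of one or more messages, and a node that broadcasts while all others stay silent becomes the leader. The fixed broadcast probability and the elimination rule below are illustrative assumptions, not the paper's exact algorithm or analysis.

      import numpy as np

      def elect_leader(n, p=0.5, max_rounds=10_000, seed=0):
          """Toy simulation of one-bit leader election in a complete network."""
          rng = np.random.default_rng(seed)
          contenders = np.arange(n)
          for round_no in range(1, max_rounds + 1):
              broadcasting = contenders[rng.random(len(contenders)) < p]
              if len(broadcasting) == 1:
                  # the sole broadcaster hears silence from everyone else and wins
                  return int(broadcasting[0]), round_no
              if len(broadcasting) > 1:
                  # nodes that broadcast and also heard a message stay in the race;
                  # silent nodes that heard a message drop out
                  contenders = broadcasting
          raise RuntimeError("no leader elected within max_rounds")

      leader, rounds = elect_leader(64)
      print(f"node {leader} elected after {rounds} rounds")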

  11. Development of morphing algorithms for Histfactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, Histfactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses the information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.

  12. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    Science.gov (United States)

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Unit (GPU). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, it needs to find several nearest neighboring data points for each interpolated point to adaptively determine the power parameter; and then the desired prediction value of the interpolated point is obtained by weighted interpolating using the power parameter. In this work, we develop a fast kNN search approach based on the space-partitioning data structure, even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolating. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
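
    A compact CPU sketch of the adaptive IDW idea: find the k nearest data points for each interpolated point, set the power parameter from the local point density, then interpolate with inverse-distance weights. SciPy's cKDTree stands in here for the paper's GPU even-grid kNN search, and the density-to-power mapping is an assumed placeholder.

      import numpy as np
      from scipy.spatial import cKDTree

      def adaptive_idw(data_xy, data_z, query_xy, k=8):
          """Adaptive inverse-distance weighting: the power parameter is adjusted
          from the local sample density around each interpolated point."""
          tree = cKDTree(data_xy)
          dist, idx = tree.query(query_xy, k=k)                 # kNN search (stand-in for even grid)
          mean_knn = dist.mean(axis=1, keepdims=True)
          alpha = np.clip(mean_knn.mean() / mean_knn, 0.5, 3.0) # assumed density-to-power mapping
          w = 1.0 / np.maximum(dist, 1e-12) ** alpha
          w /= w.sum(axis=1, keepdims=True)
          return (w * data_z[idx]).sum(axis=1)

      rng = np.random.default_rng(0)
      pts = rng.random((500, 2))
      vals = np.sin(pts[:, 0] * 6) + pts[:, 1]
      queries = rng.random((5, 2))
      print(adaptive_idw(pts, vals, queries))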

  13. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
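
    The sketch below is not the record's implicit ADI finite-volume scheme; it is a minimal explicit 1-D illustration of marching Maxwell's curl equations in the time domain with central differences on a staggered grid (normalized units), included only to make the time-domain finite-difference idea concrete.

      import numpy as np

      # 1-D free-space FDTD: Ez and Hy staggered in space and time, normalized units
      n, steps = 200, 300
      ez = np.zeros(n)
      hy = np.zeros(n - 1)
      courant = 0.5                        # dt * c / dx, kept below 1 for stability

      for t in range(steps):
          hy += courant * (ez[1:] - ez[:-1])             # update H from the curl of E
          ez[1:-1] += courant * (hy[1:] - hy[:-1])       # update E from the curl of H
          ez[n // 4] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source

      print(float(ez.max()), float(ez.min()))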

  14. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  15. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images, with these representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for carrying out a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  16. Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics

    Science.gov (United States)

    Fijany, Amir; Scheid, Robert E.

    1989-01-01

    The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
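
    The diagonal preconditioning described above is the standard Jacobi preconditioner. A small generic sketch of a preconditioned conjugate gradient solve, with the diagonal of the matrix as preconditioner, follows; the random symmetric positive-definite matrix stands in for an actual manipulator inertia matrix.

      import numpy as np

      def preconditioned_cg(A, b, tol=1e-10, max_iter=200):
          """Preconditioned conjugate gradient with the diagonal of A (Jacobi) as
          preconditioner, as suggested for the inertia matrix in forward dynamics."""
          M_inv = 1.0 / np.diag(A)
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # random symmetric positive-definite stand-in for a manipulator inertia matrix
      rng = np.random.default_rng(0)
      Q = rng.random((7, 7))
      A = Q @ Q.T + 7 * np.eye(7)
      b = rng.random(7)                   # stand-in for joint torques minus bias forces
      print(np.allclose(A @ preconditioned_cg(A, b), b))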

  17. Development and Comparative Study of Effects of Training Algorithms on Performance of Artificial Neural Network Based Analog and Digital Automatic Modulation Recognition

    Directory of Open Access Journals (Sweden)

    Jide Julius Popoola

    2015-11-01

    This paper proposes two new classifiers that automatically recognise twelve combined analog and digital modulated signals without any a priori knowledge of the modulation schemes and the modulation parameters. The classifiers are developed using a pattern recognition approach. Feature keys extracted from the instantaneous amplitude, instantaneous phase and the spectrum symmetry of the simulated signals are used as inputs to the artificial neural network employed in developing the classifiers. The two developed classifiers are trained using the scaled conjugate gradient (SCG) and conjugate gradient (CONJGRAD) training algorithms. Sample results of the two classifiers show good recognition performance, with an average overall recognition rate above 99.50% at signal-to-noise ratio (SNR) values from 0 dB and above with the two training algorithms employed, and an average overall recognition rate slightly above 99.00% and 96.40%, respectively, at -5 dB SNR for the SCG and CONJGRAD training algorithms. The comparative performance evaluation of the two developed classifiers using the two training algorithms shows that the two training algorithms have different effects on both the response rate and efficiency of the two developed artificial neural network classifiers. In addition, the result of the performance evaluation carried out on the overall success recognition rates between the two developed classifiers in this study using the pattern recognition approach with the two training algorithms and one reported classifier in the surveyed literature using a decision-theoretic approach shows that the classifiers developed in this study perform favourably with regard to accuracy and performance probability as compared to the classifier presented in the previous study.

  18. Probabilistic analysis algorithm for UA slope software program.

    Science.gov (United States)

    2013-12-01

    A reliability-based computational algorithm for using a single row of equally spaced drilled shafts to stabilize an unstable slope has been developed in this research. The Monte-Carlo simulation (MCS) technique was used in the previously develop...
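
    A generic illustration of the Monte-Carlo simulation step: sample the uncertain soil parameters, evaluate a performance function, and estimate the probability of failure as the fraction of samples whose factor of safety falls below 1. The infinite-slope safety factor and the parameter distributions below are placeholders, not the UA Slope model or its inputs.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000

      # assumed parameter distributions for an infinite-slope example (illustrative)
      phi = np.radians(rng.normal(30.0, 3.0, N))      # friction angle
      c = rng.lognormal(np.log(10.0), 0.3, N)         # cohesion, kPa
      gamma, depth, beta = 19.0, 3.0, np.radians(25)  # unit weight, depth, slope angle

      # infinite-slope factor of safety (dry case)
      fs = (c + gamma * depth * np.cos(beta) ** 2 * np.tan(phi)) / (
          gamma * depth * np.sin(beta) * np.cos(beta))

      p_failure = np.mean(fs < 1.0)
      print(f"estimated probability of failure: {p_failure:.4f}")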

  19. Development and validation of an algorithm for laser application in wound treatment

    Directory of Open Access Journals (Sweden)

    Diequison Rite da Cunha

    2017-12-01

    Objective: To develop and validate an algorithm for laser application in wound treatment. Method: Methodological study and literature review. For the development of the algorithm, a review was performed in the Health Sciences databases covering the past ten years. The algorithm evaluation was performed by 24 participants: nurses, physiotherapists, and physicians. For data analysis, Cronbach's alpha coefficient and the chi-square test for independence were used. The level of significance of the statistical test was established at 5% (p<0.05). Results: The professionals' responses regarding the ease of reading the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard to the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants' opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire presented reliability according to the Cronbach's alpha coefficient test (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and the use of laser therapy in wounds.
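
    Cronbach's alpha, used above to assess the questionnaire's reliability, can be computed directly from the item scores: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A small sketch follows; the response matrix is made up for illustration.

      import numpy as np

      def cronbach_alpha(scores):
          """scores: 2-D array, rows = respondents, columns = questionnaire items."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars / total_var)

      # made-up ratings from 6 evaluators on 4 items (e.g., a 1-5 Likert scale)
      ratings = [[4, 5, 4, 5],
                 [3, 4, 3, 4],
                 [5, 5, 4, 5],
                 [2, 3, 2, 3],
                 [4, 4, 5, 4],
                 [3, 3, 3, 4]]
      print(round(cronbach_alpha(ratings), 3))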

  20. Development of hybrid artificial intelligent based handover decision algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Aibinu

    2017-04-01

    The possibility of seamless handover remains a mirage despite the plethora of existing handover algorithms. The underlying factor responsible for this has been traced to the handover decision module in the handover process. Hence, in this paper, a novel hybrid artificial-intelligence handover decision algorithm has been developed. The developed model is a hybrid of an Artificial Neural Network (ANN) based prediction model and fuzzy logic. On accessing the network, the Received Signal Strength (RSS) was acquired over a period of time to form a time series of data. The data were then fed to the newly proposed k-step-ahead ANN-based RSS prediction system for estimation of the prediction model coefficients. The synaptic weights and adaptive coefficients of the trained ANN were then used to compute the k-step-ahead ANN-based RSS prediction model coefficients. The predicted RSS value was later codified as fuzzy sets and, in conjunction with other measured network parameters, fed into the fuzzy logic controller in order to finalize the handover decision process. The performance of the newly developed k-step-ahead ANN-based RSS prediction algorithm was evaluated using simulated and real data acquired from available mobile communication networks. Results obtained in both cases show that the proposed algorithm is capable of predicting the RSS value ahead to about ±0.0002 dB. The cascaded effect of the complete handover decision module was also evaluated. Results obtained show that the newly proposed hybrid approach was able to reduce the ping-pong effect associated with other handover techniques.

  1. Comparison of switching control algorithms effective in restricting the switching in the neighborhood of the origin

    International Nuclear Information System (INIS)

    Joung, JinWook; Chung, Lan; Smyth, Andrew W

    2010-01-01

    The active interaction control (AIC) system consisting of a primary structure, an auxiliary structure and an interaction element was proposed to protect the primary structure against earthquakes and winds. The objective of the AIC system in reducing the responses of the primary structure is fulfilled by activating or deactivating the switching between the engagement and the disengagement of the primary and auxiliary structures through the interaction element. The status of the interaction element is controlled by switching control algorithms. The previously developed switching control algorithms require an excessive amount of switching, which is inefficient. In this paper, the excessive amount of switching is restricted by imposing an appropriately designed switching boundary region, where switching is prohibited, on pre-designed engagement–disengagement conditions. Two different approaches are used in designing the newly proposed AID-off and AID-off 2 algorithms. The AID-off 2 algorithm is designed to affect deactivated switching regions explicitly, unlike the AID-off algorithm, which follows the same procedure of designing the engagement–disengagement conditions of the previously developed algorithms, by using the current status of the AIC system. Both algorithms are shown to be effective in reducing the number of switching events triggered by the previously developed AID algorithm under an appropriately selected control sampling period for different earthquakes, but the AID-off 2 algorithm outperforms the AID-off algorithm in reducing the number of switching times.

  2. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  3. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  4. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    Full Text Available The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm that arise from the finite resolution of sampled systems. Experimental control results using the original secondary path model and a modified secondary path model are compared for both the previous implementation of EE-FXLMS and the genetic algorithm implementation.

  5. Development of the algorithm for obtaining 3-dimensional information using the structured light

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Dong Uk; Lee, Jae Hyub; Kim, Chung Soo [Korea University of Technology and Education, Cheonan (Korea)

    1998-03-01

    The utilization of robots in atomic power plants or nuclear-related facilities has grown rapidly. In order to perform preassigned jobs using robots in nuclear-related facilities, advanced technology for extracting 3D information of objects is essential. We have studied an algorithm to extract 3D information of objects using laser slit light and a camera, and developed the following hardware system and algorithms. (1) We have designed and fabricated a hardware system which consists of a laser light and two cameras. The hardware system can be easily installed on the robot. (2) In order to reduce the occlusion problem when measuring 3D information using laser slit light and a camera, we have studied a system with laser slit light and two cameras and developed an algorithm to synthesize the 3D information obtained from the two cameras. (3) For easy use of the obtained 3D information, we expressed it in a digital distance image format and developed an algorithm to interpolate the 3D information of points which were not obtained. (4) In order to simplify calibration of the camera parameters, we have also designed and fabricated an LED plate, and developed an algorithm for detecting the center positions of the LEDs automatically. We can confirm the efficiency of the developed algorithms and hardware system through the experimental results. 16 refs., 26 figs., 1 tabs. (Author)
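
    A small sketch of the slit-light triangulation that such a system relies on is given below; the planar geometry, focal length, baseline and laser tilt angle are illustrative assumptions rather than the calibrated parameters of the hardware described above.

        import math

        def slit_light_depth(pixel_x, focal_px, baseline, laser_angle):
            # Camera at the origin looking along Z; laser source offset by `baseline`
            # along X and tilted by `laser_angle` (radians) toward the optical axis.
            # `pixel_x` is the detected laser-line column (pixels from image centre),
            # `focal_px` the focal length in pixels. Depth is in the units of `baseline`.
            return baseline * focal_px / (pixel_x + focal_px * math.tan(laser_angle))

        # Example: 35 mm baseline, 1200 px focal length, laser tilted 30 degrees.
        print(slit_light_depth(pixel_x=85.0, focal_px=1200.0, baseline=0.035,
                               laser_angle=math.radians(30.0)))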

  6. Low-dose computed tomography image restoration using previous normal-dose scan

    International Nuclear Information System (INIS)

    Ma, Jianhua; Huang, Jing; Feng, Qianjin; Zhang, Hua; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2011-01-01

    Purpose: In current computed tomography (CT) examinations, the associated x-ray radiation dose is of a significant concern to patients and operators. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) or kVp parameter (or delivering less x-ray energy to the body) as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and the noise would propagate into the CT image if no adequate noise control is applied during image reconstruction. Since a normal-dose high diagnostic CT image scanned previously may be available in some clinical applications, such as CT perfusion imaging and CT angiography (CTA), this paper presents an innovative way to utilize the normal-dose scan as a priori information to induce signal restoration of the current low-dose CT image series. Methods: Unlike conventional local operations on neighboring image voxels, nonlocal means (NLM) algorithm utilizes the redundancy of information across the whole image. This paper adapts the NLM to utilize the redundancy of information in the previous normal-dose scan and further exploits ways to optimize the nonlocal weights for low-dose image restoration in the NLM framework. The resulting algorithm is called the previous normal-dose scan induced nonlocal means (ndiNLM). Because of the optimized nature of nonlocal weights calculation, the ndiNLM algorithm does not depend heavily on image registration between the current low-dose and the previous normal-dose CT scans. Furthermore, the smoothing parameter involved in the ndiNLM algorithm can be adaptively estimated based on the image noise relationship between the current low-dose and the previous normal-dose scanning protocols. Results: Qualitative and quantitative evaluations were carried out on a physical phantom as well as clinical abdominal and brain perfusion CT scans in terms of accuracy and resolution properties. The gain by the use
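
    The idea of letting a previous normal-dose scan drive the nonlocal weights can be sketched as follows. The patch size, search window and smoothing parameter are illustrative assumptions, and the exact weighting and registration handling of the ndiNLM algorithm differ in detail; this is only a toy, unoptimized illustration of prior-guided nonlocal means.

        import numpy as np

        def prior_guided_nlm(low_dose, normal_dose, patch=3, search=5, h=0.02):
            # Restore the low-dose image by weighted averaging, with weights taken
            # from patch similarities in the previous normal-dose image.
            low_dose = np.asarray(low_dose, dtype=float)
            normal_dose = np.asarray(normal_dose, dtype=float)
            pad = patch // 2
            lo = np.pad(low_dose, pad + search, mode="reflect")
            pr = np.pad(normal_dose, pad + search, mode="reflect")
            out = np.zeros_like(low_dose)
            for i in range(low_dose.shape[0]):
                for j in range(low_dose.shape[1]):
                    ci, cj = i + pad + search, j + pad + search
                    ref = pr[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
                    num = den = 0.0
                    for di in range(-search, search + 1):
                        for dj in range(-search, search + 1):
                            ni, nj = ci + di, cj + dj
                            cand = pr[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                            w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                            num += w * lo[ni, nj]
                            den += w
                    out[i, j] = num / den
            return out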

  7. A Developed Artificial Bee Colony Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Ye Jin

    2018-04-01

    Full Text Available The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is a kind of uncertainty conversion model between a qualitative concept T~ expressed in natural language and its quantitative expression; it integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and to avoid getting trapped in local optima, by introducing a new selection mechanism, replacing the onlooker bees' search formula and changing the scout bees' updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.
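
    For reference, a compact sketch of the basic ABC algorithm that the developed variant modifies is shown below. It keeps the standard employed/onlooker/scout phases; the cloud-model-based selection mechanism and the updated search formulas of the paper are not reproduced, and the colony size, abandonment limit and test function are illustrative assumptions.

        import numpy as np

        def abc_minimize(f, dim, bounds, n_food=20, limit=20, iters=200, seed=0):
            # Minimal sketch of the *basic* ABC: employed, onlooker and scout phases.
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            foods = rng.uniform(lo, hi, (n_food, dim))
            fits = np.array([f(x) for x in foods])
            trials = np.zeros(n_food, dtype=int)

            def neighbour(i):
                k = rng.integers(n_food)
                while k == i:
                    k = rng.integers(n_food)
                j = rng.integers(dim)
                cand = foods[i].copy()
                cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
                return np.clip(cand, lo, hi)

            def try_improve(i):
                cand = neighbour(i)
                fc = f(cand)
                if fc < fits[i]:
                    foods[i], fits[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1

            for _ in range(iters):
                for i in range(n_food):                      # employed bee phase
                    try_improve(i)
                probs = fits.max() - fits + 1e-12            # onlooker bee phase:
                probs /= probs.sum()                         # fitter sources drawn more often
                for i in rng.choice(n_food, n_food, p=probs):
                    try_improve(i)
                # Scout bee phase: abandon exhausted sources (never the current best).
                worn = (trials > limit) & (np.arange(n_food) != fits.argmin())
                foods[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
                fits[worn] = [f(x) for x in foods[worn]]
                trials[worn] = 0
            best = int(fits.argmin())
            return foods[best], float(fits[best])

        # Example: minimize the sphere function in 5 dimensions.
        print(abc_minimize(lambda x: float(np.sum(x ** 2)), dim=5, bounds=(-5.0, 5.0)))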

  8. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  9. Search for 'Little Higgs' and reconstruction algorithms developments in Atlas

    International Nuclear Information System (INIS)

    Rousseau, D.

    2007-05-01

    This document summarizes developments of framework and reconstruction algorithms for the ATLAS detector at the LHC. A library of reconstruction algorithms has been developed in a more and more complex environment. The reconstruction software originally designed on an optimistic Monte-Carlo simulation, has been confronted with a more detailed 'as-built' simulation. The 'Little Higgs' is an effective theory which can be taken for granted, or as an opportunity to study heavy resonances. In several cases, these resonances can be detected in original channels like tZ, ZH or WH. (author)

  10. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed, the output of this site optimization gives the selected number of base station sites ...

  11. Developing and Implementing the Data Mining Algorithms in RAVEN

    International Nuclear Information System (INIS)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-01-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  12. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  13. A Radix-10 Digit-Recurrence Division Unit: Algorithm and Architecture

    DEFF Research Database (Denmark)

    Lang, Tomas; Nannarelli, Alberto

    2007-01-01

    In this work, we present a radix-10 division unit that is based on the digit-recurrence algorithm. The previous decimal division designs do not include recent developments in the theory and practice of this type of algorithm, which were developed for radix-2^k dividers. In addition to the adaptat...... dynamic range of significant) and it has a shorter latency than a radix-10 unit based on the Newton-Raphson approximation....

  14. Predictive factors for the development of diabetes in women with previous gestational diabetes mellitus

    DEFF Research Database (Denmark)

    Damm, P.; Kühl, C.; Bertelsen, Aksel

    1992-01-01

    OBJECTIVES: The purpose of this study was to determine the incidence of diabetes in women with previous dietary-treated gestational diabetes mellitus and to identify predictive factors for development of diabetes. STUDY DESIGN: Two to 11 years post partum, glucose tolerance was investigated in 241...... women with previous dietary-treated gestational diabetes mellitus and 57 women without previous gestational diabetes mellitus (control group). RESULTS: Diabetes developed in 42 (17.4%) women with previous gestational diabetes mellitus (3.7% insulin-dependent diabetes mellitus and 13.7% non...... of previous patients with gestational diabetes mellitus in whom plasma insulin was measured during an oral glucose tolerance test in late pregnancy a low insulin response at diagnosis was found to be an independent predictive factor for diabetes development. CONCLUSIONS: Women with previous dietary...

  15. Development of target-tracking algorithms using neural network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whaoan; Yoon, Sook; Baek, Seong Hyun; Lee, Myung Jae [Chonbuk National University, Chonjoo (Korea)

    1998-04-01

    The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly, to protect workers from high-radiation environments. Such applications require complete stability of the robot system, so precise tracking of the robot is essential for the whole system. This research aims to accomplish that goal by developing appropriate algorithms for remote-control robot systems. A neural network tracking system is designed and tested to trace a robot endpoint. This model is aimed at utilizing the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks, for position detection and prediction. Tracking algorithms are developed and tested for the two models. Results of the experiments show that both models are promising as real-time target-tracking systems for remote-control robot systems. (author). 10 refs., 47 figs.

  16. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  17. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  18. Multicore and GPU algorithms for Nussinov RNA folding

    Science.gov (United States)

    2014-01-01

    Background One segment of an RNA sequence might be paired with another segment of the same RNA sequence due to the force of hydrogen bonds. This two-dimensional structure is called the RNA sequence's secondary structure. Several algorithms have been proposed to predict an RNA sequence's secondary structure. These algorithms are referred to as RNA folding algorithms. Results We develop cache efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions Our cache efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive straightforward single core code. The multicore version of the cache efficient single core algorithm provides a speedup, relative to the naive single core algorithm, between 7.5 and 14.0 on a 6 core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
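
    A plain single-core reference for Nussinov's recurrence is sketched below; it is the O(n^3) dynamic program that the cache-efficient, multicore and GPU versions above accelerate. The minimum hairpin loop length used here is a common convention and an assumption of this sketch.

        def nussinov_max_pairs(seq, min_loop=3):
            # O(n^3) Nussinov dynamic program: maximum number of nested base pairs.
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                     ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = dp[i][j - 1]                      # j left unpaired
                    for k in range(i, j - min_loop):         # j paired with some k
                        if (seq[k], seq[j]) in pairs:
                            left = dp[i][k - 1] if k > i else 0
                            best = max(best, left + 1 + dp[k + 1][j - 1])
                    dp[i][j] = best
            return dp[0][n - 1] if n else 0

        print(nussinov_max_pairs("GGGAAAUCC"))   # small example sequence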

  19. Development of an inter-layer solute transport algorithm for SOLTR computer program. Part 1. The algorithm

    International Nuclear Information System (INIS)

    Miller, I.; Roman, K.

    1979-12-01

    In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifer to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport.

  20. DNA Microarray Data Analysis: A Novel Biclustering Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Tewfik Ahmed H

    2006-01-01

    Full Text Available Biclustering algorithms refer to a distinct class of clustering algorithms that perform simultaneous row-column clustering. Biclustering problems arise in DNA microarray data analysis, collaborative filtering, market research, information retrieval, text mining, electoral trends, exchange analysis, and so forth. When dealing with DNA microarray experimental data for example, the goal of biclustering algorithms is to find submatrices, that is, subgroups of genes and subgroups of conditions, where the genes exhibit highly correlated activities for every condition. In this study, we develop novel biclustering algorithms using basic linear algebra and arithmetic tools. The proposed biclustering algorithms can be used to search for all biclusters with constant values, biclusters with constant values on rows, biclusters with constant values on columns, and biclusters with coherent values from a set of data in a timely manner and without solving any optimization problem. We also show how one of the proposed biclustering algorithms can be adapted to identify biclusters with coherent evolution. The algorithms developed in this study discover all valid biclusters of each type, while almost all previous biclustering approaches will miss some.
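
    The bicluster types named above (constant values, constant rows, constant columns, coherent values) can be illustrated with the simple checking utility below; it only classifies a given submatrix and does not reproduce the search procedure of the proposed algorithms.

        import numpy as np

        def bicluster_type(data, rows, cols, tol=1e-9):
            # Classify the submatrix data[rows][:, cols] as one of the bicluster
            # types discussed above, or None if it matches none of them.
            sub = np.asarray(data, dtype=float)[np.ix_(rows, cols)]
            if np.ptp(sub) <= tol:
                return "constant values"
            if np.all(np.ptp(sub, axis=1) <= tol):
                return "constant rows"
            if np.all(np.ptp(sub, axis=0) <= tol):
                return "constant columns"
            # Coherent (additive) values: a_ij = row_effect_i + col_effect_j - overall_mean
            model = (sub.mean(axis=1, keepdims=True)
                     + sub.mean(axis=0, keepdims=True) - sub.mean())
            if np.max(np.abs(sub - model)) <= tol:
                return "coherent values"
            return None

        print(bicluster_type([[1, 2, 3], [2, 3, 4], [5, 6, 7]],
                             rows=[0, 1, 2], cols=[0, 1, 2]))   # coherent values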

  1. A New Plant Intelligent Behaviour Optimisation Algorithm for Solving Vehicle Routing Problem

    OpenAIRE

    Chagwiza, Godfrey

    2018-01-01

    A new plant intelligent behaviour optimisation algorithm is developed. The algorithm is motivated by intelligent behaviour of plants and is implemented to solve benchmark vehicle routing problems of all sizes, and results were compared to those in literature. The results show that the new algorithm outperforms most of algorithms it was compared to for very large and large vehicle routing problem instances. This is attributed to the ability of the plant to use previously stored memory to respo...

  2. The Psychopharmacology Algorithm Project at the Harvard South Shore Program: An Algorithm for Generalized Anxiety Disorder.

    Science.gov (United States)

    Abejuela, Harmony Raylen; Osser, David N

    2016-01-01

    This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.

  3. Test of TEDA, Tsunami Early Detection Algorithm

    Science.gov (United States)

    Bressan, Lidia; Tinti, Stefano

    2010-05-01

    Tsunami detection in real time, both offshore and at the coastline, plays a key role in Tsunami Warning Systems since it so far provides the only reliable and timely proof of tsunami generation, and is used to confirm or cancel tsunami warnings previously issued on the basis of seismic data alone. Moreover, in the case of submarine or coastal landslide generated tsunamis, which are not announced by clear seismic signals and are typically local, real-time detection at the coastline might be the fastest way to release a warning, even if the useful time for emergency operations might be limited. TEDA is an algorithm for real-time detection of tsunami signals in sea-level records, developed by the Tsunami Research Team of the University of Bologna. The development and testing of the algorithm have been accomplished within the framework of the Italian national project DPC-INGV S3 and the European project TRANSFER. The algorithm is to be implemented at station level, and it is therefore based only on sea-level data of a single station, either a coastal tide-gauge or an offshore buoy. TEDA's principle is to discriminate the first tsunami wave from the previous background signal, which implies the assumption that the tsunami waves introduce a difference in the previous sea-level signal. Therefore, in TEDA the instantaneous (most recent) and the previous background sea-level elevation gradients are characterized and compared by proper functions (IS and BS) that are updated at every new data acquisition. Detection is triggered when the instantaneous signal function passes a set threshold and at the same time is significantly bigger than the previous background signal. The functions IS and BS depend on temporal parameters that allow the algorithm to be adapted to different situations: in general, coastal tide-gauges have a typical background spectrum depending on the location where the instrument is installed, due to local topography and bathymetry, while offshore buoys are
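
    A highly simplified sketch of this kind of single-station trigger logic follows; the actual IS and BS functions of TEDA are not specified in the abstract, so the moving-window gradient statistics, window lengths and thresholds below are assumptions for illustration only.

        import numpy as np

        def tsunami_trigger(sea_level, dt, short_win=12, long_win=240,
                            abs_thr=0.05, ratio_thr=3.0):
            # sea_level: 1D array of sea-level samples; dt: sampling interval (s).
            # Instantaneous signal = mean absolute gradient over the recent short
            # window; background = the same statistic over a longer, earlier window.
            grad = np.abs(np.gradient(np.asarray(sea_level, dtype=float), dt))
            for t in range(long_win + short_win, len(grad)):
                inst = grad[t - short_win:t].mean()
                back = grad[t - short_win - long_win:t - short_win].mean()
                if inst > abs_thr and inst > ratio_thr * max(back, 1e-12):
                    return t          # first sample index at which detection triggers
            return None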

  4. Clinical algorithms to aid osteoarthritis guideline dissemination.

    Science.gov (United States)

    Meneses, S R F; Goode, A P; Nelson, A E; Lin, J; Jordan, J M; Allen, K D; Bennell, K L; Lohmander, L S; Fernandes, L; Hochberg, M C; Underwood, M; Conaghan, P G; Liu, S; McAlindon, T E; Golightly, Y M; Hunter, D J

    2016-09-01

    Numerous scientific organisations have developed evidence-based recommendations aiming to optimise the management of osteoarthritis (OA). Uptake, however, has been suboptimal. The purpose of this exercise was to harmonize the recent recommendations and develop a user-friendly treatment algorithm to facilitate translation of evidence into practice. We updated a previous systematic review on clinical practice guidelines (CPGs) for OA management. The guidelines were assessed using the Appraisal of Guidelines for Research and Evaluation for quality and the standards for developing trustworthy CPGs as established by the National Academy of Medicine (NAM). Four case scenarios and algorithms were developed by consensus of a multidisciplinary panel. Sixteen guidelines were included in the systematic review. Most recommendations were directed toward physicians and allied health professionals, and most had multi-disciplinary input. Analysis for trustworthiness suggests that many guidelines still present a lack of transparency. A treatment algorithm was developed for each case scenario advised by recommendations from guidelines and based on panel consensus. Strategies to facilitate the implementation of guidelines in clinical practice are necessary. The algorithms proposed are examples of how to apply recommendations in the clinical context, helping the clinician to visualise the patient flow and timing of different treatment modalities. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  5. Efficient sequential and parallel algorithms for record linkage.

    Science.gov (United States)

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
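
    The graph-based linking step described above can be sketched with a simple union-find pass: records whose pairwise similarity exceeds a threshold are linked, and the resulting connected components are treated as clusters of duplicate records. The blocking key and the similarity function below are illustrative assumptions, not the attributes or edit-distance computation used in the paper.

        from collections import defaultdict
        from difflib import SequenceMatcher

        def link_records(records, key=lambda r: r["name"][:1].lower(), threshold=0.85):
            # Union-find over a similarity graph: records above the threshold are
            # linked, and each connected component becomes one cluster of duplicates.
            parent = list(range(len(records)))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]            # path halving
                    i = parent[i]
                return i

            def union(i, j):
                parent[find(i)] = find(j)

            # Cheap blocking key (akin to sorting on an attribute) so that only
            # records within a block are compared pairwise.
            blocks = defaultdict(list)
            for idx, rec in enumerate(records):
                blocks[key(rec)].append(idx)
            for block in blocks.values():
                for a in range(len(block)):
                    for b in range(a + 1, len(block)):
                        i, j = block[a], block[b]
                        sim = SequenceMatcher(None, records[i]["name"],
                                              records[j]["name"]).ratio()
                        if sim >= threshold:
                            union(i, j)

            clusters = defaultdict(list)
            for idx in range(len(records)):
                clusters[find(idx)].append(idx)
            return list(clusters.values())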

  6. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery

    Science.gov (United States)

    Kim, Daeun; Haldar, Justin P.

    2016-01-01

    This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368

  7. Texas Medication Algorithm Project: development and feasibility testing of a treatment algorithm for patients with bipolar disorder.

    Science.gov (United States)

    Suppes, T; Swann, A C; Dennehy, E B; Habermacher, E D; Mason, M; Crismon, M L; Toprac, M G; Rush, A J; Shon, S P; Altshuler, K Z

    2001-06-01

    Use of treatment guidelines for major psychiatric illnesses has increased in recent years. The Texas Medication Algorithm Project (TMAP) was developed to study the feasibility and process of developing and implementing guidelines for bipolar disorder, major depressive disorder, and schizophrenia in the public mental health system of Texas. This article describes the consensus process used to develop the first set of TMAP algorithms for the Bipolar Disorder Module (Phase 1) and the trial testing the feasibility of their implementation in inpatient and outpatient psychiatric settings across Texas (Phase 2). The feasibility trial answered core questions regarding implementation of treatment guidelines for bipolar disorder. A total of 69 patients were treated with the original algorithms for bipolar disorder developed in Phase 1 of TMAP. Results show that physicians accepted the guidelines, followed recommendations to see patients at certain intervals, and utilized sequenced treatment steps differentially over the course of treatment. While improvements in clinical symptoms (24-item Brief Psychiatric Rating Scale) were observed over the course of enrollment in the trial, these conclusions are limited by the fact that physician volunteers were utilized for both treatment and ratings, and there was no control group. Results from Phases 1 and 2 indicate that it is possible to develop and implement a treatment guideline for patients with a history of mania in public mental health clinics in Texas. TMAP Phase 3, a recently completed larger and controlled trial assessing the clinical and economic impact of treatment guidelines and patient and family education in the public mental health system of Texas, improves upon this methodology.

  8. Clinical algorithms to aid osteoarthritis guideline dissemination

    DEFF Research Database (Denmark)

    Meneses, S. R. F.; Goode, A. P.; Nelson, A. E

    2016-01-01

    Background: Numerous scientific organisations have developed evidence-based recommendations aiming to optimise the management of osteoarthritis (OA). Uptake, however, has been suboptimal. The purpose of this exercise was to harmonize the recent recommendations and develop a user-friendly treatment...... algorithm to facilitate translation of evidence into practice. Methods: We updated a previous systematic review on clinical practice guidelines (CPGs) for OA management. The guidelines were assessed using the Appraisal of Guidelines for Research and Evaluation for quality and the standards for developing...... to facilitate the implementation of guidelines in clinical practice are necessary. The algorithms proposed are examples of how to apply recommendations in the clinical context, helping the clinician to visualise the patient flow and timing of different treatment modalities. (C) 2016 Osteoarthritis Research...

  9. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    Science.gov (United States)

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) that is currently being used to process data from instruments aboard Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on-board of S-NPP. These instruments will also be on-board of JPSS (Joint Polar Satellite System) that will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists that include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from ADL emulation of future IDPS system incorporating all the suggested algorithm updates with the current official processing results by qualitative and quantitative evaluations. The results prove these algorithm updates improve science product quality.

  11. Computational complexity of algorithms for sequence comparison, short-read assembly and genome alignment.

    Science.gov (United States)

    Baichoo, Shakuntala; Ouzounis, Christos A

    A multitude of algorithms for sequence comparison, short-read assembly and whole-genome alignment have been developed in the general context of molecular biology, to support technology development for high-throughput sequencing, numerous applications in genome biology and fundamental research on comparative genomics. The computational complexity of these algorithms has been previously reported in original research papers, yet this often neglected property has not been reviewed previously in a systematic manner and for a wider audience. We provide a review of space and time complexity of key sequence analysis algorithms and highlight their properties in a comprehensive manner, in order to identify potential opportunities for further research in algorithm or data structure optimization. The complexity aspect is poised to become pivotal as we will be facing challenges related to the continuous increase of genomic data on unprecedented scales and complexity in the foreseeable future, when robust biological simulation at the cell level and above becomes a reality. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. A Global algorithm for linear radiosity

    OpenAIRE

    Sbert Cassasayas, Mateu; Pueyo Sánchez, Xavier

    1993-01-01

    A linear algorithm for radiosity is presented, linear both in time and storage. The new algorithm is based on previous work by the authors and on the well known algorithms for progressive radiosity and Monte Carlo particle transport.

  13. Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia

    Science.gov (United States)

    Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve

    2014-01-01

    A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). The Brightness Temperature Difference (BTD) between two wavelength channels has been used widely in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and the cloud contamination. Therefore, the current algorithm shows improvements in detecting the dust-loaded region over land during daytime. Finally, the confidence index of the current dust algorithm is shown in 10 × 10 pixels of the MODIS observations. From January to June, 2006, the results of the current algorithm are within 64 to 81% of those found using the fine mode fraction (FMF) and aerosol index (AI) from the MODIS and Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over the non-polluted land also ranges from 60 to 67% to avoid errors due to the anthropogenic aerosol. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
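
    As a simplified illustration of the split-window and ratio tests that such a combined algorithm builds on, the sketch below flags a pixel as dust-loaded when the 11-12 micron BTD is below a threshold and the 11/12 micron BTR departs from a clear-sky composite. The channel choices, thresholds and margin are generic assumptions, not the tuned values of the algorithm above.

        import numpy as np

        def dust_mask(bt11, bt12, btr_composite, btd_thr=0.0, btr_margin=0.01):
            # Classic split-window test: silicate dust depresses the 11 um brightness
            # temperature relative to 12 um, so BTD = BT11 - BT12 below btd_thr hints
            # at dust; the 11/12 um ratio (BTR) is also compared against a clear-sky
            # composite (e.g., a 30-day map) to separate dust from surface variability.
            bt11 = np.asarray(bt11, dtype=float)
            bt12 = np.asarray(bt12, dtype=float)
            btd = bt11 - bt12
            btr = bt11 / bt12
            return (btd < btd_thr) & (np.abs(btr - btr_composite) > btr_margin)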

  14. Development of a MELCOR self-initialization algorithm for boiling water reactors

    International Nuclear Information System (INIS)

    Chien, C.S.; Wang, S.J.; Cheng, S.K.

    1996-01-01

    The MELCOR code, developed by Sandia National Laboratories, is suitable for calculating source terms and simulating severe accident phenomena of nuclear power plants. Prior to simulating a severe accident transient with MELCOR, the initial steady-state conditions must be generated in advance. The current MELCOR users' manuals do not provide a self-initialization procedure; this is the reason users have to adjust the initial conditions by themselves through a trial-and-error approach. A MELCOR self-initialization algorithm for boiling water reactor plants has been developed, which eliminates the tedious trial-and-error procedures and improves the simulation accuracy. This algorithm adjusts the important plant variable such as the dome pressure, downcomer level, and core flow rate to the desired conditions automatically. It is implemented through input with control functions provided in MELCOR. The reactor power and feedwater temperature are fed as input data. The initialization work of full-power conditions of the Kuosheng nuclear power station is cited as an example. These initial conditions are generated successfully with the developed algorithm. The generated initial conditions can be stored in a restart file and used for transient analysis. The methodology in this study improves the accuracy and consistency of transient calculations. Meanwhile, the algorithm provides all MELCOR users an easy and correct method for establishing the initial conditions

  15. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington D.C. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe, and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1-minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study including an MCS, MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington D.C. were used to test six lightning jump algorithm configurations (Gatlin, Gatlin 45, 2(sigma), 3(sigma), Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2(sigma) lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic lightning jump algorithm named the Threshold 8 lightning jump algorithm also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally-applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the
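
    A minimal sketch of a 2-sigma-style lightning jump test is shown below: the time rate of change of the total flash rate is compared against twice the standard deviation of its recent history, subject to an activity floor like the 10 flashes/min value mentioned above. The window lengths and exact thresholds are illustrative assumptions rather than the published configuration.

        import numpy as np

        def lightning_jumps(flash_rates, dt_min=2.0, history=5, sigma_mult=2.0,
                            min_rate=10.0):
            # flash_rates: total flash rate (flashes/min) at consecutive dt_min intervals.
            # DFRDT is the time rate of change of the flash rate; a jump is declared
            # when the current DFRDT exceeds sigma_mult standard deviations of the
            # preceding `history` DFRDT values and the storm is active enough.
            rates = np.asarray(flash_rates, dtype=float)
            dfrdt = np.diff(rates) / dt_min
            jumps = []
            for t in range(history, len(dfrdt)):
                recent_sigma = dfrdt[t - history:t].std()
                if rates[t + 1] >= min_rate and dfrdt[t] > sigma_mult * recent_sigma:
                    jumps.append(t + 1)
            return jumps

        # Example flash-rate trace (flashes/min sampled every 2 minutes).
        print(lightning_jumps([2, 3, 3, 4, 5, 6, 8, 9, 12, 25, 40]))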

  16. Pressure algorithm for elliptic flow calculations with the PDF method

    Science.gov (United States)

    Anand, M. S.; Pope, S. B.; Mongia, H. C.

    1991-01-01

    An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.

  17. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Until recently, the algorithms for the numerical-analytical boundary elements method had been implemented as programs written in the MATLAB environment language. Each program had a local character, i.e. it was used to solve a particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out "manually" and was therefore time-consuming. The research was aimed at a reasoned choice of a programming language for developing a new CAD system that implements the algorithm of the numerical-analytical boundary elements method and provides visualization tools for the initial objects and the calculation results. The research conducted shows that, among the wide variety of programming languages, the most efficient one for developing a CAD system employing the numerical-analytical boundary elements method algorithm is Java. This language provides tools not only for the development of the calculating part of the CAD system, but also for building the graphical interface for constructing geometrical models and interpreting the calculated results.

  18. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    Science.gov (United States)

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

    Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. To investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in young children in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these for presence or absence and terminology with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  19. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
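
    The "graph based" aspect, restricting which individuals may mate and which are replaced to neighbours on a fixed graph so as to control diversity, can be sketched as below. The fitness function, ring topology and real-valued representation of a regimen are placeholders; the livestock/bacteria model used as the actual fitness evaluation above is not reproduced.

        import random

        def graph_based_ea(fitness, genome_len, ring_size=20, generations=200, seed=1):
            # Steady-state EA on a ring graph: mating and replacement are restricted
            # to graph neighbours, which slows gene spread and preserves diversity.
            rng = random.Random(seed)
            pop = [[rng.random() for _ in range(genome_len)] for _ in range(ring_size)]
            fits = [fitness(g) for g in pop]
            for _ in range(generations * ring_size):
                v = rng.randrange(ring_size)
                mate = (v + rng.choice([-1, 1])) % ring_size      # a ring neighbour
                cut = rng.randrange(1, genome_len)
                child = pop[v][:cut] + pop[mate][cut:]            # one-point crossover
                g = rng.randrange(genome_len)                     # single-gene mutation
                child[g] = min(1.0, max(0.0, child[g] + rng.gauss(0.0, 0.1)))
                fc = fitness(child)
                worst = v if fits[v] < fits[mate] else mate       # local replacement
                if fc > fits[worst]:
                    pop[worst], fits[worst] = child, fc
            best = max(range(ring_size), key=fits.__getitem__)
            return pop[best], fits[best]

        # Example with a toy fitness; a regimen would instead be scored by the
        # animal/bacteria simulation described in the paper.
        print(graph_based_ea(lambda g: -sum((x - 0.3) ** 2 for x in g), genome_len=10))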

  20. Lightning Jump Algorithm Development for the GOES-R Geostationary Lightning Mapper

    Science.gov (United States)

    Schultz, E.; Schultz, C.; Chronis, T.; Stough, S.; Carey, L.; Calhoun, K.; Ortega, K.; Stano, G.; Cecil, D.; Bateman, M.

    2014-01-01

    Current work on the lightning jump algorithm to be used in GOES-R Geostationary Lightning Mapper (GLM)'s data stream is multifaceted due to the intricate interplay between the storm tracking, GLM proxy data, and the performance of the lightning jump itself. This work outlines the progress of the last year, where analysis and performance of the lightning jump algorithm with automated storm tracking and GLM proxy data were assessed using over 700 storms from North Alabama. The cases analyzed coincide with previous semi-objective work performed using total lightning mapping array (LMA) measurements in Schultz et al. (2011). Analysis shows that key components of the algorithm (flash rate and sigma thresholds) have the greatest influence on the performance of the algorithm when validating using severe storm reports. Automated objective analysis using the GLM proxy data has shown probability of detection (POD) values around 60% with false alarm rates (FAR) around 73% using similar methodology to Schultz et al. (2011). However, when applying verification methods similar to those employed by the National Weather Service, POD values increase slightly (69%) and FAR values decrease (63%). The relationship between storm tracking and lightning jump has also been tested in a real-time framework at NSSL. This system includes fully automated tracking by radar alone, real-time LMA and radar observations and the lightning jump. Results indicate that the POD is strong at 65%. However, the FAR is significantly higher than in Schultz et al. (2011) (50-80% depending on various tracking/lightning jump parameters) when using storm reports for verification. Given known issues with Storm Data, the performance of the real-time jump algorithm is also being tested with high density radar and surface observations from the NSSL Severe Hazards Analysis & Verification Experiment (SHAVE).

  1. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm that describes the operation of the facility. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure corresponds to the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are variables bound by the operators. The language of algorithmic networks is highly expressive in the class of algorithms it can represent. Existing modeling automation systems based on algorithmic networks mainly use operators working with real numbers. Although this reduces their generality, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, monitoring in them is based on the analysis of gaps and deadlines in the graphs, and there is no analysis predicting the execution of a schedule. The library described here is designed to build such predictive models: specifying the source data yields a set of projections, from which one is chosen and adopted as the new plan.

  2. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques, which is currently used in many different industries. In several CT systems, detection has been by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera or by using a line array detector. The recent development of X-ray flat panel detectors has made fast CT imaging feasible and practical. Therefore this paper explains the arrangement of a new detection system using the existing high-resolution (127 μm pixel size) flat panel detector in MINT and the image reconstruction technique developed. The aim of the project is to develop a prototype flat panel detector based CT imaging system for NDE. The prototype consisted of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. Hence this project is divided into two major tasks: firstly, to develop the image reconstruction algorithm and, secondly, to integrate the X-ray imaging components into one CT system. The image reconstruction algorithm using the filtered back-projection method is developed and compared to other techniques. The MATLAB program is the tool used for the simulations and computations in this project. (Author)
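    As an illustration of the filtered back-projection step described above, the following Python sketch reconstructs an image from a parallel-beam sinogram using a ramp filter and nearest-neighbour back-projection. It is not the authors' MATLAB implementation; the array layout and filter choice are assumptions made for brevity.

    import numpy as np

    def fbp_reconstruct(sinogram, angles_deg):
        """Minimal parallel-beam filtered back-projection sketch (ramp filter).

        sinogram   : 2-D array, shape (num_detectors, num_angles)
        angles_deg : projection angles in degrees, one per sinogram column
        """
        n_det, n_ang = sinogram.shape
        # Ram-Lak (ramp) filter applied to each projection in the Fourier domain
        freqs = np.fft.fftfreq(n_det)
        ramp = np.abs(freqs)
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp[:, None], axis=0))

        # Back-project the filtered projections onto the image grid
        image = np.zeros((n_det, n_det))
        centre = (n_det - 1) / 2.0
        ys, xs = np.mgrid[0:n_det, 0:n_det] - centre
        for k, theta in enumerate(np.deg2rad(angles_deg)):
            # Detector coordinate of each pixel for this projection angle
            t = xs * np.cos(theta) + ys * np.sin(theta) + centre
            t = np.clip(np.round(t).astype(int), 0, n_det - 1)   # nearest-neighbour lookup
            image += filtered[t, k]
        return image * np.pi / (2 * n_ang)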

  3. Fast algorithms for transport models. Final report, June 1, 1993--May 31, 1994

    International Nuclear Information System (INIS)

    Manteuffel, T.

    1994-12-01

    The focus of this project is the study of multigrid and multilevel algorithms for the numerical solution of Boltzmann models of the transport of neutral and charged particles. In previous work, a fast multigrid algorithm was developed for the numerical solution of the Boltzmann model of neutral particle transport in slab geometry assuming isotropic scattering. The new algorithm is extremely fast in the thick diffusion limit; the multigrid v-cycle convergence factor approaches zero as the mean-free-path between collisions approaches zero, independent of the mesh. Also, a fast multilevel method was developed for the numerical solution of the Boltzmann model of charged particle transport in the thick Fokker-Planck limit for slab geometry. Parallel implementations were developed for both algorithms.

  4. A filtered backprojection algorithm with characteristics of the iterative Landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
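    For readers unfamiliar with the iterative method being emulated, the following Python sketch shows the plain Landweber iteration x_{k+1} = x_k + lambda * A^T (b - A x_k), whose noise behaviour the proposed FBP window is designed to mimic. The relaxation choice and the toy example are illustrative assumptions, not part of the cited work.

    import numpy as np

    def landweber(A, b, iterations=50, relaxation=None, x0=None):
        """Plain Landweber iteration x_{k+1} = x_k + lam * A^T (b - A x_k)."""
        m, n = A.shape
        x = np.zeros(n) if x0 is None else x0.copy()
        if relaxation is None:
            # Convergence requires 0 < lam < 2 / sigma_max(A)^2
            relaxation = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(iterations):
            x = x + relaxation * (A.T @ (b - A @ x))
        return x

    # Tiny usage example on a toy least-squares problem
    A = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
    x_true = np.array([1.0, -2.0])
    print(landweber(A, A @ x_true, iterations=500))   # approaches [1, -2]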

  5. Development of a Thermal Equilibrium Prediction Algorithm

    International Nuclear Information System (INIS)

    Aviles-Ramos, Cuauhtemoc

    2002-01-01

    A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)
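    The following Python sketch illustrates the general idea of predicting the equilibrium reading from only the early transient. It deliberately substitutes a single-exponential surrogate model for the exact heat-conduction solution used in the cited work, so the function and variable names, the fitting routine, and the synthetic data are assumptions for illustration only.

    import numpy as np
    from scipy.optimize import curve_fit

    def predict_equilibrium(t, v, fit_fraction=0.2):
        """Predict the equilibrium sensor reading from the early transient.

        A single-exponential approach to equilibrium is used as a surrogate model;
        the cited algorithm instead uses the exact heat-conduction solution.
        """
        n = max(3, int(len(t) * fit_fraction))          # use only the first ~20% of the run
        model = lambda t, v_eq, dv, tau: v_eq - dv * np.exp(-t / tau)
        p0 = (v[n - 1], v[n - 1] - v[0], t[n - 1] / 2)
        (v_eq, _, _), _ = curve_fit(model, t[:n], v[:n], p0=p0, maxfev=10000)
        return v_eq

    # Synthetic calorimeter run with a true equilibrium voltage of 5.0
    t = np.linspace(0, 10, 200)
    v = 5.0 - 4.0 * np.exp(-t / 2.0) + np.random.default_rng(0).normal(0, 0.005, t.size)
    print(predict_equilibrium(t, v))   # estimated from only the first 20% of the data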

  6. Developed algorithm for the application of British method of concrete

    African Journals Online (AJOL)

    t-iyke

    Most of the methods of concrete mix design developed over the years were geared towards a manual approach. ... Key words: Concrete mix design; British method; Manual approach; Algorithm.

  7. An optimal iterative algorithm to solve Cauchy problem for Laplace equation

    KAUST Repository

    Majeed, Muhammad Usman

    2015-05-25

    An optimal mean square error minimizer algorithm is developed to solve the severely ill-posed Cauchy problem for the Laplace equation on an annulus domain. The mathematical problem is presented as a first-order state space-like system, and an optimal iterative algorithm is developed that minimizes the mean square error in the states. Finite difference discretization schemes are used to discretize the first-order system. After numerical discretization, the algorithm equations are derived taking inspiration from the Kalman filter, but using one of the space variables as a time-like variable. The given Dirichlet and Neumann boundary conditions are used on the Cauchy data boundary, and fictitious points are introduced on the unknown solution boundary. The algorithm is run for a number of iterations using the solution of the previous iteration as a guess for the next one. The method developed is highly robust to noise in the Cauchy data, and numerically efficient results are illustrated.

  8. A sparse matrix based full-configuration interaction algorithm

    International Nuclear Information System (INIS)

    Rolik, Zoltan; Szabados, Agnes; Surjan, Peter R.

    2008-01-01

    We present an algorithm related to the full-configuration interaction (FCI) method that makes complete use of the sparse nature of the coefficient vector representing the many-electron wave function in a determinantal basis. The main achievements of the presented sparse FCI (SFCI) algorithm are (i) development of an iteration procedure that avoids the storage of FCI-size vectors and (ii) development of an efficient algorithm to evaluate the effect of the Hamiltonian when both the initial and the product vectors are sparse. As a result of point (i), large disk operations can be skipped, which otherwise may be a bottleneck of the procedure. At point (ii) we progress by adopting the implementation of the linear transformation by Olsen et al. [J. Chem. Phys. 89, 2185 (1988)] for the sparse case, making the algorithm applicable to larger systems and faster at the same time. The error of an SFCI calculation depends only on the dropout thresholds for the sparse vectors, and can be tuned by controlling the amount of system memory passed to the procedure. The algorithm makes it possible to perform FCI calculations on single-node workstations for systems previously accessible only by supercomputers.

  9. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  10. Development of information preserving data compression algorithm for CT images

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio

    1989-01-01

    Although digital imaging techniques in radiology develop rapidly, problems arise in archival storage and communication of image data. This paper reports on a new information-preserving data compression algorithm for computed tomographic (CT) images. This algorithm consists of the following five processes: 1. Pixels surrounding the human body showing CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. Image data, obtained with the above process, are decomposed into bit planes. 5. The bit state transitions in each bit plane are encoded by run length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34, and 4.40, respectively. (author)
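    The pipeline lends itself to a compact sketch. The Python code below illustrates steps 1, 2, 4 and 5 (background thresholding, neighbour differencing along matrix lines, bit-plane decomposition, and run-length coding of bit-state transitions); the custom difference code of step 3 is omitted, and all names and the integer layout are assumptions of this sketch rather than the author's implementation.

    import numpy as np

    def compress_ct_slice(image_hu, background_threshold=-900):
        """Sketch of the information-preserving pipeline (steps 1, 2, 4 and 5 only)."""
        img = np.asarray(image_hu, dtype=np.int16)
        # Step 1: pixels outside the body (below -900 H.U.) are clamped to the threshold
        body_mask = img > background_threshold
        cleaned = np.where(body_mask, img, background_threshold)

        # Step 2: encode each pixel as the difference from its left neighbour along each matrix line
        diffs = cleaned.copy()
        diffs[:, 1:] = cleaned[:, 1:] - cleaned[:, :-1]

        # Step 4: decompose the (offset) difference values into bit planes
        unsigned = (diffs.astype(np.int32) - diffs.min()).astype(np.uint16)
        planes = [(unsigned >> bit) & 1 for bit in range(16)]

        # Step 5: run-length encode the bit-state transitions in each bit plane
        encoded = []
        for plane in planes:
            flat = plane.ravel()
            change = np.flatnonzero(np.diff(flat)) + 1
            runs = np.diff(np.concatenate(([0], change, [flat.size])))
            encoded.append((flat[0], runs))           # (starting bit value, run lengths)
        return diffs.min(), encoded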

  11. Development and verification of an analytical algorithm to predict absorbed dose distributions in ocular proton therapy using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Koch, Nicholas C; Newhauser, Wayne D

    2010-01-01

    Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.

  12. Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2012-01-01

    Full Text Available The combined simulated annealing (CSA) algorithm was developed for the discrete facility location problem (DFLP) in this paper. The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customer demand under the determined location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm on the DFLP and offers a reasonable new alternative solution method for it.
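    A minimal sketch of the two-layer structure is given below in Python: the outer layer performs simulated annealing over the binary open/close decision for each candidate facility, while the inner layer, simplified here to a cheapest-open-facility assignment rather than a second annealing loop, allocates customer demand for a fixed location decision. The cost data, cooling schedule and parameter names are illustrative assumptions.

    import math
    import random

    def allocate(open_facilities, assign_cost):
        """Inner sub-problem: with locations fixed, serve each customer from its cheapest open facility."""
        if not open_facilities:
            return float("inf")
        return sum(min(assign_cost[c][f] for f in open_facilities) for c in range(len(assign_cost)))

    def csa_facility_location(fixed_cost, assign_cost, t0=100.0, cooling=0.95, steps=2000, seed=1):
        """Outer simulated-annealing layer over the binary open/close decision vector."""
        rng = random.Random(seed)
        n = len(fixed_cost)
        state = [rng.random() < 0.5 for _ in range(n)]
        def total(s):
            opened = [f for f in range(n) if s[f]]
            return sum(fixed_cost[f] for f in opened) + allocate(opened, assign_cost)
        best = cur = total(state)
        best_state, temp = state[:], t0
        for _ in range(steps):
            cand = state[:]
            cand[rng.randrange(n)] ^= True                      # flip one facility open/closed
            cost = total(cand)
            if cost < cur or rng.random() < math.exp((cur - cost) / temp):
                state, cur = cand, cost
                if cur < best:
                    best, best_state = cur, state[:]
            temp *= cooling
        return best_state, best

    # Toy instance: 3 candidate facilities, 4 customers
    fixed = [8.0, 6.0, 9.0]
    assign = [[2, 5, 9], [4, 3, 8], [9, 7, 1], [8, 6, 2]]
    print(csa_facility_location(fixed, assign))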

  13. The development of controller and navigation algorithm for underwater wall crawler

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyung Suck; Kim, Kyung Hoon; Kim, Min Young [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-01-01

    In this project, the control system of an underwater robotic vehicle (URV) for underwater wall inspection in the nuclear reactor pool or related facilities has been developed. The following four sub-projects have been studied: (1) development of the controller and motor driver for the URV, (2) development of the control algorithm for the tracking control of the URV, (3) development of the localization system, and (4) underwater experiments of the developed system. First, the dynamic characteristics of the thruster with the DC servo-motor were analyzed experimentally. Second, the controller board using the INTEL 80C196 was designed and constructed, and the software for communication and motor control was developed. Third, the PWM motor driver was developed. Fourth, the localization system using the laser scanner and inclinometer was developed and tested in the pool. Fifth, the dynamics of the URV were studied and proper control algorithms for the URV were proposed. Lastly, the validation of the integrated system was experimentally performed. (author). 27 refs., 51 figs., 8 tabs.

  14. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    Science.gov (United States)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling and analysis. Algorithms that have been developed thus far are adequate and have been proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  15. A unified algorithm for predicting partition coefficients for PBPK modeling of drugs and environmental chemicals

    International Nuclear Information System (INIS)

    Peyret, Thomas; Poulin, Patrick; Krishnan, Kannan

    2010-01-01

    The algorithms in the literature aimed at predicting tissue:blood partition coefficients (Ptb) for environmental chemicals and tissue:plasma partition coefficients based on total (Kp) or unbound concentration (Kpu) for drugs differ in their consideration of binding to hemoglobin, plasma proteins and charged phospholipids. The objective of the present study was to develop a unified algorithm such that Ptb, Kp and Kpu for both drugs and environmental chemicals could be predicted. The development of the unified algorithm was accomplished by integrating all mechanistic algorithms previously published to compute the PCs. Furthermore, the algorithm was structured in such a way as to facilitate predictions of the distribution of organic compounds at the macro (i.e., whole tissue) and micro (i.e., cells and fluids) levels. The resulting unified algorithm was applied to compute the rat Ptb, Kp or Kpu of muscle (n = 174), liver (n = 139) and adipose tissue (n = 141) for acidic, neutral, zwitterionic and basic drugs as well as ketones, acetate esters, alcohols, aliphatic hydrocarbons, aromatic hydrocarbons and ethers. The unified algorithm reproduced adequately the values predicted previously by the published algorithms for a total of 142 drugs and chemicals. The sensitivity analysis demonstrated the relative importance of the various compound properties reflective of specific mechanistic determinants relevant to the prediction of PC values of drugs and environmental chemicals. Overall, the present unified algorithm uniquely facilitates the computation of macro- and micro-level PCs for developing organ and cellular-level PBPK models for both chemicals and drugs.

  16. Comparative analysis of instance selection algorithms for instance-based classifiers in the context of medical decision support

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Tourassi, Georgia D; Malof, Jordan M

    2011-01-01

    When constructing a pattern classifier, it is important to make best use of the instances (a.k.a. cases, examples, patterns or prototypes) available for its development. In this paper we present an extensive comparative analysis of algorithms that, given a pool of previously acquired instances, attempt to select those that will be the most effective to construct an instance-based classifier in terms of classification performance, time efficiency and storage requirements. We evaluate seven previously proposed instance selection algorithms and compare their performance to simple random selection of instances. We perform the evaluation using k-nearest neighbor classifier and three classification problems: one with simulated Gaussian data and two based on clinical databases for breast cancer detection and diagnosis, respectively. Finally, we evaluate the impact of the number of instances available for selection on the performance of the selection algorithms and conduct initial analysis of the selected instances. The experiments show that for all investigated classification problems, it was possible to reduce the size of the original development dataset to less than 3% of its initial size while maintaining or improving the classification performance. Random mutation hill climbing emerges as the superior selection algorithm. Furthermore, we show that some previously proposed algorithms perform worse than random selection. Regarding the impact of the number of instances available for the classifier development on the performance of the selection algorithms, we confirm that the selection algorithms are generally more effective as the pool of available instances increases. In conclusion, instance selection is generally beneficial for instance-based classifiers as it can improve their performance, reduce their storage requirements and improve their response time. However, choosing the right selection algorithm is crucial.
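    Since random mutation hill climbing emerges as the superior selection strategy, the Python sketch below illustrates the idea for a k-nearest neighbor classifier: keep a fixed-size subset of the instance pool and repeatedly swap one selected instance for an unselected one, accepting swaps that do not hurt validation accuracy. The subset size, iteration count and scoring scheme are assumptions of this sketch, not the paper's protocol; the inputs are assumed to be NumPy arrays.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def rmhc_instance_selection(X_pool, y_pool, X_val, y_val, n_keep=30, iterations=500, k=1, seed=0):
        """Random mutation hill climbing over a fixed-size subset of the instance pool (sketch)."""
        rng = np.random.default_rng(seed)
        selected = rng.choice(len(X_pool), size=n_keep, replace=False)

        def score(idx):
            clf = KNeighborsClassifier(n_neighbors=k).fit(X_pool[idx], y_pool[idx])
            return clf.score(X_val, y_val)

        best = score(selected)
        for _ in range(iterations):
            candidate = selected.copy()
            # Mutate: swap one selected instance for one currently unselected instance
            out_pos = rng.integers(n_keep)
            unselected = np.setdiff1d(np.arange(len(X_pool)), candidate)
            candidate[out_pos] = rng.choice(unselected)
            s = score(candidate)
            if s >= best:                      # accept equal or better subsets
                selected, best = candidate, s
        return selected, best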

  17. A homotopy algorithm for digital optimal projection control GASD-HADOC

    Science.gov (United States)

    Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.

    1993-01-01

    The linear-quadratic-Gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-loop solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures. This deficiency motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such approach is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.

  18. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    Science.gov (United States)

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, John L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for cloud cover assessment for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 x 10^9 pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  19. Development of transmission dose estimation algorithm for in vivo dosimetry in high energy radiation treatment

    International Nuclear Information System (INIS)

    Yun, Hyong Geun; Shin, Kyo Chul; Hun, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2004-01-01

    In vivo dosimetry is very important for quality assurance purposes in high-energy radiation treatment. Measurement of transmission dose is a new method of in vivo dosimetry which is noninvasive and easy to perform daily. This study develops a tumor dose estimation algorithm using measured transmission dose for open radiation fields. For basic beam data, transmission dose was measured with various field sizes (FS) of square radiation fields, phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. The source-to-chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer-type ion chamber. By using regression analysis of the measured basic beam data, a transmission dose estimation algorithm was developed. The accuracy of the algorithm was tested with a flat solid phantom of various thicknesses in various settings of rectangular fields and various PCD. In the developed algorithm, the transmission dose is equated to a quadratic function of log(A/P) (where A/P is the area-perimeter ratio), and the coefficients of the quadratic functions are equated to tertiary (third-order) functions of PCD. The developed algorithm could estimate the radiation dose with errors within ±0.5% for open square fields and within ±1.0% for open elongated radiation fields. The developed algorithm could accurately estimate the transmission dose in open radiation fields with various treatment settings of high-energy radiation treatment. (author)
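    The stated functional form, a quadratic in log(A/P) whose coefficients are in turn polynomial functions of PCD, can be sketched as a two-stage least-squares fit. The Python code below illustrates that structure only; the actual regression procedure, data handling and coefficient orders used by the authors are not reproduced, and the polyfit-based fitting is an assumption.

    import numpy as np

    def fit_transmission_model(log_ap, pcd, dose, pcd_order=3):
        """Two-stage fit: dose quadratic in log(A/P), each quadratic coefficient polynomial in PCD."""
        log_ap, pcd, dose = map(np.asarray, (log_ap, pcd, dose))
        coeff_rows = []                         # one (a, b, c) row of dose = a*x^2 + b*x + c per PCD level
        pcd_levels = np.unique(pcd)
        for p in pcd_levels:
            sel = pcd == p
            coeff_rows.append(np.polyfit(log_ap[sel], dose[sel], 2))
        coeff_rows = np.array(coeff_rows)
        # Each quadratic coefficient modelled as a polynomial (third-order by default) in PCD
        coeff_models = [np.polyfit(pcd_levels, coeff_rows[:, j], pcd_order) for j in range(3)]
        return coeff_models

    def predict_transmission_dose(coeff_models, log_ap, pcd):
        a, b, c = (np.polyval(m, pcd) for m in coeff_models)
        return a * log_ap**2 + b * log_ap + c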

  20. jClustering, an open framework for the development of 4D clustering algorithms.

    Directory of Open Access Journals (Sweden)

    José María Mateos-Pérez

    Full Text Available We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary.

  1. Performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2014-01-01

    The performance of the ATLAS Inner Detector (ID) Trigger algorithms being developed for running on the ATLAS High Level Trigger (HLT) processor farm during Run 2 of the LHC is presented. During the 2013-14 LHC long shutdown, modifications are being carried out to the LHC accelerator to increase both the beam energy and luminosity. These modifications will pose significant challenges for the ID Trigger algorithms, both in terms of execution time and physics performance. To meet these challenges, the ATLAS HLT software is being restructured to run as a more flexible single-stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. This will reduce the overall data volume that needs to be requested by the HLT system, since data will no longer need to be requested for each of the two separate processing stages. Development of the ID Trigger algorithms for Run 2, currently expected to be ready for detector commissioning near the end of 2014, is progressing well and the current efforts towards op...

  2. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with review of the patient history to identify predictors for heparin resistance. The definition of heparin resistance contained in the algorithm is failure to reach the target activated clotting time despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.

  3. A new automated assign and analysing method for high-resolution rotationally resolved spectra using genetic algorithms

    NARCIS (Netherlands)

    Meerts, W.L.; Schmitt, M.

    2006-01-01

    This paper describes a numerical technique that has recently been developed to automatically assign and fit high-resolution spectra. The method makes use of genetic algorithms (GA). The current algorithm is compared with previously used analysing methods. The general features of the GA and its

  4. Development and testing of incident detection algorithms. Vol. 2, research methodology and detailed results.

    Science.gov (United States)

    1976-04-01

    The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on times series and pattern recognition techniques. Attention was given to the effects o...

  5. Designing algorithm visualization on mobile platform: The proposed guidelines

    Science.gov (United States)

    Supli, A. A.; Shiratuddin, N.

    2017-09-01

    This paper entails an ongoing study about the design guidelines of algorithm visualization (AV) on the mobile platform, helping students learn the data structures and algorithms (DSA) subject effectively. Our previous review indicated that design guidelines of AV on the mobile platform are still few; mostly, previous AV guidelines were developed for AV on desktop and website platforms. In fact, mobile learning has been shown to enhance engagement in learning circumstances and thus affect students' performance. In addition, researchers highly recommend including UI design and interactivity in designing an effective AV system. However, the discussions of these two aspects in previous AV design guidelines are not comprehensive. The UI design in this paper describes the arrangement of AV features in the mobile environment, whereas interactivity is about the active-learning strategy features based on learning experiences (how to engage learners). Thus, this study's main objective is to propose design guidelines for AV on the mobile platform (AVOMP) that comprehensively cover UI design and interactivity aspects. These guidelines are developed through content analysis and comparative analysis of various related studies. These guidelines are useful for AV designers to help them construct AVOMP for various topics in DSA.

  6. A prediction algorithm for first onset of major depression in the general population: development and validation.

    Science.gov (United States)

    Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden

    2014-05-01

    Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with approximately 3-year follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). Major depression occurring since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions was assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule (Diagnostic and Statistical Manual of Mental Disorders IV). A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ(2)=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
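    The development/validation workflow described above can be sketched generically: fit a logistic regression risk model on the development regions, then report discrimination (C statistic) and calibration (a Hosmer-Lemeshow style test) on the held-out region. The Python sketch below shows that workflow with placeholder data arrays; it does not reproduce the published algorithm's 17 risk factors or coefficients, and the binning scheme is an assumption.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from scipy.stats import chi2

    def develop_and_validate(X_dev, y_dev, X_val, y_val, n_bins=10):
        """Develop a risk model on one sample and validate it on another (illustrative sketch).

        Reports discrimination (C statistic) and a Hosmer-Lemeshow style calibration p-value.
        Inputs are assumed to be NumPy arrays.
        """
        model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
        risk = model.predict_proba(X_val)[:, 1]

        c_statistic = roc_auc_score(y_val, risk)

        # Hosmer-Lemeshow: compare observed vs expected events within deciles of predicted risk
        order = np.argsort(risk)
        bins = np.array_split(order, n_bins)
        hl = 0.0
        for idx in bins:
            obs, exp = y_val[idx].sum(), risk[idx].sum()
            n = len(idx)
            if 0 < exp < n:
                hl += (obs - exp) ** 2 / exp + ((n - obs) - (n - exp)) ** 2 / (n - exp)
        p_value = 1.0 - chi2.cdf(hl, df=n_bins - 2)
        return model, c_statistic, p_value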

  7. Leadership development in the age of the algorithm.

    Science.gov (United States)

    Buckingham, Marcus

    2012-06-01

    By now we expect personalized content--it's routinely served up by online retailers and news services, for example. But the typical leadership development program still takes a formulaic, one-size-fits-all approach. And it rarely happens that an excellent technique can be effectively transferred from one leader to all others. Someone trying to adopt a practice from a leader with a different style usually seems stilted and off--a Franken-leader. Breakthrough work at Hilton Hotels and other organizations shows how companies can use an algorithmic model to deliver training tips uniquely suited to each individual's style. It's a five-step process: First, a company must choose a tool with which to identify each person's leadership type. Second, it should assess its best leaders, and third, it should interview them about their techniques. Fourth, it should use its algorithmic model to feed tips drawn from those techniques to developing leaders of the same type. And fifth, it should make the system dynamically intelligent, with user reactions sharpening the content and targeting of tips. The power of this kind of system--highly customized, based on peer-to-peer sharing, and continually evolving--will soon overturn the generic model of leadership development. And such systems will inevitably break through any one organization, until somewhere in the cloud the best leadership tips from all over are gathered, sorted, and distributed according to which ones suit which people best.

  8. DOOCS environment for FPGA-based cavity control system and control algorithms development

    International Nuclear Information System (INIS)

    Pucyk, P.; Koprek, W.; Kaleta, P.; Szewinski, J.; Pozniak, K.T.; Czarski, T.; Romaniuk, R.S.

    2005-01-01

    The paper describes the concept and realization of the DOOCS control software for the FPGA-based TESLA cavity controller and simulator (SIMCON). It is based on universal software components created for laboratory purposes and used in a MATLAB-based control environment. These modules have been recently adapted to the DOOCS environment to ensure a unified software-to-hardware communication model. The presented solution can also be used as a general platform for control algorithm development. The proposed interfaces between MATLAB and DOOCS modules allow the developed algorithm to be checked in the operation environment before implementation in the FPGA. As examples, two systems have been presented. (orig.)

  9. Implementation of a parallel algorithm for spherical SN calculations on the IBM 3090

    International Nuclear Information System (INIS)

    Haghighat, A.; Lawrence, R.D.

    1989-01-01

    Parallel SN algorithms based on domain decomposition in angle are straightforward to develop in Cartesian geometry because the computation of the angular fluxes for a specific discrete ordinate can be performed independently of all other angles. This is not the case for curvilinear geometries, where the angular redistribution component of the discretized streaming operator results in coupling between angular fluxes along adjacent discrete ordinates. Previously, the authors developed a parallel algorithm for SN calculations in spherical geometry and examined its iterative convergence for criticality and detector problems with differing scattering/absorption ratios. In this paper, the authors describe the implementation of the algorithm on an IBM 3090 Model 400 (four processors) and present computational results illustrating the efficiency of the algorithm relative to serial execution.

  10. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  11. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R^2, while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
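    The horizontal-translation idea can be illustrated with a few lines of Python: each succeeding recession segment is shifted in time so that its vertex (highest value) lands on the curve built from the preceding segments. Linear interpolation between measurement points and the toy discharge values are assumptions of this sketch, which is not the MRCTools VBA implementation.

    import numpy as np

    def build_mrc(segments):
        """Assemble a master recession curve by horizontally translating each segment."""
        # Start the master recession curve (MRC) with the first segment, time-zeroed
        t0, v0 = np.asarray(segments[0][0], float), np.asarray(segments[0][1], float)
        mrc_t, mrc_v = list(t0 - t0[0]), list(v0)
        for t, v in segments[1:]:
            t, v = np.asarray(t, float), np.asarray(v, float)
            vertex = v[0]                                        # highest recorded value of the segment
            # Find the MRC time at which the (decreasing) curve equals the vertex value
            mt, mv = np.array(mrc_t), np.array(mrc_v)
            t_on_mrc = np.interp(vertex, mv[::-1], mt[::-1])     # np.interp needs increasing x, so reverse
            shift = t_on_mrc - t[0]
            mrc_t.extend(t + shift)
            mrc_v.extend(v)
        order = np.argsort(mrc_t)
        return np.array(mrc_t)[order], np.array(mrc_v)[order]

    # Two overlapping recession segments (time in days, toy discharge values)
    seg1 = ([0, 1, 2, 3], [10.0, 8.0, 6.5, 5.5])
    seg2 = ([0, 1, 2], [7.0, 6.0, 5.0])
    print(build_mrc([seg1, seg2]))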

  12. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives in which one, the other, or both sub-algorithms are turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase substantially with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  13. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States)

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Inputs to the method are the coefficient matrix and the zero-energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real-world problems from solid elasticity, plate bending, and shells.

  14. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  15. Successive combination jet algorithm for hadron collisions

    International Nuclear Information System (INIS)

    Ellis, S.D.; Soper, D.E.

    1993-01-01

    Jet finding algorithms, as they are used in e+e- and hadron collisions, are reviewed and compared. It is suggested that a successive combination style algorithm, similar to that used in e+e- physics, might be useful also in hadron collisions, where cone style algorithms have been used previously.
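    A successive combination algorithm of the kind suggested here proceeds by repeatedly merging the pair of objects with the smallest distance measure, or promoting an object to a jet when its beam distance is smallest. The Python sketch below uses the standard kT-style distances d_iB = pt_i^2 and d_ij = min(pt_i, pt_j)^2 * dR^2 / R^2 with a simple pt-weighted recombination; it is an illustration of the general scheme, not the algorithm exactly as defined in the cited paper.

    import math

    def kt_cluster(particles, R=1.0):
        """Minimal successive-combination (kT-style) jet clustering sketch.

        particles : list of (pt, rapidity, phi) tuples
        Returns the list of jets as (pt, rapidity, phi) tuples.
        """
        objs = [list(p) for p in particles]
        jets = []
        while objs:
            # Beam distances d_iB = pt_i^2 and pairwise distances d_ij
            d_beam = [(p[0] ** 2, i) for i, p in enumerate(objs)]
            d_pair = []
            for i in range(len(objs)):
                for j in range(i + 1, len(objs)):
                    dphi = abs(objs[i][2] - objs[j][2])
                    dphi = min(dphi, 2 * math.pi - dphi)
                    dr2 = (objs[i][1] - objs[j][1]) ** 2 + dphi ** 2
                    dij = min(objs[i][0], objs[j][0]) ** 2 * dr2 / R ** 2
                    d_pair.append((dij, i, j))
            min_beam = min(d_beam)
            min_pair = min(d_pair) if d_pair else (float("inf"), -1, -1)
            if min_beam[0] <= min_pair[0]:
                jets.append(tuple(objs.pop(min_beam[1])))        # promote to a jet
            else:
                _, i, j = min_pair
                pi, pj = objs[i], objs[j]
                pt = pi[0] + pj[0]                                # simple pt-weighted recombination
                y = (pi[0] * pi[1] + pj[0] * pj[1]) / pt
                phi = (pi[0] * pi[2] + pj[0] * pj[2]) / pt
                objs[i] = [pt, y, phi]
                del objs[j]
        return jets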

  16. Proportionate Minimum Error Entropy Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Zongze Wu

    2015-08-01

    Full Text Available Sparse system identification has received a great deal of attention due to its broad applicability. The proportionate normalized least mean square (PNLMS algorithm, as a popular tool, achieves excellent performance for sparse system identification. In previous studies, most of the cost functions used in proportionate-type sparse adaptive algorithms are based on the mean square error (MSE criterion, which is optimal only when the measurement noise is Gaussian. However, this condition does not hold in most real-world environments. In this work, we use the minimum error entropy (MEE criterion, an alternative to the conventional MSE criterion, to develop the proportionate minimum error entropy (PMEE algorithm for sparse system identification, which may achieve much better performance than the MSE based methods especially in heavy-tailed non-Gaussian situations. Moreover, we analyze the convergence of the proposed algorithm and derive a sufficient condition that ensures the mean square convergence. Simulation results confirm the excellent performance of the new algorithm.

  17. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme.

  18. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  19. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  20. Design requirements and development of an airborne descent path definition algorithm for time navigation

    Science.gov (United States)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.

  1. Efficient Serial and Parallel Algorithms for Selection of Unique Oligos in EST Databases.

    Science.gov (United States)

    Mata-Montero, Manrique; Shalaby, Nabil; Sheppard, Bradley

    2013-01-01

    Obtaining unique oligos from an EST database is a problem of great importance in bioinformatics, particularly in the discovery of new genes and the mapping of the human genome. Many algorithms have been developed to find unique oligos, many of which are much less time consuming than the traditional brute force approach. An algorithm was presented by Zheng et al. (2004) which finds the solution of the unique oligos search problem efficiently. We implement this algorithm as well as several new algorithms based on some theorems included in this paper. We demonstrate how, with these new algorithms, we can obtain unique oligos much faster than with previous ones. We parallelize these new algorithms to further improve the time of finding unique oligos. All algorithms are run on ESTs obtained from a Barley EST database.

  2. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
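    For readers new to the topic, the Python sketch below shows a textbook genetic algorithm, i.e., tournament selection, one-point crossover and bit-flip mutation, applied to the toy "one-max" problem. It illustrates the basic concepts the record introduces and is not the software tool developed in the cited project; all parameter values are illustrative.

    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=100,
                          crossover_rate=0.9, mutation_rate=0.02, seed=0):
        """Textbook GA: tournament selection, one-point crossover, bit-flip mutation."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(generations):
            new_pop = []
            while len(new_pop) < pop_size:
                # Tournament selection of two parents
                parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(2)]
                child1, child2 = parents[0][:], parents[1][:]
                if rng.random() < crossover_rate:                 # one-point crossover
                    cut = rng.randrange(1, n_bits)
                    child1[cut:], child2[cut:] = parents[1][cut:], parents[0][cut:]
                for child in (child1, child2):                    # bit-flip mutation
                    for k in range(n_bits):
                        if rng.random() < mutation_rate:
                            child[k] ^= 1
                    new_pop.append(child)
            pop = new_pop[:pop_size]
            best = max(pop + [best], key=fitness)
        return best, fitness(best)

    # Toy objective: maximize the number of 1-bits ("one-max")
    print(genetic_algorithm(fitness=sum))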

  3. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Directory of Open Access Journals (Sweden)

    Danping Wang

    2017-01-01

    Full Text Available A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history memorized in the Binary Space Partitioning fitness tree can effectively restrain the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to a local optimum. To demonstrate the power of the method, the proposed algorithm is compared with state-of-the-art algorithms on 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays a competitive performance compared to the other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.

  4. Efficient Record Linkage Algorithms Using Complete Linkage Clustering.

    Science.gov (United States)

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Data from different agencies often include records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete-linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a subroutine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times.
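    The combination of blocking with complete-linkage clustering can be sketched compactly. In the Python illustration below, records are blocked on a crude key (first letter of the name), pairwise distances are computed within each block, and complete-linkage clusters cut at a distance threshold define the linked groups. The blocking key, the toy string distance and the threshold are assumptions of this sketch, not the paper's choices.

    import itertools
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def name_distance(a, b):
        """Crude normalized distance between two name strings (illustrative only)."""
        matches = sum(x == y for x, y in zip(a, b))
        return 1.0 - matches / max(len(a), len(b), 1)

    def link_records(records, threshold=0.3):
        """Blocking plus complete-linkage clustering for record linkage (sketch).

        records : list of (record_id, name) pairs
        """
        clusters = []
        key = lambda r: r[1][:1].lower()
        for _, block in itertools.groupby(sorted(records, key=key), key=key):
            block = list(block)
            if len(block) == 1:
                clusters.append([block[0][0]])
                continue
            # Condensed pairwise distance matrix within the block
            d = [name_distance(a[1], b[1]) for a, b in itertools.combinations(block, 2)]
            labels = fcluster(linkage(d, method="complete"), t=threshold, criterion="distance")
            for lab in np.unique(labels):
                clusters.append([block[i][0] for i in np.where(labels == lab)[0]])
        return clusters

    recs = [(1, "john smith"), (2, "john smyth"), (3, "jane doe")]
    print(link_records(recs))   # records 1 and 2 end up in the same cluster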

  5. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time saving because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process

  6. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    Science.gov (United States)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  7. Improvement of Parallel Algorithm for MATRA Code

    International Nuclear Information System (INIS)

    Kim, Seong-Jin; Seo, Kyong-Won; Kwon, Hyouk; Hwang, Dae-Hyun

    2014-01-01

    The feasibility study to parallelize the MATRA code was conducted in KAERI early this year. As a result, a parallel algorithm for the MATRA code has been developed to decrease the considerable computing time required to solve a big-size problem, such as a whole-core pin-by-pin problem of a general PWR reactor, and to improve the overall performance of multi-physics coupling calculations. It was shown that the performance of the MATRA code was greatly improved by implementing the parallel algorithm using MPI communication. For problems of a 1/8 core and the whole core of the SMART reactor, a speedup of about 10 was evaluated when the number of processors used was 25. However, it was also shown that the performance deteriorated as the axial node number increased. In this paper, the procedure of communication between processors is optimized to improve the previous parallel algorithm. To address the performance deterioration of the parallelized MATRA code, a new communication algorithm between processors is presented. It was shown that the speedup was improved and remained stable regardless of the axial node number.

  8. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  9. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  10. A flexible fuzzy regression algorithm for forecasting oil consumption estimation

    International Nuclear Information System (INIS)

    Azadeh, A.; Khakestani, M.; Saberi, M.

    2009-01-01

    Oil consumption plays a vital role in the socio-economic development of most countries. This study presents a flexible fuzzy regression algorithm for forecasting oil consumption based on standard economic indicators. The standard indicators are annual population, cost of crude oil import, gross domestic product (GDP) and annual oil production in the last period. The proposed algorithm uses analysis of variance (ANOVA) to select either fuzzy regression or conventional regression for future demand estimation. The significance of the proposed algorithm is threefold. First, it is flexible and identifies the best model based on the results of ANOVA and the minimum absolute percentage error (MAPE), whereas previous studies consider the best fitted fuzzy regression model based on MAPE or other relative error results. Second, the proposed model may identify conventional regression as the best model for future oil consumption forecasting because of its dynamic structure, whereas previous studies assume that fuzzy regression always provides the best solutions and estimates. Third, it utilizes the most standard independent variables for the regression models. To show the applicability and superiority of the proposed flexible fuzzy regression algorithm, the data for oil consumption in Canada, United States, Japan and Australia from 1990 to 2005 are used. The results show that the flexible algorithm provides an accurate solution to the oil consumption estimation problem. The algorithm may be used by policy makers to accurately foresee the behavior of oil consumption in various regions.
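
    The selection step (pick whichever candidate regression gives the lower MAPE) can be sketched as below; the fuzzy model is stubbed out as a hypothetical callable and the data are synthetic, since the paper's fuzzy-regression formulation is not reproduced here.

        import numpy as np

        def mape(y_true, y_pred):
            # Mean absolute percentage error used to rank candidate models.
            return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

        def fit_ols(X, y):
            # Conventional least-squares regression with an intercept.
            beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)
            return lambda Xn: np.c_[np.ones(len(Xn)), Xn] @ beta

        def fit_fuzzy(X, y):
            # Placeholder for a fuzzy-regression fit (hypothetical; not the paper's model).
            return fit_ols(X, y)

        X = np.random.default_rng(2).uniform(1, 10, (30, 4))   # population, GDP, etc. (synthetic)
        y = X @ np.array([0.5, 1.2, 0.3, 0.8]) + 5.0

        candidates = {"conventional": fit_ols(X, y), "fuzzy": fit_fuzzy(X, y)}
        errors = {name: mape(y, model(X)) for name, model in candidates.items()}
        best = min(errors, key=errors.get)                      # pick the lower-MAPE model
        print(best, errors)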

  11. Prosthetic joint infection development of an evidence-based diagnostic algorithm.

    Science.gov (United States)

    Mühlhofer, Heinrich M L; Pohlig, Florian; Kanz, Karl-Georg; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; Kelch, Sarah; Harrasser, Norbert; von Eisenhart-Rothe, Rüdiger; Schauwecker, Johannes

    2017-03-09

    Increasing rates of prosthetic joint infection (PJI) have presented challenges for general practitioners, orthopedic surgeons and the health care system in recent years. The diagnosis of PJI is complex; multiple diagnostic tools are used in the attempt to correctly diagnose PJI. Evidence-based algorithms can help to identify PJI using standardized diagnostic steps. We reviewed relevant publications between 1990 and 2015 using a systematic literature search in MEDLINE and PUBMED. The selected search results were then classified into levels of evidence. The keywords were prosthetic joint infection, biofilm, diagnosis, sonication, antibiotic treatment, implant-associated infection, Staph. aureus, rifampicin, implant retention, PCR, MALDI-TOF, serology, synovial fluid, C-reactive protein level, total hip arthroplasty (THA), total knee arthroplasty (TKA) and combinations of these terms. From an initial 768 publications, 156 publications were stringently reviewed. Publications with class I-III recommendations (EAST) were considered. We developed an algorithm for the diagnostic approach to display the complex diagnosis of PJI in a clear and logically structured process according to ISO 5807. The evidence-based standardized algorithm combines modern clinical requirements and evidence-based treatment principles. The algorithm provides a detailed transparent standard operating procedure (SOP) for diagnosing PJI. Thus, consistently high, examiner-independent process quality is assured to meet the demands of modern quality management in PJI diagnosis.

  12. Development of algorithm for continuous generation of a computer game in terms of usability and optimization of developed code in computer science

    Directory of Open Access Journals (Sweden)

    Tibor Skala

    2018-03-01

    As both hardware and software have become increasingly available and are constantly developed, they contribute globally to improvements in every field of technology and the arts. Digital tools for the creation and processing of graphical content are highly developed and have been designed to shorten the time required for content creation, which is, in this case, animation. Since contemporary animation has experienced a surge in various visual styles and visualization methods, programming is built into everything that is currently in use. There is no doubt that a variety of algorithms and software are the brain and the moving force behind any idea created for a specific purpose and applicability in society. Art and technology combined make a direct and oriented medium for publishing and marketing in every industry, including those that are not necessarily closely related to ones that rely heavily on the visual aspect of work. Additionally, the quality and consistency of an algorithm will also depend on its proper integration into the system that will be powered by it, as well as on the way the algorithm is designed. The development of an endless algorithm and its effective use are shown through the use of a computer game. In order to present the effect of various parameters, in the final phase of the computer game development the endless algorithm was tested with a varying number of key input parameters (achieved time, score reached, pace of the game).

  13. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    International Nuclear Information System (INIS)

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs

  14. Weak and Strong Convergence of an Algorithm for the Split Common Fixed-Point of Asymptotically Quasi-Nonexpansive Operators

    Directory of Open Access Journals (Sweden)

    Yazheng Dang

    2013-01-01

    Inspired by Moudafi (2010), we propose an algorithm for solving the split common fixed-point problem for a wide class of asymptotically quasi-nonexpansive operators, and the weak and strong convergence of the algorithm is shown under suitable conditions in Hilbert spaces. The algorithm and its convergence results improve and develop previous results for split feasibility problems.

  15. Rapid fish stock depletion in previously unexploited seamounts: the ...

    African Journals Online (AJOL)

    Rapid fish stock depletion in previously unexploited seamounts: the case of Beryx splendens from the Sierra Leone Rise (Gulf of Guinea) ... A spectral analysis and red-noise spectra procedure (REDFIT) algorithm was used to identify the red-noise spectrum from the gaps in the observed time-series of catch per unit effort by ...

  16. A Developed ESPRIT Algorithm for DOA Estimation

    Science.gov (United States)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for direction of arrival estimation (DOAE) for a target has been developed, with the aim of increasing the accuracy of the estimation process and decreasing the calculation costs. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity effect. The efficacy of the proposed algorithm is verified using Monte Carlo simulation, and the DOAE accuracy is evaluated with the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimated results are better than those of the normal ESPRIT methods, leading to enhanced estimator performance.
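
    For reference, the plain narrowband ESPRIT estimator on a uniform linear array can be sketched as follows; this is the textbook method, not the TS-ESPRIT variant of the paper, and the array geometry, source angles and noise level are arbitrary.

        import numpy as np

        def esprit_doa(X, n_sources, d=0.5):
            # Standard ESPRIT for a uniform linear array with element spacing d
            # (in wavelengths); X is (sensors x snapshots).
            R = X @ X.conj().T / X.shape[1]               # sample covariance
            w, V = np.linalg.eigh(R)
            Es = V[:, np.argsort(w)[::-1][:n_sources]]    # signal subspace
            Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]        # rotation between subarrays
            phases = np.angle(np.linalg.eigvals(Psi))
            return np.degrees(np.arcsin(phases / (2 * np.pi * d)))

        # Two synthetic plane waves at -10 and 25 degrees on an 8-element array.
        rng = np.random.default_rng(3)
        angles = np.radians([-10.0, 25.0])
        m, n = 8, 500
        A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(m), np.sin(angles)))
        S = rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))
        X = A @ S + 0.1 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))
        print(np.sort(esprit_doa(X, 2)))                  # close to [-10, 25]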

  17. Development and comparisons of wind retrieval algorithms for small unmanned aerial systems

    Science.gov (United States)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2012-12-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.

  18. Development of pattern recognition algorithms for the central drift chamber of the Belle II detector

    Energy Technology Data Exchange (ETDEWEB)

    Trusov, Viktor

    2016-11-04

    In this thesis, the development of one of the pattern recognition algorithms for the Belle II experiment based on conformal and Legendre transformations is presented. In order to optimize the performance of the algorithm (CPU time and efficiency), specialized processing steps have been introduced. To show the achieved results, Monte Carlo-based efficiency measurements of the tracking algorithms in the Central Drift Chamber (CDC) have been performed.

  19. A Recommendation Algorithm for Automating Corollary Order Generation

    Science.gov (United States)

    Klann, Jeffrey; Schadow, Gunther; McCoy, JM

    2009-01-01

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards. PMID:20351875
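
    A toy item-based collaborative-filtering step of the kind named above (cosine similarity over a binary order-item matrix) is sketched below; the order data, item names and the omission of the paper's interestingness-measure augmentation are all simplifications.

        import numpy as np

        # Toy order-item matrix: rows are orders, columns are orderable items (hypothetical data).
        items = ["warfarin", "INR test", "heparin", "aPTT test", "chest x-ray"]
        orders = np.array([
            [1, 1, 0, 0, 0],
            [1, 1, 0, 0, 1],
            [0, 0, 1, 1, 0],
            [0, 0, 1, 1, 1],
            [1, 0, 0, 0, 0],
        ], dtype=float)

        # Item-item cosine similarity (the core of item-based collaborative filtering).
        norms = np.linalg.norm(orders, axis=0)
        sim = (orders.T @ orders) / np.outer(norms, norms)
        np.fill_diagonal(sim, 0.0)

        # Suggest corollary orders for a given item: the most similar co-ordered items.
        seed = items.index("warfarin")
        ranked = np.argsort(sim[seed])[::-1]
        print([(items[j], round(sim[seed, j], 2)) for j in ranked[:2]])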

  20. Genetic Algorithms for Development of New Financial Products

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2007-06-01

    New Product Development (NPD) is recognized as a fundamental activity that has a relevant impact on the performance of companies. Despite the relevance of the financial market, there is a lack of work on new financial product development. The aim of this research is to propose the use of Genetic Algorithms (GA) as an alternative procedure for evaluating the most favorable combination of variables for the product launch. The paper focuses on: (i) determining the essential variables of the financial product studied (an investment fund); (ii) determining how to evaluate the success of a new investment fund launch; and (iii) determining how GA can be applied to the financial product development problem. The proposed framework was tested using 4 years of real data from the Brazilian financial market, and the results suggest that this is an innovative development methodology, useful for designing complex financial products with many attributes.
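
    A bare-bones genetic algorithm over a binary encoding of product attributes looks like the sketch below; the attribute count, selection scheme and fitness function are stand-ins, not the launch-success measure or encoding used in the paper.

        import random

        random.seed(4)
        N_ATTR = 10                      # binary switches describing a candidate fund design

        def fitness(design):
            # Stand-in objective (not the paper's launch-success measure).
            weights = [3, -1, 2, 5, -2, 1, 4, -3, 2, 1]
            return sum(w * g for w, g in zip(weights, design))

        def crossover(a, b):
            cut = random.randrange(1, N_ATTR)
            return a[:cut] + b[cut:]

        def mutate(design, rate=0.05):
            return [1 - g if random.random() < rate else g for g in design]

        pop = [[random.randint(0, 1) for _ in range(N_ATTR)] for _ in range(30)]
        for generation in range(50):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]                                   # truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(20)]
            pop = parents + children
        best = max(pop, key=fitness)
        print(best, fitness(best))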

  1. Designing synthetic networks in silico: a generalised evolutionary algorithm approach.

    Science.gov (United States)

    Smith, Robert W; van Sluijs, Bob; Fleck, Christian

    2017-12-02

    Evolution has led to the development of biological networks that are shaped by environmental signals. Elucidating, understanding and then reconstructing important network motifs is one of the principal aims of Systems & Synthetic Biology. Consequently, previous research has focused on finding optimal network structures and reaction rates that respond to pulses or produce stable oscillations. In this work we present a generalised in silico evolutionary algorithm that simultaneously finds network structures and reaction rates (genotypes) that can satisfy multiple defined objectives (phenotypes). The key step in our approach is to translate a schema/binary-based description of biological networks into systems of ordinary differential equations (ODEs). The ODEs can then be solved numerically to provide dynamic information about an evolved network's functionality. Initially we benchmark algorithm performance by finding optimal networks that can recapitulate concentration time-series data and by performing parameter optimisation on the oscillatory dynamics of the Repressilator. We go on to show the utility of our algorithm by finding new designs for robust synthetic oscillators, and by performing multi-objective optimisation to find a set of oscillators and feed-forward loops that are optimal at balancing different system properties. In sum, our results not only confirm and build on previous observations but also provide new designs of synthetic oscillators for experimental construction. In this work we have presented and tested an evolutionary algorithm that can design a biological network to produce a desired output. Given that previous designs of synthetic networks have been limited to subregions of network- and parameter-space, the use of our evolutionary optimisation algorithm will enable Synthetic Biologists to construct new systems with the potential to display a wider range of complex responses.

  2. Development Modules for Specification of Requirements for a System of Verification of Parallel Algorithms

    Directory of Open Access Journals (Sweden)

    Vasiliy Yu. Meltsov

    2012-05-01

    This paper presents the results of the development of one of the modules of the system for verification of parallel algorithms that are used to verify the inference engine. This module is designed to build the specification of requirements whose feasibility for the algorithm must be proven (tested).

  3. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  4. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.

  5. Development of Data Processing Algorithms for the Upgraded LHCb Vertex Locator

    CERN Document Server

    AUTHOR|(CDS)2101352

    The LHCb detector will see a major upgrade during LHC Long Shutdown II, which is planned for 2019/20. The silicon Vertex Locator subdetector will be upgraded for operation under the new run conditions. The detector will be read out using a data acquisition board based on an FPGA. The work presented in this thesis is concerned with the development of the data processing algorithms to be used in this data acquisition board. In particular, work in three different areas of the FPGA is covered: the data processing block, the low level interface, and the post router block. The algorithms produced have been simulated and tested, and shown to provide the required performance. Errors in the initial implementation of the Gigabit Wireline Transmitter serialized data in the low level interface were discovered and corrected. The data scrambling algorithm and the post router block have been incorporated in the front end readout chip.

  6. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    Science.gov (United States)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is a part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products that is being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5 - 1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash and biomass burning plumes. KNMI is assigned with the development of the Aerosol Layer Height (ALH) algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission that is to be launched in June or July 2016 (tentative date). The retrieval algorithm designed so far for the aerosol height product is based on the absorption characteristics of the oxygen-A band (759-770 nm). The algorithm has heritage to the ALH algorithm developed for TROPOMI on the Sentinel-5 Precursor satellite. New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm, compared to 0.4 nm for TROPOMI) and hourly observation from the geostationary orbit. The algorithm uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer with fixed width to represent the aerosol vertical distribution. The state vector includes amongst other elements the height of this layer and its aerosol optical

  7. Development of a meta-algorithm for guiding primary care encounters for patients with multimorbidity using evidence-based and case-based guideline development methodology.

    Science.gov (United States)

    Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin

    2017-06-22

    The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps: 1. Designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop. 2. Based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines was prepared. The recommendations were prioritised according to the clinical and psychosocial characteristics of the case vignettes. 3. Case vignettes along with the respective guideline recommendations were validated and specifically commented on by an external panel of practicing general practitioners (GPs). 4. Guideline recommendations and experts' opinions were summarised as case specific management recommendations (N-of-one guidelines). 5. Healthcare preferences of patients with multimorbidity were elicited from a systematic literature review and supplemented with information from qualitative interviews. 6. All N-of-one guidelines were analysed using pattern recognition to identify common decision nodes and care elements. These elements were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter of a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and hereby offer guidance to the practitioner. Contrary to simple, symptom-oriented algorithms, the meta-algorithm

  8. Automatic boiling water reactor control rod pattern design using particle swarm optimization algorithm and local search

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Cheng-Der, E-mail: jdwang@iner.gov.tw [Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan, ROC (China); Lin, Chaung [National Tsing Hua University, Department of Engineering and System Science, 101, Section 2, Kuang Fu Road, Hsinchu 30013, Taiwan (China)

    2013-02-15

    Highlights: ► The PSO algorithm was adopted to automatically design a BWR CRP. ► The local search procedure was added to improve the result of PSO algorithm. ► The results show that the obtained CRP is as good as that in the previous work. -- Abstract: This study developed a method for the automatic design of a boiling water reactor (BWR) control rod pattern (CRP) using the particle swarm optimization (PSO) algorithm. The PSO algorithm is more random compared to the rank-based ant system (RAS) that was used to solve the same BWR CRP design problem in the previous work. In addition, the local search procedure was used to make improvements after PSO, by adding the single control rod (CR) effect. The design goal was to obtain the CRP so that the thermal limits and shutdown margin would satisfy the design requirement and the cycle length, which is implicitly controlled by the axial power distribution, would be acceptable. The results showed that the same acceptable CRP found in the previous work could be obtained.
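
    A generic particle swarm optimization loop on a continuous test function is sketched below to illustrate the velocity and position updates; the control-rod-pattern encoding, constraints and local search from the paper are not reproduced, and all coefficients are arbitrary.

        import numpy as np

        rng = np.random.default_rng(5)
        dim, n_particles = 5, 20
        w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients

        def cost(x):
            # Simple sum-of-squares test function (minimum at the origin).
            return np.sum(x ** 2, axis=-1)

        x = rng.uniform(-5, 5, (n_particles, dim))     # positions
        v = np.zeros_like(x)                           # velocities
        pbest = x.copy()
        gbest = x[np.argmin(cost(x))].copy()

        for it in range(200):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            better = cost(x) < cost(pbest)
            pbest[better] = x[better]
            gbest = pbest[np.argmin(cost(pbest))].copy()
        print(cost(gbest))                             # approaches 0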

  9. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from the US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm could be utilized to map SST in both deep offshore and particularly shallow nearshore waters at the high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed reflectance values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variations in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters where important coastal resources are located and existing algorithms are either not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful to coastal resource management.
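
    To show the flavour of mapping two band values to SST with a small network, a tiny two-input, one-hidden-layer regression trained by plain gradient descent on synthetic data is sketched below; the real algorithm's architecture, training data and inputs are not reproduced.

        import numpy as np

        # Tiny two-input MLP regression (synthetic stand-in for band-31/32 -> SST mapping).
        rng = np.random.default_rng(6)
        X = rng.uniform(280.0, 300.0, (500, 2))                 # fake band values
        y = 0.8 * X[:, 0] + 0.3 * (X[:, 0] - X[:, 1]) + 1.0     # fake "true" SST

        Xs = (X - X.mean(0)) / X.std(0)                         # standardise inputs
        ys = (y - y.mean()) / y.std()

        W1, b1 = 0.1 * rng.normal(size=(2, 8)), np.zeros(8)
        W2, b2 = 0.1 * rng.normal(size=(8, 1)), np.zeros(1)
        lr = 0.05
        for epoch in range(2000):
            h = np.tanh(Xs @ W1 + b1)                           # hidden layer
            pred = (h @ W2 + b2).ravel()
            err = pred - ys
            gW2 = h.T @ err[:, None] / len(ys)
            gb2 = err.mean(keepdims=True)
            dh = err[:, None] @ W2.T * (1 - h ** 2)             # backprop through tanh
            gW1 = Xs.T @ dh / len(ys)
            gb1 = dh.mean(0)
            W1, b1, W2, b2 = W1 - lr * gW1, b1 - lr * gb1, W2 - lr * gW2, b2 - lr * gb2
        print(float(np.mean(err ** 2)))                         # training error shrinks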

  10. Evolving Stochastic Learning Algorithm based on Tsallis entropic index

    Science.gov (United States)

    Anastasiadis, A. D.; Magoulas, G. D.

    2006-03-01

    In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than using the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and temperature T on the convergence speed and stability of the proposed method.

  11. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  12. Kidnapping Detection and Recognition in Previous Unknown Environment

    Directory of Open Access Journals (Sweden)

    Yang Tian

    2017-01-01

    An unexpected event referred to as kidnapping makes the estimation result of localization incorrect. In a previously unknown environment, an incorrect localization result causes an incorrect mapping result in Simultaneous Localization and Mapping (SLAM) under kidnapping. In this situation, the explored and unexplored areas are divided in a way that makes kidnapping recovery difficult. To provide sufficient information on kidnapping, a framework to judge whether kidnapping has occurred and to identify the type of kidnapping with filter-based SLAM is proposed. The framework is called double kidnapping detection and recognition (DKDR) and performs two checks, before and after the "update" process, with different metrics in real time. To explain one of the principles of DKDR, we describe a property of filter-based SLAM that corrects the mapping result of the environment using the current observations after the "update" process. Two classical filter-based SLAM algorithms, Extended Kalman Filter (EKF) SLAM and Particle Filter (PF) SLAM, are modified to show that DKDR can be simply and widely applied in existing filter-based SLAM algorithms. Furthermore, a technique to determine the adapted thresholds of the metrics in real time without previous data is presented. Both simulated and experimental results demonstrate the validity and accuracy of the proposed method.

  13. Development of computational algorithms for quantification of pulmonary structures

    International Nuclear Information System (INIS)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A.; Pina, Diana R.

    2012-01-01

    High-resolution computed tomography (HRCT) has become the imaging diagnostic exam most commonly used for the evaluation of the sequelae of Paracoccidioidomycosis. Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists of selecting a region of interest (ROI) and, by the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of a healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)

  14. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    The article considers the issue of the allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Meeting the adequacy conditions of such an algorithm allows: evaluating the appropriateness of investments in fixed assets, and studying the final financial results of an industrial enterprise depending on management decisions in the depreciation policy. It is necessary to note that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures for the lines of structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for the further development of a flowchart for subsequent implementation in software. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.

  15. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  16. Development and validation of an online interactive, multimedia wound care algorithms program.

    Science.gov (United States)

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage correct, partially correct, or incorrect algorithm and dressing choices and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong. Eighty-five percent (85%) of algorithm and 87% of dressing choices were fully correct even though some programming design issues were identified. Online study results were consistently better than previously conducted comparable paper-pencil study results. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received indicating its "perceived benefits" for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide

  17. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    Science.gov (United States)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models, and as such, play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In many parts of the world, inclusive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories that include incomplete or out-dated information on critical attributes as well as missing or erroneous values for attributes. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images are presented. Basic concepts, terminologies and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, irregularity index, etc. The associated errors due to the interactive nature of the algorithm are quantified and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more

  18. A note on the linear memory Baum-Welch algorithm

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    2009-01-01

    We demonstrate the simplicity and generality of the recently introduced linear space Baum-Welch algorithm for hidden Markov models. We also point to previous literature on the subject.

  19. DEVELOPMENT OF THE ALGORITHM FOR CHOOSING THE OPTIMAL SCENARIO FOR THE DEVELOPMENT OF THE REGION'S ECONOMY

    Directory of Open Access Journals (Sweden)

    I. S. Borisova

    2018-01-01

    Purpose: the article deals with the development of an algorithm for choosing the optimal scenario for the development of the regional economy. Since the "Strategy for socio-economic development of the Lipetsk region for the period until 2020" does not contain scenarios for the development of the region, the algorithm for choosing the optimal scenario for the development of the regional economy is formalized. The scenarios for the development of the economy of the Lipetsk region are calculated according to the indicators of the Program of social and economic development: "Quality of life index", "Average monthly nominal wage", "Level of registered unemployment", "Growth rate of gross regional product", "The share of innovative products in the total volume of goods shipped, works performed and services rendered by industrial organizations", "Total volume of atmospheric pollution per unit GRP" and "Satisfaction of the population with the activity of executive bodies of state power of the region". Based on the calculated development scenarios, the dynamics of the values of these indicators under the implementation of the scenarios for the Lipetsk region's economy in 2016–2020 were derived. The discounted financial costs to economic participants of realizing the scenarios for the development of the Lipetsk region's economy are estimated. It is shown that the current situation in the economy of the Russian Federation assumes the choice of a paradigm for the innovative development of territories and requires all participants in economic relations at the regional level to concentrate their resources on the creation of new science-intensive products. An assessment of the effects of implementing the justified scenarios for the development of the economy of the Lipetsk region was carried out. It is shown that the most acceptable is the "base" scenario, which assumes a consistent change in the main indicators. The specific economic

  20. Development of Speckle Interferometry Algorithm and System

    International Nuclear Information System (INIS)

    Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.

    2011-01-01

    Electronic speckle pattern interferometry (ESPI) is a whole-field, non-destructive measurement method widely used in industry, for example for the detection of defects on metal bodies, the detection of defects in integrated circuits in digital electronic components, and the preservation of priceless artwork. In this research field, the method is widely used to develop algorithms and new laboratory setups for implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor in order to track changes in the speckle pattern caused by the applied force. In this project, an experimental ESPI setup is proposed to analyze a stainless steel plate using a 632.8 nm (red) laser wavelength.

  1. Development of a Framework for Genetic Algorithms

    OpenAIRE

    Wååg, Håkan

    2009-01-01

    Genetic algorithms are a method of optimization that can be used to solve many different kinds of problems. This thesis focuses on developing a framework for genetic algorithms that is capable of solving at least the two problems explored in the work. Other problems are supported by allowing user-made extensions. The purpose of this thesis is to explore the possibilities of genetic algorithms for optimization problems and artificial intelligence applications. To test the framework two applications are...

  2. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
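
    A compact power-iteration PageRank of the kind such an application visualises is sketched below; the page names, damping factor and tolerance are arbitrary, and this is textbook PageRank rather than the thesis's implementation.

        import numpy as np

        def pagerank(links, damping=0.85, tol=1e-9):
            # Power iteration on the PageRank equation; 'links' maps page -> list of outlinks.
            pages = sorted(links)
            index = {p: i for i, p in enumerate(pages)}
            n = len(pages)
            M = np.zeros((n, n))
            for src, outs in links.items():
                if outs:
                    for dst in outs:
                        M[index[dst], index[src]] = 1.0 / len(outs)
                else:                                   # dangling page: spread rank evenly
                    M[:, index[src]] = 1.0 / n
            r = np.full(n, 1.0 / n)
            while True:
                r_new = (1 - damping) / n + damping * (M @ r)
                if np.abs(r_new - r).sum() < tol:       # stop when ranks no longer change
                    return dict(zip(pages, r_new))
                r = r_new

        print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}))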

  3. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    How do imaginings of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself.

  4. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  5. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  6. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
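
    A small sequential analogue of the compute-then-augment idea can be sketched as follows: start from an empty (hence chordal) subgraph, then repeatedly sweep over the remaining edges, keeping each edge only if the subgraph stays chordal. The chordality test is the standard maximum-cardinality-search check; none of this reflects the paper's parallel design, spanning-subgraph construction or data structures.

        def is_chordal(adj):
            # Maximum cardinality search (MCS) followed by the standard perfect
            # elimination ordering check (Tarjan & Yannakakis).
            order, weight = [], {v: 0 for v in adj}
            unvisited = set(adj)
            while unvisited:
                v = max(unvisited, key=lambda u: weight[u])
                order.append(v)
                unvisited.remove(v)
                for u in adj[v]:
                    if u in unvisited:
                        weight[u] += 1
            sigma = list(reversed(order))              # candidate elimination order
            pos = {v: i for i, v in enumerate(sigma)}
            for v in sigma:
                later = [u for u in adj[v] if pos[u] > pos[v]]
                if later:
                    p = min(later, key=lambda u: pos[u])
                    if not set(later) - {p} <= adj[p]:
                        return False
            return True

        def maximal_chordal_subgraph(vertices, edges):
            # Sweep repeatedly over candidate edges, keeping any edge whose
            # addition preserves chordality, until no further edge can be added.
            adj = {v: set() for v in vertices}
            remaining = list(edges)
            changed = True
            while changed and remaining:
                changed, still_left = False, []
                for u, v in remaining:
                    adj[u].add(v)
                    adj[v].add(u)
                    if is_chordal(adj):
                        changed = True
                    else:
                        adj[u].discard(v)
                        adj[v].discard(u)
                        still_left.append((u, v))
                remaining = still_left
            return adj

        # A 4-cycle with a pendant vertex: the cycle cannot be completed without a
        # chord, so one cycle edge is left out of the maximal chordal subgraph.
        edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("d", "e")]
        sub = maximal_chordal_subgraph("abcde", edges)
        print(sorted((u, v) for u in sub for v in sub[u] if u < v))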

  7. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio of less than 1.72 bits/base.
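
    The generic two-bits-per-base packing that such DNA coders build on is sketched below; the fixed base code and byte layout are illustrative only, and the segment handling, repeat detection and variable bit codes of DNABIT Compress itself are not reproduced.

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
        BASES = "ACGT"

        def pack(seq):
            # Pack a DNA string into bytes at two bits per base (plain fixed code).
            bits = 0
            for base in seq:
                bits = (bits << 2) | CODE[base]
            nbytes = (2 * len(seq) + 7) // 8
            return len(seq), bits.to_bytes(nbytes, "big")

        def unpack(length, data):
            bits = int.from_bytes(data, "big")
            out = []
            for i in range(length):
                shift = 2 * (length - 1 - i)
                out.append(BASES[(bits >> shift) & 0b11])
            return "".join(out)

        n, packed = pack("ACGTACGTTTGA")
        print(len(packed), unpack(n, packed))   # 3 bytes instead of 12, round-trips exactly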

  8. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotype polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry
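
    As a reminder of the kind of primitive the thesis covers, a planar convex hull via Andrew's monotone chain is sketched below; this is the textbook O(n log n) construction, not the thesis's optimized variants.

        def cross(o, a, b):
            # z-component of (a - o) x (b - o); > 0 means a left turn.
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def convex_hull(points):
            # Andrew's monotone chain: returns hull vertices counter-clockwise.
            pts = sorted(set(points))
            if len(pts) <= 2:
                return pts
            lower, upper = [], []
            for p in pts:
                while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                    lower.pop()
                lower.append(p)
            for p in reversed(pts):
                while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                    upper.pop()
                upper.append(p)
            return lower[:-1] + upper[:-1]

        print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))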

  9. Developing the science product algorithm testbed for Chinese next-generation geostationary meteorological satellites: Fengyun-4 series

    Science.gov (United States)

    Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin

    2017-08-01

    Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing the prototypes of FY-4 science algorithms, two science product algorithm testbeds for imagers and sounders have been developed by the scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully by using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully through using a proxy imager, Himawari-8/Advanced Himawari Imager (AHI), and sounder data, obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016, the FY-4 AWG developed, based on the imager testbed, a near real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.

  10. Robust perception algorithms for road and track autonomous following

    Science.gov (United States)

    Marion, Vincent; Lecointe, Olivier; Lewandowski, Cecile; Morillon, Joel G.; Aufrere, Romuald; Marcotegui, Beatrix; Chapuis, Roland; Beucher, Serge

    2004-09-01

    The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes, which can provide an immediate "operational add-on value." The paper details the "road and track following" theme (named AUT2), whose main purpose was to develop a vision-based sub-system to automatically detect the roadsides of an extended range of roads and tracks suitable for military missions. To achieve this goal, efforts focused on three main areas: (1) Improvement of image quality at the algorithms' inputs, thanks to the selection of adapted video cameras and the development of a THALES-patented algorithm: it removes in real time most of the disturbing shadows in images taken in natural environments, enhances contrast and lowers reflection effects due to films of water. (2) Selection and improvement of two complementary algorithms (one is segment oriented, the other region based). (3) Development of a fusion process between both algorithms, which feeds in real time a road model with the best available data. Each previous step has been developed so that the global perception process is reliable and safe: as an example, the process continuously evaluates itself and outputs confidence criteria qualifying the roadside detection. The paper presents the processes in detail, along with the results obtained from the passed military acceptance tests, which trigger the next step: autonomous track following (named AUT3).

  11. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    Full Text Available The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that host filamentous cyanobacteria strains under different environmental conditions. The presented data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA (“ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning”, Gandola et al., 2016 [1]). This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to remove background signals from the input images; spline curves and the least-squares method were then used to parameterize the detected filaments and to recombine crossing and interrupted sections, with the aim of performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
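
    As a rough Python/NumPy sketch of the filtering step described above, the snippet below applies a Sobel gradient filter so that smooth background produces small values while filament-like edges stand out; the synthetic image and the threshold are hypothetical, and this is an illustration of the general idea rather than the ACQUA implementation.

        import numpy as np
        from scipy import ndimage

        def sobel_edge_magnitude(image):
            # Gradient components along the two image axes; smooth background gives small values
            gx = ndimage.sobel(image, axis=0, mode="reflect")
            gy = ndimage.sobel(image, axis=1, mode="reflect")
            return np.hypot(gx, gy)

        # Synthetic grayscale micrograph with one bright "filament" (hypothetical data)
        image = np.zeros((64, 64))
        image[30:34, 10:50] = 1.0
        edges = sobel_edge_magnitude(image)
        mask = edges > 0.5 * edges.max()      # hypothetical threshold before spline fitting
        print(mask.sum(), "candidate filament pixels")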

  12. Algorithm development and verification of UASCM for multi-dimension and multi-group neutron kinetics model

    International Nuclear Information System (INIS)

    Si, S.

    2012-01-01

    The Universal Algorithm of Stiffness Confinement Method (UASCM) for neutron kinetics model of multi-dimensional and multi-group transport equations or diffusion equations has been developed. The numerical experiments based on transport theory code MGSNM and diffusion theory code MGNEM have demonstrated that the algorithm has sufficient accuracy and stability. (authors)

  13. Development of an algorithm for quantifying extremity biological tissue

    International Nuclear Information System (INIS)

    Pavan, Ana L.M.; Miranda, Jose R.A.; Pina, Diana R. de

    2013-01-01

    Computed radiography (CR) has become the most widely used system for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis, obtained via CR, are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used for the optimization of these images are based on international protocols. It is therefore necessary to compose radiographic techniques for the CR system that provide a reliable medical diagnosis with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hands and wrists of adult patients was built. Using the Matlab® software, a computational algorithm was developed that quantifies the average thickness of the soft tissue and bone present in the anatomical region under study, as well as the corresponding thicknesses in simulator materials (aluminum and lucite). This was achieved through the application of masks and a Gaussian histogram-removal technique. As a result, an average soft-tissue thickness of 18.97 mm and an average bone thickness of 6.15 mm were obtained, with equivalents in simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agreed with the mean thickness of biological tissues in a standard patient's hand, enabling the construction of a homogeneous phantom.

  14. An Algorithm for the Convolution of Legendre Series

    KAUST Repository

    Hale, Nicholas; Townsend, Alex

    2014-01-01

    An O(N²) algorithm for the convolution of compactly supported Legendre series is described. The algorithm is derived from the convolution theorem for Legendre polynomials and the recurrence relation satisfied by spherical Bessel functions. Combining with previous work yields an O(N²) algorithm for the convolution of Chebyshev series. Numerical results are presented to demonstrate the improved efficiency over the existing algorithm. © 2014 Society for Industrial and Applied Mathematics.

  15. Fast filtering algorithm based on vibration systems and neural information exchange and its application to micro motion robot

    International Nuclear Information System (INIS)

    Gao Wa; Zha Fu-Sheng; Li Man-Tian; Song Bao-Yu

    2014-01-01

    This paper develops a fast filtering algorithm based on vibration-systems theory and a neural information exchange approach. Its characteristics, including the derivation process and parameter analysis, are discussed, and its feasibility and effectiveness are verified by comparing its filtering performance with that of various filtering methods, such as the fast wavelet transform algorithm, the particle filtering method, and our previously developed single-degree-of-freedom vibration-system filtering algorithm, in both simulations and practical tests. The comparisons indicate that a significant advantage of the proposed fast filtering algorithm is its extremely fast filtering speed combined with good filtering performance. Further, the developed fast filtering algorithm is applied to the navigation and positioning system of the micro motion robot, whose signal preprocessing has stringent real-time requirements. The preprocessed data are then used to estimate the heading angle error and the attitude angle error of the micro motion robot. The estimation experiments illustrate the high practicality of the proposed fast filtering algorithm. (general)

  16. CoSMOS: Performance of Kurtosis Algorithm for Radio Frequency Interference Detection and Mitigation

    DEFF Research Database (Denmark)

    Misra, Sidharth; Kristensen, Steen Savstrup; Skou, Niels

    2007-01-01

    The performance of a previously developed algorithm for Radio Frequency Interference (RFI) detection and mitigation is experimentally evaluated. Results obtained from CoSMOS, an airborne campaign using a fully polarimetric L-band radiometer, are analyzed for this purpose. Data is collected using two...
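
    The kurtosis test exploits the fact that thermal noise is Gaussian (kurtosis of about 3), whereas most man-made RFI is not. The following Python sketch, a minimal illustration rather than the algorithm evaluated in the campaign, flags sample blocks whose kurtosis deviates from the Gaussian value by more than a chosen tolerance; the block size and threshold are assumptions.

        import numpy as np

        def kurtosis_rfi_flags(samples, block_size=1024, threshold=0.3):
            # Flag blocks whose sample kurtosis deviates from the Gaussian value of 3
            n_blocks = len(samples) // block_size
            flags = np.zeros(n_blocks, dtype=bool)
            for i in range(n_blocks):
                block = samples[i * block_size:(i + 1) * block_size]
                centered = block - block.mean()
                m2 = np.mean(centered ** 2)
                m4 = np.mean(centered ** 4)
                kurtosis = m4 / m2 ** 2          # roughly 3 for Gaussian noise
                flags[i] = abs(kurtosis - 3.0) > threshold
            return flags

        # Hypothetical usage: clean radiometer noise plus an injected sinusoidal interferer
        rng = np.random.default_rng(0)
        noise = rng.standard_normal(8192)
        noise[4096:5120] += 2.0 * np.sin(0.3 * np.arange(1024))
        print(kurtosis_rfi_flags(noise))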

  17. Optimization of Antennas using a Hybrid Genetic-Algorithm Space-Mapping Algorithm

    DEFF Research Database (Denmark)

    Pantoja, M.F.; Bretones, A.R.; Meincke, Peter

    2006-01-01

    A hybrid global-local optimization technique for the design of antennas is presented. It consists of the subsequent application of a Genetic Algorithm (GA) that employs coarse models in the simulations and a space mapping (SM) that refines the solution found in the previous stage. The technique...

  18. A Clustal Alignment Improver Using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Rene; Fogel, Gary B.; Krink, Thimo

    2002-01-01

    Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the well-known Clustal V algorithm as a candidate solution seed for the initial EA population. Our results clearly show...

  19. New algorithms for identifying the flavour of [Formula: see text] mesons using pions and protons.

    Science.gov (United States)

    Aaij, R; Adeva, B; Adinolfi, M; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Archilli, F; d'Argent, P; Arnau Romeu, J; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Babuschkin, I; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baker, S; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Baszczyk, M; Batozskaya, V; Batsukh, B; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belous, K; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Betti, F; Bettler, M-O; van Beuzekom, M; Bezshyiko, Ia; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bitadze, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Boettcher, T; Bondar, A; Bondar, N; Bonivento, W; Bordyuzhin, I; Borgheresi, A; Borghi, S; Borisyak, M; Borsato, M; Bossu, F; Boubdir, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Buchanan, E; Burr, C; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Campora Perez, D; Campora Perez, D H; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chatzikonstantinidis, G; Chefdeville, M; Chen, S; Cheung, S F; Chobanova, V; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombs, G; Coquereau, S; Corti, G; Corvo, M; Costa Sobral, C M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Da Cunha Marinho, F; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Serio, M; De Simone, P; Dean, C T; Decamp, D; Deckenhoff, M; Del Buono, L; Demmer, M; Dendek, A; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Dijkstra, H; Dordei, F; Dorigo, M; Dosil Suárez, A; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dungs, K; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Déléage, N; Easo, S; Ebert, M; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; Elsasser, Ch; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Farley, N; Farry, S; Fay, R; Fazzini, D; Ferguson, D; Fernandez Prieto, A; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fini, R A; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fleuret, F; Fohl, K; Fontana, M; Fontanelli, F; Forshaw, D C; Forty, R; Franco Lima, V; Frank, M; Frei, C; Fu, J; Furfaro, E; Färber, C; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; Garcia Martin, L M; García Pardiñas, J; Garra Tico, J; Garrido, L; Garsed, P J; Gascon, D; Gaspar, C; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gizdov, K; Gligorov, V V; Golubkov, D; Golutvin, A; Gomes, A; Gorelov, I V; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Griffith, P; Grillo, L; 
Gruberg Cazon, B R; Grünberg, O; Gushchin, E; Guz, Yu; Gys, T; Göbel, C; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; Hatch, M; He, J; Head, T; Heister, A; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hombach, C; Hopchev, P H; Hulsbergen, W; Humair, T; Hushchyn, M; Hussain, N; Hutchcroft, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jiang, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Kariuki, J M; Karodia, S; Kecke, M; Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khairullin, E; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Koliiev, S; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kosmyntseva, A; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; Kuonen, A K; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lanfranchi, G; Langenbruch, C; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Leflat, A; Lefrançois, J; Lefèvre, R; Lemaitre, F; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusiani, A; Lyu, X; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Maltsev, T; Manca, G; Mancinelli, G; Manning, P; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massacrier, L M; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Merli, A; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Mogini, A; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Mulder, M; Mussini, M; Müller, D; Müller, J; Müller, K; Müller, V; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Nieswand, S; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Oldeman, R; Onderwater, C J G; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Pais, P R; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Parker, W; Parkes, C; Passaleva, G; Pastore, A; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petrov, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pikies, M; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Pomery, G J; Popov, A; Popov, D; Popovici, B; Poslavskii, S; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Ramos Pernas, M; Rangel, M S; Raniuk, I; Ratnikov, F; Raven, G; Redi, F; Reichert, S; Dos Reis, A C; Remon Alepuz, C; Renaudin, V; Ricciardi, S; 
Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Rogozhnikov, A; Roiser, S; Rollings, A; Romanovskiy, V; Romero Vidal, A; Ronayne, J W; Rotondo, M; Rudolph, M S; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sadykhov, E; Sagidova, N; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schellenberg, M; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubert, K; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Simone, S; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefko, P; Stefkova, S; Steinkamp, O; Stemmle, S; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Syropoulos, V; Szczekowski, M; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Teklishyn, M; Tellarini, G; Teubert, F; Thomas, E; van Tilburg, J; Tilley, M J; Tisserand, V; Tobin, M; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Toriello, F; Tournefier, E; Tourneur, S; Trabelsi, K; Traill, M; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tully, A; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valassi, A; Valat, S; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Venkateswaran, A; Vernet, M; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Volkov, V; Vollhardt, A; Voneki, B; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Vázquez Sierra, C; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wang, J; Ward, D R; Wark, H M; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wicht, J; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wraight, K; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yin, H; Yu, J; Yuan, X; Yushchenko, O; Zarebski, K A; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zheng, Y; Zhokhov, A; Zhu, X; Zhukov, V; Zucchelli, S

    2017-01-01

    Two new algorithms for use in the analysis of [Formula: see text] collisions are developed to identify the flavour of [Formula: see text] mesons at production using pions and protons from the hadronization process. The algorithms are optimized and calibrated on data, using [Formula: see text] decays from [Formula: see text] collision data collected by LHCb at centre-of-mass energies of 7 and 8 TeV. The tagging power of the new pion algorithm is 60% greater than that of the previously available one; the algorithm using protons to identify the flavour of a [Formula: see text] meson is the first of its kind.

  20. A new hybrid genetic algorithm for optimizing the single and multivariate objective functions

    Energy Technology Data Exchange (ETDEWEB)

    Tumuluru, Jaya Shankar [Idaho National Laboratory; McCulloch, Richard Chet James [Idaho National Laboratory

    2015-07-01

    In this work a new hybrid genetic algorithm was developed which combines a rudimentary adaptive steepest-ascent hill-climbing algorithm with a sophisticated evolutionary algorithm in order to optimize complex multivariate design problems. By combining a highly stochastic algorithm (evolutionary) with a simple deterministic optimization algorithm (adaptive steepest ascent), computational resources are conserved and the solution converges rapidly compared to either algorithm alone. In genetic algorithms, natural selection is mimicked by random events such as breeding and mutation. In the adaptive steepest-ascent algorithm, each variable is perturbed by a small amount and the variable that causes the most improvement is incremented by a small step. If the direction of greatest benefit is exactly opposite to the previous direction of greatest benefit, the step size is reduced by a factor of 2; thus the step size adapts to the terrain. A graphical user interface was created in MATLAB to provide an interface between the hybrid genetic algorithm and the user. Additional features, such as bounding the solution space and weighting the objective functions individually, are also built into the interface. The algorithm was tested by optimizing the functions developed for a wood pelleting process. Using process variables (such as feedstock moisture content, die speed, and preheating temperature), pellet properties were appropriately optimized. Specifically, variables were found which maximized unit density, bulk density, tapped density, and durability while minimizing pellet moisture content and specific energy consumption. The time and computational resources required for the optimization were dramatically decreased using the hybrid genetic algorithm when compared to MATLAB's native evolutionary optimization tool.
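
    A minimal Python sketch of the adaptive steepest-ascent step described above, for a generic objective function to be maximized; the objective, starting point, and initial step size are hypothetical, and the full hybrid would alternate such local steps with the evolutionary operators.

        import numpy as np

        def adaptive_steepest_ascent(objective, x0, step=0.1, iterations=200):
            # Perturb each variable, move along the best one, halve the step on reversals
            x = np.asarray(x0, dtype=float)
            best = objective(x)
            last_direction = None
            for _ in range(iterations):
                gains, candidates = [], []
                for i in range(len(x)):
                    for sign in (+1.0, -1.0):
                        trial = x.copy()
                        trial[i] += sign * step
                        gains.append(objective(trial) - best)
                        candidates.append((i, sign))
                k = int(np.argmax(gains))
                if gains[k] <= 0:
                    step *= 0.5                    # no improvement: shrink the step
                    continue
                i, sign = candidates[k]
                if last_direction == (i, -sign):
                    step *= 0.5                    # reversal of the previous best direction
                x[i] += sign * step
                best = objective(x)
                last_direction = (i, sign)
            return x, best

        # Hypothetical usage: maximize a smooth two-variable function (optimum at (1, -2))
        print(adaptive_steepest_ascent(lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2, [0.0, 0.0]))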

  1. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    Science.gov (United States)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and they are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluating the algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. The average lead time for the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has already been detected and before severe weather is observed at the ground, the 2σ lightning jump algorithm's false alarm rate drops from 35% to 21%. Cold-season, low-topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth of lightning compared to a supercellular or summertime airmass thunderstorm environment.
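
    A minimal Python sketch of a sigma-type jump test in the spirit of the 2σ algorithm described above: the time rate of change of the total flash rate is compared against twice the standard deviation of its recent history. The window length, sampling interval, and example flash rates are assumptions, not the operational configuration.

        import numpy as np

        def two_sigma_lightning_jumps(flash_rates, history=5):
            # Flag times where the flash-rate trend exceeds 2 sigma of its recent history
            trend = np.diff(flash_rates)             # rate of change between consecutive steps
            jumps = []
            for t in range(history, len(trend)):
                recent = trend[t - history:t]
                sigma = recent.std()
                if sigma > 0 and trend[t] > 2.0 * sigma:
                    jumps.append(t + 1)              # index into the original series
            return jumps

        # Hypothetical storm: slowly varying flash rate (per minute) with a sudden surge
        rates = np.array([10, 12, 11, 13, 12, 14, 13, 15, 40, 52, 50, 48], dtype=float)
        print(two_sigma_lightning_jumps(rates))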

  2. Radionuclide identification algorithm for organic scintillator-based radiation portal monitor

    Energy Technology Data Exchange (ETDEWEB)

    Paff, Marc Gerrit, E-mail: mpaff@umich.edu; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.

    2017-03-21

    We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.
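
    A minimal Python sketch of the spectral-angle-mapper matching step described above, assuming the measured feature vector and the reference library have already been computed (here, made-up feature vectors stand in for the power-spectral-density features); it illustrates the matching rule and is not the authors' implementation.

        import numpy as np

        def spectral_angle(a, b):
            # Angle between two feature vectors; smaller means more similar
            cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.arccos(np.clip(cos_theta, -1.0, 1.0))

        def identify(measurement, reference_library):
            # Return the reference label with the smallest spectral angle to the measurement
            angles = {label: spectral_angle(measurement, ref)
                      for label, ref in reference_library.items()}
            return min(angles, key=angles.get), angles

        # Hypothetical reference features for two sources and one measurement
        library = {"Cs-137": np.array([0.9, 0.4, 0.1]), "Co-60": np.array([0.2, 0.7, 0.6])}
        print(identify(np.array([0.85, 0.45, 0.15]), library))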

  3. Radionuclide identification algorithm for organic scintillator-based radiation portal monitor

    Science.gov (United States)

    Paff, Marc Gerrit; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.

    2017-03-01

    We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.

  4. A new warfarin dosing algorithm including VKORC1 3730 G > A polymorphism: comparison with results obtained by other published algorithms.

    Science.gov (United States)

    Cini, Michela; Legnani, Cristina; Cosmi, Benilde; Guazzaloca, Giuliana; Valdrè, Lelia; Frascaro, Mirella; Palareti, Gualtiero

    2012-08-01

    Warfarin dosing is affected by clinical and genetic variants, but the contribution of the genotype associated with warfarin resistance in pharmacogenetic algorithms has not been well assessed yet. We developed a new dosing algorithm including polymorphisms associated both with warfarin sensitivity and resistance in the Italian population, and its performance was compared with those of eight previously published algorithms. Clinical and genetic data (CYP2C9*2, CYP2C9*3, VKORC1 -1639 G > A, and VKORC1 3730 G > A) were used to elaborate the new algorithm. Derivation and validation groups comprised 55 (58.2% men, mean age 69 years) and 40 (57.5% men, mean age 70 years) patients, respectively, who were on stable anticoagulation therapy for at least 3 months with different oral anticoagulation therapy (OAT) indications. Performance of the new algorithm, evaluated with mean absolute error (MAE) defined as the absolute value of the difference between observed daily maintenance dose and predicted daily dose, correlation with the observed dose and R² value, was comparable with or slightly lower than that obtained using the other algorithms. The new algorithm could correctly assign 53.3%, 50.0%, and 57.1% of patients to the low (≤25 mg/week), intermediate (26-44 mg/week) and high (≥45 mg/week) dosing range, respectively. Our data showed a significant increase in predictive accuracy among patients requiring high warfarin dose compared with the other algorithms (ranging from 0% to 28.6%). The algorithm including VKORC1 3730 G > A, associated with warfarin resistance, allowed a more accurate identification of resistant patients who require higher warfarin dosage.

  5. The algorithms for calculating synthetic seismograms from a dipole source using the derivatives of Green's function

    Science.gov (United States)

    Pavlov, V. M.

    2017-07-01

    The problem of calculating complete synthetic seismograms from a point dipole with an arbitrary seismic moment tensor in a plane-parallel medium composed of homogeneous elastic isotropic layers is considered. It is established that the solutions of the system of ordinary differential equations for the motion-stress vector have a reciprocity property, which allows obtaining a compact formula for the derivative of the motion vector with respect to the source depth. The reciprocity theorem for Green's functions with respect to the interchange of the source and receiver is obtained for a medium with a cylindrical boundary. Differentiating Green's functions with respect to the coordinates of the source leads to the same calculation formulas as the algorithm developed in the previous work (Pavlov, 2013). A new algorithm appears when the derivatives with respect to the horizontal coordinates of the source are replaced by the derivatives with respect to the horizontal coordinates of the receiver (with the minus sign). This algorithm is more transparent, compact, and economical than the previous one. It requires calculating the wavenumbers associated with the roots of the Bessel functions of order 0 and order 1, whereas the previous algorithm additionally requires the roots of order 2.

  6. An algorithm for learning real-time automata

    NARCIS (Netherlands)

    Verwer, S.E.; De Weerdt, M.M.; Witteveen, C.

    2007-01-01

    We describe an algorithm for learning simple timed automata, known as real-time automata. The transitions of real-time automata can have a temporal constraint on the time of occurrence of the current symbol relative to the previous symbol. The learning algorithm is similar to the red-blue fringe

  7. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Mencel, Liam A.

    2014-01-01

    computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomised, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result

  8. Enhanced clinical pharmacy service targeting tools: risk-predictive algorithms.

    Science.gov (United States)

    El Hajji, Feras W D; Scullin, Claire; Scott, Michael G; McElnay, James C

    2015-04-01

    This study aimed to determine the value of using a mix of clinical pharmacy data and routine hospital admission spell data in the development of predictive algorithms. Exploration of risk factors in hospitalized patients, together with the targeting strategies devised, will enable the prioritization of clinical pharmacy services to optimize patient outcomes. Predictive algorithms were developed, through a number of detailed steps, using a 75% sample of integrated medicines management (IMM) patients, and validated using the remaining 25%. IMM patients receive targeted clinical pharmacy input throughout their hospital stay. The algorithms were applied to the validation sample, and a predicted risk probability was generated for each patient from the coefficients. Risk thresholds for the algorithms were determined by identifying the cut-off points of the risk scores at which the algorithm would have the highest discriminative performance. Clinical pharmacy staffing levels were obtained from the pharmacy department staffing database. The numbers of previous emergency admissions and of admission medicines, together with age-adjusted co-morbidity and diuretic receipt, formed a 12-month post-discharge and/or readmission risk algorithm. Age-adjusted co-morbidity proved to be the best index to predict mortality. Increased numbers of clinical pharmacy staff at ward level were correlated with a reduction in risk-adjusted mortality index (RAMI). The algorithms created were valid in predicting the risk of in-hospital and post-discharge mortality and the risk of hospital readmission at 3, 6 and 12 months post-discharge. The provision of ward-based clinical pharmacy services is a key component in reducing RAMI and enabling the full benefits of pharmacy input to patient care to be realized. © 2014 John Wiley & Sons, Ltd.
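
    As an illustration of how a predicted risk probability can be generated from algorithm coefficients and compared against a threshold, the Python sketch below uses a generic logistic form; the coefficient values, predictor names, and cut-off are hypothetical and are not those derived in the study.

        import math

        # Hypothetical coefficients for a logistic risk algorithm
        COEFFICIENTS = {
            "intercept": -3.2,
            "previous_emergency_admissions": 0.45,
            "admission_medicines": 0.08,
            "age_adjusted_comorbidity": 0.30,
            "diuretic": 0.60,
        }

        def predicted_risk(patient):
            # Logistic transform of the weighted sum of predictors
            score = COEFFICIENTS["intercept"]
            for name, coef in COEFFICIENTS.items():
                if name != "intercept":
                    score += coef * patient.get(name, 0)
            return 1.0 / (1.0 + math.exp(-score))

        # Hypothetical patient and a hypothetical risk threshold
        patient = {"previous_emergency_admissions": 2, "admission_medicines": 9,
                   "age_adjusted_comorbidity": 4, "diuretic": 1}
        risk = predicted_risk(patient)
        print(risk, "high risk" if risk > 0.35 else "low risk")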

  9. The geometry of entanglement and Grover's algorithm

    International Nuclear Information System (INIS)

    Iwai, Toshihiro; Hayashi, Naoki; Mizobe, Kimitake

    2008-01-01

    A measure of entanglement with respect to a bipartite partition of n qubits has been defined and studied from the viewpoint of Riemannian geometry (Iwai 2007 J. Phys. A: Math. Theor. 40 12161). This paper has two aims. One is to study further the geometry of entanglement, and the other is to investigate Grover's search algorithms, both the original and the fixed-point ones, in relation to entanglement. Since the distance between the maximally entangled states and the separable states is already known from the previous paper, this paper determines the set of maximally entangled states nearest to a typical separable state, which is used as an initial state in Grover's search algorithms, and finds geodesic segments that realize the above-mentioned distance. As for Grover's algorithms, it is already known that while the initial and the target states are separable, the algorithms generate sequences of entangled states. This fact is also confirmed with the entanglement measure proposed in the previous paper, and a split Grover algorithm is then proposed which generates sequences of separable states only with respect to the bipartite partition.

  10. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.
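
    To illustrate the kind of building block such a toolbox composes into a reconstruction algorithm, the Python sketch below runs a few iterations of a SIRT-style update on a tiny synthetic linear tomography model; the system matrix and data are made up, and this is a generic illustration rather than the ASTRA Toolbox API.

        import numpy as np

        def sirt(system_matrix, sinogram, iterations=50):
            # Simultaneous iterative reconstruction: x <- x + C A^T R (b - A x)
            A = system_matrix
            row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
            col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
            R = 1.0 / row_sums                   # inverse row sums
            C = 1.0 / col_sums                   # inverse column sums
            x = np.zeros(A.shape[1])
            for _ in range(iterations):
                residual = sinogram - A @ x
                x += C * (A.T @ (R * residual))
            return x

        # Tiny synthetic model: 4 "rays" through a 3-pixel object
        A = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0],
                      [1.0, 0.0, 1.0],
                      [1.0, 1.0, 1.0]])
        true_object = np.array([2.0, 1.0, 3.0])
        print(sirt(A, A @ true_object))          # approaches the true object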

  11. A sonification algorithm for developing the off-roads models for driving simulators

    Science.gov (United States)

    Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia

    2018-01-01

    In this paper, a sonification algorithm for developing off-road models for driving simulators is proposed. The aim of this algorithm is to overcome the difficulty of identifying the heuristics best suited to a particular off-road profile built from measurements. The sonification algorithm is based on stochastic polynomial chaos analysis, which is suitable for solving equations with random input data. The fluctuations are generated by incomplete measurements, which lead to inhomogeneities of the cross-sectional curves of off-roads before and after deformation, unstable contact between the tire and the road, and an unrealistic distribution of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems, and the results compare favorably with existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual reality environment with an increased degree of realism for driving simulators and higher user flexibility.

  12. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    International Nuclear Information System (INIS)

    Aarle, Wim van; Palenstijn, Willem Jan; De Beenhouwer, Jan; Altantzis, Thomas; Bals, Sara; Batenburg, K. Joost; Sijbers, Jan

    2015-01-01

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series

  13. Development of a Crosstalk Suppression Algorithm for KID Readout

    Science.gov (United States)

    Lee, Kyungmin; Ishitsuka, H.; Oguri, S.; Suzuki, J.; Tajima, O.; Tomita, N.; Won, Eunil; Yoshida, M.

    2018-06-01

    The GroundBIRD telescope aims to detect the B-mode polarization of the cosmic microwave background radiation using a kinetic inductance detector array as a polarimeter. For the readout of the signal from the detector array, we have developed a frequency-division multiplexing readout system based on a digital down-converter method. Such techniques generally suffer from leakage problems caused by crosstalk. A window function was applied in the field-programmable gate arrays to mitigate these problems, and it was tested at the algorithm level.
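
    A minimal Python sketch of why windowing suppresses leakage between frequency channels in an FFT-based, digital-down-converter style readout: an off-bin tone leaks far less into distant bins after a window is applied. The tone frequency, the Hann window, and the bin chosen for comparison are illustrative assumptions.

        import numpy as np

        n = 1024
        t = np.arange(n)
        tone = np.cos(2 * np.pi * (50.3 / n) * t)   # off-bin tone, as from a neighbouring channel

        rect_spectrum = np.abs(np.fft.rfft(tone))
        hann_spectrum = np.abs(np.fft.rfft(tone * np.hanning(n)))

        # Leakage into a bin far from the tone (here bin 200), relative to the peak bin
        def leakage_db(spectrum, far_bin=200):
            return 20 * np.log10(spectrum[far_bin] / spectrum.max())

        print("rectangular window leakage:", leakage_db(rect_spectrum), "dB")
        print("Hann window leakage:      ", leakage_db(hann_spectrum), "dB")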

  14. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Ahunbay, E; Li, X [Medical College of Wisconsin, Milwaukee, WI (United States); Moreau, M [Elekta, Inc, Verona, WI (Italy)

    2014-06-15

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening-filtered (FF) and flattening-filter-free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to allow the algorithm to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired using an in-room CT during daily IGRT for representative prostate cancer cases along with their planning CTs. The algorithm allows for restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with delivery times similar to those of the original plans. The execution time of the SAM algorithm was < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.

  15. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    International Nuclear Information System (INIS)

    Ahunbay, E; Li, X; Moreau, M

    2014-01-01

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening-filtered (FF) and flattening-filter-free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to allow the algorithm to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired using an in-room CT during daily IGRT for representative prostate cancer cases along with their planning CTs. The algorithm allows for restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with delivery times similar to those of the original plans. The execution time of the SAM algorithm was < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.

  16. A comprehensive evaluation of alignment algorithms in the context of RNA-seq.

    Directory of Open Access Journals (Sweden)

    Robert Lindner

    Full Text Available Transcriptome sequencing (RNA-Seq) overcomes limitations of previously used RNA quantification methods and provides one experimental framework for both high-throughput characterization and quantification of transcripts at the nucleotide level. The first step and a major challenge in the analysis of such experiments is the mapping of sequencing reads to a transcriptomic origin, including the identification of splicing events. In recent years, a large number of such mapping algorithms have been developed, all of which have in common that they require algorithms for aligning a vast number of reads to genomic or transcriptomic sequences. Although the FM-index based aligner Bowtie has become a de facto standard within mapping pipelines, a much larger number of possible alignment algorithms have been developed, including other variants of FM-index based aligners. Accordingly, developers and users of RNA-seq mapping pipelines have the choice among a large number of available alignment algorithms. To provide guidance in the choice of alignment algorithms for these purposes, we evaluated the performance of 14 widely used alignment programs from three different algorithmic classes: algorithms using either hashing of the reference transcriptome, hashing of reads, or a compressed FM-index representation of the genome. Here, special emphasis was placed on both precision and recall and on the performance for different read lengths and numbers of mismatches and indels in a read. Our results clearly showed the significant reduction in memory footprint and runtime provided by FM-index based aligners at a precision and recall comparable to the best hash table based aligners. Furthermore, the recently developed Bowtie 2 alignment algorithm shows a remarkable tolerance to both sequencing errors and indels, thus essentially making hash-based aligners obsolete.

  17. A new algorithm for the simulation of the Boltzmann equation using the direct simulation monte-carlo method

    International Nuclear Information System (INIS)

    Ganjaei, A. A.; Nourazar, S. S.

    2009-01-01

    A new algorithm, the modified direct simulation Monte-Carlo (MDSMC) method, for the simulation of the Couette-Taylor gas flow problem is developed. A Taylor series expansion is used to obtain the modified equation of the first-order time discretization of the collision equation, and the new algorithm, MDSMC, is implemented to simulate the collision equation in the Boltzmann equation. In the new algorithm there exists a new extra term which takes into account the effect of second-order collisions. This extra term has the effect of enhancing the appearance of the first Taylor instabilities of the vortex streamlines. The new algorithm also contains a second-order term in the time step in the probabilistic coefficients, which yields higher simulation accuracy than the previous DSMC algorithm. The first Taylor instabilities of the vortex streamlines appeared in fewer time steps with the MDSMC algorithm than with the DSMC algorithm at different ratios of ω/ν (experimental data of Taylor). The torque developed on the stationary cylinder computed with the MDSMC algorithm shows better agreement with the experimental data of Kuhlthau than that computed with the DSMC algorithm.

  18. Demosaicking algorithm for the Kodak-RGBW color filter array

    Science.gov (United States)

    Rafinazari, M.; Dubois, E.

    2015-01-01

    Digital cameras capture images through different Color Filter Arrays (CFAs) and then reconstruct the full color image. Each CFA pixel captures only one primary color component; the other primary components are estimated using information from neighboring pixels. During the demosaicking algorithm, the unknown color components are estimated at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm using the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.

  19. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track system (IRST), despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper a false-alarm-aware methodology is presented to reduce the false alarm rate while keeping the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in such a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, the multi-scale average absolute gray difference (AAGD) and the Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and the performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.
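
    As a rough illustration of the center-surround contrast idea underlying measures such as the multi-scale AAGD, the Python sketch below scores each pixel by the difference between the mean gray level of an inner cell and that of its surrounding ring; the window sizes and the synthetic frame are assumptions, and this is not the exact AAGD or LoPSF formulation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def center_surround_contrast(image, inner=3, outer=9):
            # Mean of the inner cell minus mean of the surrounding ring, per pixel
            inner_mean = uniform_filter(image, size=inner, mode="reflect")
            outer_mean = uniform_filter(image, size=outer, mode="reflect")
            n_in, n_out = inner ** 2, outer ** 2
            ring_mean = (outer_mean * n_out - inner_mean * n_in) / (n_out - n_in)
            return inner_mean - ring_mean          # large for small bright targets

        # Synthetic frame: smooth background plus a dim point-like target
        frame = 0.1 * np.random.default_rng(1).random((64, 64))
        frame[32, 40] += 1.0
        contrast = center_surround_contrast(frame)
        print(np.unravel_index(np.argmax(contrast), contrast.shape))   # expected near (32, 40)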

  20. Development of independent MU/treatment time verification algorithm for non-IMRT treatment planning: A clinical experience

    Science.gov (United States)

    Tatli, Hamza; Yucel, Derya; Yilmaz, Sercan; Fayda, Merdan

    2018-02-01

    The aim of this study is to develop an algorithm for independent MU/treatment time (TT) verification of non-IMRT treatment plans, as part of a QA program to ensure treatment delivery accuracy. Two radiotherapy delivery units and their treatment planning systems (TPS) were commissioned at the Liv Hospital Radiation Medicine Center, Tbilisi, Georgia. Beam data were collected according to the vendors' collection guidelines and AAPM report recommendations, and were processed in Microsoft Excel during in-house algorithm development. The algorithm is designed and optimized for calculating SSD and SAD treatment plans, based on the AAPM TG-114 dose calculation recommendations, and is coded and embedded in an MS Excel spreadsheet as a preliminary verification algorithm (VA). Treatment verification plans were created by the TPSs based on IAEA TRS 430 recommendations and also calculated by the VA, and point measurements were collected with a solid-water phantom and compared. The study showed that the in-house VA can be used for MU/TT verification of non-IMRT plans.

  1. Vectorised Spreading Activation algorithm for centrality measurement

    Directory of Open Access Journals (Sweden)

    Alexander Troussov

    2011-01-01

    Full Text Available Spreading Activation is a family of graph-based algorithms widely used in areas such as information retrieval, epidemic models, and recommender systems. In this paper we introduce a novel Spreading Activation (SA) method that we call Vectorised Spreading Activation (VSA). VSA algorithms, like “traditional” SA algorithms, iteratively propagate the activation from the initially activated set of nodes to the other nodes in a network through outward links. The level of a node's activation can be used as a centrality measurement, in accordance with the dynamic model-based view of centrality that focuses on the outcomes for nodes in a network where something is flowing from node to node across the edges. Representing the activation by vectors allows the use of information about the various dimensionalities of the flow and the dynamics of the flow. In this capacity, VSA algorithms can model a multitude of complex multidimensional network flows. We present the results of numerical simulations on small synthetic social networks and multidimensional network models of folksonomies, which show that the results of VSA propagation are more sensitive to the position of the initial seed and to the community structure of the network than the results produced by traditional SA algorithms. We tentatively conclude that VSA methods could be instrumental in developing scalable and computationally efficient algorithms that achieve synergy between the computation of centrality indexes and the detection of community structures in networks. Based on our preliminary results and on improvements made over previous studies, we foresee advances in the current state of the art of this family of algorithms and in their applications to centrality measurement.
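
    A minimal Python sketch of the iterative propagation described above, in which activation spreads from an initial seed along outgoing edges with a decay factor; the graph, decay value, and number of iterations are illustrative assumptions, and a vectorised variant would replace the scalar activation per node with a vector.

        import numpy as np

        def spreading_activation(adjacency, seed, decay=0.8, iterations=10):
            # Propagate activation from seed nodes along outgoing edges
            A = np.asarray(adjacency, dtype=float)
            out_degree = A.sum(axis=1, keepdims=True)
            out_degree[out_degree == 0] = 1.0
            transfer = A / out_degree              # split a node's activation over its out-links
            activation = np.asarray(seed, dtype=float)
            for _ in range(iterations):
                activation = activation + decay * (transfer.T @ activation)
            return activation                      # usable as a centrality-like score

        # Hypothetical 4-node directed graph, seeding node 0
        adjacency = [[0, 1, 1, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1],
                     [1, 0, 0, 0]]
        print(spreading_activation(adjacency, [1.0, 0.0, 0.0, 0.0]))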

  2. Development of Image Reconstruction Algorithms in electrical Capacitance Tomography

    International Nuclear Information System (INIS)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-01-01

    Electrical Capacitance Tomography (ECT) has not yet been developed far enough to be used at the industrial level. This is due, first, to the difficulty of measuring very small capacitances (in the femtofarad range) and, second, to the problem of on-line reconstruction of the images. The latter problem is also due to the small number of electrodes (at most 16), which makes the usual reconstruction algorithms produce many errors. In this work, a new purely geometrical method that could be used for this purpose is described. (Author) 4 refs

  3. Integration of genetic algorithm, computer simulation and design of experiments for forecasting electrical energy consumption

    International Nuclear Information System (INIS)

    Azadeh, A.; Tarverdian, S.

    2007-01-01

    This study presents an integrated algorithm for forecasting monthly electrical energy consumption based on a genetic algorithm (GA), computer simulation, and design of experiments using stochastic procedures. First, a time-series model is developed as a benchmark for the GA and simulation. A computer simulation is developed to generate random variables for monthly electricity consumption, in order to foresee the effects of the probability distribution on monthly electricity consumption. The GA and simulation-based GA models are then developed from the selected time-series model. Therefore, there are four treatments to be considered in the analysis of variance (ANOVA): actual data, time series, GA, and simulation-based GA. Furthermore, ANOVA is used to test the null hypothesis that the above four alternatives are equal. If the null hypothesis is accepted, then the lowest mean absolute percentage error (MAPE) value is used to select the best model; otherwise the Duncan Multiple Range Test (DMRT) method of paired comparison is used to select the optimum model, which could be the time series, GA or simulation-based GA. In case of ties, the lowest MAPE value is considered the benchmark. The integrated algorithm has several unique features. First, it is flexible and identifies the best model based on the results of ANOVA and MAPE, whereas previous studies consider the best-fit GA model based on MAPE or relative error results. Second, the proposed algorithm may identify the conventional time series as the best model for future electricity consumption forecasting because of its dynamic structure, whereas previous studies assume that the GA always provides the best solutions and estimates. To show the applicability and superiority of the proposed algorithm, the monthly electricity consumption in Iran from March 1994 to February 2005 (131 months) is used and applied to the proposed algorithm.
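
    A minimal Python sketch of the model-selection step described above: when the candidates are statistically indistinguishable, the model with the lowest mean absolute percentage error (MAPE) is chosen. The forecast values below are made-up numbers for illustration.

        import numpy as np

        def mape(actual, forecast):
            # Mean absolute percentage error, in percent
            actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
            return 100.0 * np.mean(np.abs((actual - forecast) / actual))

        def select_model(actual, forecasts):
            # Pick the candidate with the lowest MAPE against the actual series
            scores = {name: mape(actual, f) for name, f in forecasts.items()}
            return min(scores, key=scores.get), scores

        # Hypothetical monthly consumption and three candidate forecasts
        actual = [100, 110, 120, 115]
        candidates = {"time series": [98, 112, 118, 117],
                      "GA": [101, 108, 123, 114],
                      "simulation-based GA": [99, 111, 119, 116]}
        print(select_model(actual, candidates))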

  4. Expeditious 3D Poisson Vlasov algorithm applied to ion extraction from a plasma

    International Nuclear Information System (INIS)

    Whealton, J.H.; McGaffey, R.W.; Meszaros, P.S.

    1983-01-01

    A new 3D Poisson Vlasov algorithm is under development which differs from a previous algorithm, referenced in this paper, in two respects: the mesh lines are Cartesian, and the Poisson equation is solved iteratively. The resulting algorithm has been used to examine the same boundary value problem as the earlier algorithm, except that the number of nodes is 2 times greater. The same physical results were obtained, but the computational time was reduced by a factor of 60 and the memory requirement was reduced by a factor of 10. At present, the algorithm restricts Neumann boundary conditions to orthogonal planes lying along mesh lines; no such restriction applies to Dirichlet boundaries. An emittance diagram is presented in which points lying on the y = 0 line start on the axis of symmetry and those near the y = 1 line start near the slot end.

  5. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    Science.gov (United States)

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce a better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
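
    As a point of reference, a minimal sketch of a standard recursive-least-squares weight update (not the paper's filtered-x or adjoint variants, which additionally propagate signals through the plant model). The toy identification data below are illustrative only.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One standard RLS step: update weights w and inverse-correlation matrix P
    for input vector x, desired output d, and forgetting factor lam."""
    x = x.reshape(-1, 1)
    Px = P @ x
    k = Px / (lam + x.T @ Px)          # gain vector
    e = d - (w.T @ x).item()           # a priori error
    w = w + k * e                      # weight update
    P = (P - k @ x.T @ P) / lam        # inverse-correlation update
    return w, P, e

# Toy identification of a 3-tap filter from noisy data.
rng = np.random.default_rng(0)
true_w = np.array([[0.5], [-0.3], [0.1]])
w, P = np.zeros((3, 1)), np.eye(3) * 100.0
for _ in range(500):
    x = rng.standard_normal(3)
    d = (true_w.T @ x.reshape(-1, 1)).item() + 0.01 * rng.standard_normal()
    w, P, _ = rls_update(w, P, x, d)
print(w.ravel())  # approaches [0.5, -0.3, 0.1]
```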

  6. Four (Algorithms) in One (Bag): An Integrative Framework of Knowledge for Teaching the Standard Algorithms of the Basic Arithmetic Operations

    Science.gov (United States)

    Raveh, Ira; Koichu, Boris; Peled, Irit; Zaslavsky, Orit

    2016-01-01

    In this article we present an integrative framework of knowledge for teaching the standard algorithms of the four basic arithmetic operations. The framework is based on a mathematical analysis of the algorithms, a connectionist perspective on teaching mathematics and an analogy with previous frameworks of knowledge for teaching arithmetic…

  7. A Faster Algorithm for Computing Motorcycle Graphs

    KAUST Repository

    Vigneron, Antoine E.; Yan, Lie

    2014-01-01

    We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.

  8. A Faster Algorithm for Computing Motorcycle Graphs

    KAUST Repository

    Vigneron, Antoine E.

    2014-08-29

    We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.

  9. Unveiling the development of intracranial injury using dynamic brain EIT: an evaluation of current reconstruction algorithms.

    Science.gov (United States)

    Li, Haoting; Chen, Rongqing; Xu, Canhua; Liu, Benyuan; Tang, Mengxing; Yang, Lin; Dong, Xiuzhen; Fu, Feng

    2017-08-21

    Dynamic brain electrical impedance tomography (EIT) is a promising technique for continuously monitoring the development of cerebral injury. While there are many reconstruction algorithms available for brain EIT, there have been few studies comparing their performance in the context of dynamic brain monitoring. To address this problem, we develop a framework for evaluating current algorithms in terms of their ability to correctly identify small intracranial conductivity changes. First, a simulated 3D head phantom with a realistic layered structure and impedance distribution is developed. Next, several reconstruction algorithms, such as back projection (BP), damped least-squares (DLS), Bayesian, split Bregman (SB) and GREIT, are introduced. We investigate their temporal response, noise performance, and location and shape error with respect to different noise levels on the simulation phantom. The results show that the SB algorithm demonstrates superior performance in reducing image error. To further improve the location accuracy, we optimize SB by incorporating brain structure-based conductivity distribution priors, in which differences between the conductivities of different brain tissues and the inhomogeneous conductivity distribution of the skull are considered. We compare this novel algorithm (called SB-IBCD) with SB and DLS using anatomically correct head-shaped phantoms with spatially varying skull conductivity. Main results and significance: The results showed that SB-IBCD is the most effective in unveiling small intracranial conductivity changes, reducing the image error by an average of 30.0% compared to DLS.

  10. Development of an improved genetic algorithm and its application in the optimal design of ship nuclear power system

    International Nuclear Information System (INIS)

    Jia Baoshan; Yu Jiyang; You Songbo

    2005-01-01

    This article focuses on the development of an improved genetic algorithm and its application in the optimal design of the ship nuclear reactor system, whose goal is to find a combination of system parameter values that minimizes the mass or volume of the system given the power capacity requirement and safety criteria. An improved genetic algorithm (IGA) was developed using an 'average fitness value' grouping + 'specified survival probability' rank selection method and a 'separate-recombine' duplication operator. Combined with a simulated annealing algorithm (SAA) that continues the local search after the IGA reaches a satisfactory point, the algorithm gave satisfactory optimization results in terms of both search efficiency and accuracy. This IGA-SAA algorithm successfully solved the design optimization problem of the ship nuclear power system. It is an advanced and efficient methodology that can be applied to similar optimization problems in other areas. (authors)
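
    A minimal sketch of the hybrid idea described above: take the best candidate returned by a genetic search and refine it with simulated annealing. The quadratic objective and the GA result used here are hypothetical stand-ins for the reactor-system design objective.

```python
import math, random

def objective(x):
    # Hypothetical stand-in for system mass/volume given design parameters x.
    return sum((xi - 1.5) ** 2 for xi in x)

def simulated_annealing(x0, f, t0=1.0, t_min=1e-4, alpha=0.95, steps=50):
    """Refine a GA solution x0 by simulated annealing on objective f."""
    x, fx = list(x0), f(x0)
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = [xi + random.gauss(0, 0.1) for xi in x]
            fc = f(cand)
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
        t *= alpha  # cooling schedule
    return x, fx

ga_best = [1.2, 1.9, 1.4]            # pretend this came from the GA stage
refined, value = simulated_annealing(ga_best, objective)
print(refined, value)
```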

  11. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
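
    A highly simplified sketch of the underlying idea of bit-coding DNA bases (plain 2-bit packing only; the actual DNABIT Compress scheme additionally assigns special codes to exact and reverse repeats):

```python
# Simplified illustration: pack A/C/G/T into 2 bits each.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    bits = 0
    for b in seq:
        bits = (bits << 2) | CODE[b]
    return bits, len(seq)

def unpack(bits, n):
    out = []
    for i in range(n):
        out.append(BASE[(bits >> (2 * (n - 1 - i))) & 0b11])
    return "".join(out)

packed, n = pack("ACGTACGT")
assert unpack(packed, n) == "ACGTACGT"
print(f"{n} bases -> {2 * n} bits")
```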

  12. A new algorithm for hip fracture surgery

    DEFF Research Database (Denmark)

    Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim

    2012-01-01

    Background and purpose Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on own previously published results. Methods 2,000 consecutive patients over 50...... years of age who were admitted and operated on because of a hip fracture were prospectively included. 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient...... by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised...

  13. The development of a novel knowledge-based weaning algorithm using pulmonary parameters: a simulation study.

    Science.gov (United States)

    Guler, Hasan; Kilic, Ugur

    2018-03-01

    Weaning is important for patients and clinicians, who must determine the correct weaning time so that patients do not become dependent on the ventilator. Several predictors have already been developed, such as the rapid shallow breathing index (RSBI), the pressure time index (PTI), and the Jabour weaning index. Many important dimensions of weaning are sometimes ignored by these predictors. This is an attempt to develop a knowledge-based weaning process via fuzzy logic that eliminates the disadvantages of the present predictors. Sixteen vital parameters listed in the published literature were used to determine the weaning decisions in the developed system. Since too many individual parameters would otherwise have to be considered, related parameters were grouped together to determine acid-base balance, adequate oxygenation, adequate pulmonary function, hemodynamic stability, and the psychological status of the patients. To test the performance of the developed algorithm, 20 clinical scenarios were generated using Monte Carlo simulations and the Gaussian distribution method. The developed knowledge-based algorithm and the RSBI predictor were applied to the generated scenarios. Finally, a clinician evaluated each clinical scenario independently. Student's t test was used to show the statistical differences between the developed weaning algorithm, RSBI, and the clinician's evaluation. According to the results obtained, there were no statistical differences between the proposed methods and the clinician evaluations.

  14. New algorithm for risk analysis in radiotherapy

    International Nuclear Information System (INIS)

    Torres, Antonio; Montes de Oca, Joe

    2015-01-01

    Risk analyses applied to radiotherapy treatments have become an undeniable necessity, considering the dangers generated by the combination of powerful radiation fields applied to patients and the occurrence of human errors and equipment failures during these treatments. The technique par excellence for executing these analyses has been the risk matrix. This paper presents the development of a new algorithm to execute the task, with broad graphic and analytic capabilities, making it a very useful option for risk monitoring and the optimization of quality assurance. The system SECURE-MR, which is the basic software implementing this algorithm, has been successfully used in risk analyses of different kinds of radiotherapy. Compared to previous methods, it offers new possibilities of analysis, considering risk-controlling factors such as the robustness of the reducers of initiator frequencies and of their consequences. Its analytic and graphic capabilities allow novel ways to classify risk-contributing factors and to represent information processes as well as accident sequences. The paper shows the application of the proposed system to a generic radiotherapy treatment process using a linear accelerator. (author)

  15. A Novel Algorithm (G-JPSO) and Its Development for the Optimal Control of Pumps in Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Rasoul Rajabpour

    2017-01-01

    Recent decades have witnessed growing applications of metaheuristic techniques as efficient tools for solving complex engineering problems. One such method is the JPSO algorithm. In this study, innovative modifications were made to the jump-based JPSO algorithm to make it capable of coping with graph-based solutions, which led to the development of a new algorithm called 'G-JPSO'. The new algorithm was then used to solve the Fletcher-Powell optimal control problem, and its application to the optimal control of pumps in water distribution networks was evaluated. Optimal control of pumps consists in finding an optimum operation timetable (on and off) for each of the pumps over the desired time interval. The maximum number of on and off positions for each pump was introduced into the objective function as a constraint such that not only would power consumption at each node be reduced but problem requirements such as the minimum pressure required at each node and minimum/maximum storage tank heights would also be met. To determine the optimal operation of the pumps, a model-based optimization-simulation algorithm was developed based on the G-JPSO and JPSO algorithms. The model proposed by van Zyl was used to determine the optimal operation of the distribution network. Finally, the results obtained from the proposed algorithm were compared with those obtained from ant colony, genetic, and JPSO algorithms to show the robustness of the proposed algorithm in finding near-optimum solutions at reasonable computation costs.

  16. A new cut-based algorithm for the multi-state flow network reliability problem

    International Nuclear Information System (INIS)

    Yeh, Wei-Chang; Bae, Changseok; Huang, Chia-Ling

    2015-01-01

    Many real-world systems can be modeled as multi-state network systems in which reliability can be derived in terms of the lower bound points of level d, called d-minimal cuts (d-MCs). This study proposes a new method to find and verify obtained d-MCs, with simple and useful properties, for the multi-state flow network reliability problem. The proposed algorithm runs in O(mσp) time, which represents a significant improvement over the previous O(mp²σ) time bound based on max-flow/min-cut, where p, σ and m denote the number of MCs, d-MC candidates and edges, respectively. The proposed algorithm also overcomes the weakness of some existing methods, which failed to remove duplicate d-MCs in special cases. A step-by-step example is given to demonstrate how the proposed algorithm locates and verifies all d-MC candidates. As evidence of the utility of the proposed approach, we present extensive computational results on 20 benchmark networks in another example. The computational results compare favorably with a previously developed algorithm in the literature. - Highlights: • A new method is proposed to find all d-MCs for multi-state flow networks. • The proposed method can prevent the generation of d-MC duplicates. • The proposed method is simpler and more efficient than the best-known algorithms

  17. Development of a general learning algorithm with applications in nuclear reactor systems

    Energy Technology Data Exchange (ETDEWEB)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs.

  18. Development of a general learning algorithm with applications in nuclear reactor systems

    International Nuclear Information System (INIS)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs

  19. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints on the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
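
    For reference, a minimal sketch of HOG-plus-SVM person detection using OpenCV's built-in pedestrian detector (illustrative only; the paper's own detector and training data are not described here). The image path is a placeholder.

```python
import cv2

# OpenCV ships a HOG descriptor with a pre-trained linear SVM for pedestrians.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("frame.png")          # placeholder path to a camera frame
rects, weights = hog.detectMultiScale(img, winStride=(8, 8), padding=(8, 8), scale=1.05)

for (x, y, w, h) in rects:             # draw detected people
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", img)
```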

  20. Modelling and development of estimation and control algorithms: application to a bio process; Modelisation et elaboration d'algorithmes d'estimation et de commande: application a un bioprocede

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M

    1995-02-03

    The purpose of this thesis is the modelling, estimation and control of an alcoholic fermentation process. A simple mathematical model of the fermentation process is established using experimental results obtained on the plant. This nonlinear model is used for numerical simulation and for the analysis and synthesis of estimation and control algorithms. The problem of nonlinear state and parameter estimation of bioprocesses is studied. Two estimation techniques are developed and proposed to bypass the lack of sensors for certain physical variables. Their performance is studied by numerical simulation. One of these estimators is validated on experimental results of batch and continuous fermentations. An adaptive control law is proposed for the regulation and tracking of the substrate concentration of the plant by acting on the dilution rate. It is a nonlinear control strategy coupled with the previously validated estimator. The performance of this control law is evaluated by a real application to a continuous-flow fermentation process. (author) refs.

  1. Sequence-based prediction of protein protein interaction using a deep-learning algorithm.

    Science.gov (United States)

    Sun, Tanlin; Zhou, Bo; Lai, Luhua; Pei, Jianfeng

    2017-05-25

    Protein-protein interactions (PPIs) are critical for many biological processes. It is therefore important to develop accurate high-throughput methods for identifying PPI to better understand protein function, disease occurrence, and therapy design. Though various computational methods for predicting PPI have been developed, their robustness for prediction with external datasets is unknown. Deep-learning algorithms have achieved successful results in diverse areas, but their effectiveness for PPI prediction has not been tested. We used a stacked autoencoder, a type of deep-learning algorithm, to study the sequence-based PPI prediction. The best model achieved an average accuracy of 97.19% with 10-fold cross-validation. The prediction accuracies for various external datasets ranged from 87.99% to 99.21%, which are superior to those achieved with previous methods. To our knowledge, this research is the first to apply a deep-learning algorithm to sequence-based PPI prediction, and the results demonstrate its potential in this field.
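
    A minimal sketch of the core building block, a single autoencoder layer trained to reconstruct its input (in a stacked autoencoder, such layers are trained greedily and their encoders are then stacked, typically with a classifier on top). The feature dimension and data here are hypothetical; the paper's sequence encoding is not reproduced.

```python
import torch
import torch.nn as nn

dim_in, dim_hidden = 400, 128          # hypothetical sequence-feature sizes
encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU())
decoder = nn.Linear(dim_hidden, dim_in)
autoencoder = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, dim_in)           # placeholder for encoded protein pairs
for epoch in range(20):                # unsupervised pre-training step
    opt.zero_grad()
    loss = loss_fn(autoencoder(x), x)
    loss.backward()
    opt.step()

features = encoder(x)                  # learned representation fed to the next layer
print(features.shape)                  # torch.Size([256, 128])
```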

  2. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single or sequences of sonar images. The tool is based on MATLAB providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested...... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed AUV path, etc. In the paper the main components of the SSIE is described and examples of different...... processing steps are given...

  3. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with ITBs (internal thermal barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  4. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  5. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with ITBs (internal thermal barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  6. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    Science.gov (United States)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  7. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. Since the code correctly followed the proposed method, good results were obtained, which encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to considerably improve radiotracer methodology, making the diagnosis of failures in industrial processes easier. (author)

  8. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. Since the code correctly followed the proposed method, good results were obtained, which encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to considerably improve radiotracer methodology, making the diagnosis of failures in industrial processes easier. (author)
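
    A minimal sketch of the expectation-maximization (MLEM) update that this kind of few-projection emission reconstruction adapts: the image estimate is multiplied by the back-projected ratio of measured to predicted counts. The system matrix and data below are small hypothetical placeholders, not the SPECTEM geometry.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Basic MLEM: A is the (projections x pixels) system matrix, y the measured counts."""
    x = np.ones(A.shape[1])                    # start from a uniform image
    sens = A.sum(axis=0)                       # sensitivity (back-projection of ones)
    for _ in range(n_iter):
        y_est = A @ x                          # forward projection
        ratio = y / np.maximum(y_est, 1e-12)   # measured / predicted counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny toy problem: four projections of a 4-pixel "plane" (underdetermined, as in the paper).
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
x_true = np.array([0.0, 2.0, 1.0, 0.0])
y = A @ x_true
print(mlem(A, y).round(2))  # a nonnegative solution consistent with the four views
```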

  9. Development of estimation algorithm of loose parts and analysis of impact test data

    International Nuclear Information System (INIS)

    Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho

    1999-11-01

    Loose parts are produced either by becoming detached from structures of the reactor coolant system (RCS) or by entering the RCS from outside during test operation, refueling, and overhaul. These loose parts are carried by the reactor coolant and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report an analysis algorithm for estimating the impact point and mass of a loose part is developed. The developed algorithm was tested with the impact test data of Yonggwang-3. The impact point estimated using the proposed algorithm showed a 5 percent error relative to the real test data. The estimated mass was within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor because this frequency affects the estimation of the impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data. The comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. By this comparison, the integrity of the sensor and monitoring system could also be checked. (author)

  10. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

    Optimum transport infrastructure usage is an important aspect of the development of the national economy of the Russian Federation. Development of instruments for assessing the efficiency of infrastructure is therefore impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of indicators and the method of their calculation in relation to airport infrastructure as a transport subsystem. The work also addresses the evaluation of how algorithmic computational mechanisms can improve the tools of public administration of transport subsystems.

  11. Contrast data mining concepts, algorithms, and applications

    CERN Document Server

    Dong, Guozhu

    2012-01-01

    A Fruitful Field for Researching Data Mining Methodology and for Solving Real-Life Problems Contrast Data Mining: Concepts, Algorithms, and Applications collects recent results from this specialized area of data mining that have previously been scattered in the literature, making them more accessible to researchers and developers in data mining and other fields. The book not only presents concepts and techniques for contrast data mining, but also explores the use of contrast mining to solve challenging problems in various scientific, medical, and business domains. Learn from Real Case Studies

  12. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development

    OpenAIRE

    Ziatdinov, Rushan; Musa, Sajid

    2013-01-01

    In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are actually simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system also forms a basis for the study of computer science in secondary school.

  13. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these kinds of issues, automation is not actively adopted, although human error probability drastically increases during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques, known as 'deep learning' techniques, have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of the games. Also in 2016, 'AlphaGo', which was developed by Google DeepMind based on deep learning techniques to play the game of Go (i.e., Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As the first part, a quantitative, real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e., fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, so adoption in the form of full automation will take a long time.

  14. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2016-01-01

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these kinds of issues, automation is not actively adopted, although human error probability drastically increases during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques, known as 'deep learning' techniques, have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of the games. Also in 2016, 'AlphaGo', which was developed by Google DeepMind based on deep learning techniques to play the game of Go (i.e., Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As the first part, a quantitative, real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e., fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, so adoption in the form of full automation will take a long time.

  15. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support the development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  16. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
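
    For orientation, the classical least-squares fit that this quantum algorithm accelerates can be written in a few lines; per the abstract, the quantum version outputs the same parameter vector in classical form but with run time polylogarithmic in N. The data below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 1000, 3                          # N samples, d adjustable parameters
X = rng.standard_normal((N, d))         # design matrix (dense is fine here)
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(N)

# Classical least squares: minimize ||X beta - y||_2.
beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                             # close to [2.0, -1.0, 0.5]

# Quality of fit (analogous to what the second quantum algorithm estimates).
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```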

  17. Computer and machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2012-01-01

    Computer and Machine Vision: Theory, Algorithms, Practicalities (previously entitled Machine Vision) clearly and systematically presents the basic methodology of computer and machine vision, covering the essential elements of the theory while emphasizing algorithmic and practical design constraints. This fully revised fourth edition has brought in more of the concepts and applications of computer vision, making it a very comprehensive and up-to-date tutorial text suitable for graduate students, researchers and R&D engineers working in this vibrant subject. Key features include: Practical examples and case studies give the 'ins and outs' of developing real-world vision systems, giving engineers the realities of implementing the principles in practice New chapters containing case studies on surveillance and driver assistance systems give practical methods on these cutting-edge applications in computer vision Necessary mathematics and essential theory are made approachable by careful explanations and well-il...

  18. Mentoring to develop research self-efficacy, with particular reference to previously disadvantaged individuals

    OpenAIRE

    S. Schulze

    2010-01-01

    The development of inexperienced researchers is crucial. In response to the lack of research self-efficacy of many previously disadvantaged individuals, the article examines how mentoring can enhance the research self-efficacy of mentees. The study is grounded in the self-efficacy theory (SET) – an aspect of the social cognitive theory (SCT). Insights were gained from an in-depth study of SCT, SET and mentoring, and from a completed mentoring project. This led to the formulation of three basi...

  19. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    Science.gov (United States)

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? SUMMARY ANSWER The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. WHAT IS KNOWN ALREADY Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. STUDY DESIGN, SIZE, DURATION Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the

  20. The development of gamma energy identify algorithm for compact radiation sensors using stepwise refinement technique

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Jun [Div. of Radiation Regulation, Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Ye Won; Kim, Hyun Duk; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Yi, Yun [Dept. of of Electronics and Information Engineering, Korea University, Seoul (Korea, Republic of)

    2017-06-15

    A gamma energy identification algorithm using spectral decomposition combined with a smoothing method was proposed to confirm the presence of artificial radioisotopes. The algorithm combines the original pattern-recognition (spectral decomposition) method with a smoothing method to enhance gamma energy identification for radiation sensors that have low energy resolution. The gamma energy identification algorithm for the compact radiation sensor is a three-step refinement process. First, the magnitude set is calculated by the original spectral decomposition. Second, the modeling error in the magnitude set is reduced by the smoothing method. Third, the expected gamma energy is decided based on the enhanced magnitude set resulting from the spectral decomposition with the smoothing method. The algorithm was optimized for the designed radiation sensor, composed of a CsI(Tl) scintillator and a silicon PIN diode. The two performance parameters used to evaluate the algorithm are the accuracy of the expected gamma energy and the number of repeated calculations. With this modeling-error reduction method, the original gamma energy was accurately identified for single-energy gamma radiation. For multi-energy gamma radiation, the average error also decreased by half compared to the original spectral decomposition. In addition, the number of repeated calculations decreased by half even in low-fluence conditions below 10⁴ (per 0.09 cm² of the scintillator surface). Through the development of this algorithm, we have confirmed the possibility of developing a product that can identify nearby artificial radionuclides using inexpensive radiation sensors that are easy for the public to use. Therefore, it can contribute to reducing public anxiety about exposure by determining the presence of artificial radionuclides in the vicinity.
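
    A minimal sketch of the smoothing-before-identification idea for a low-resolution spectrum, using a Savitzky-Golay filter and a simple peak search (illustrative only; the paper's spectral-decomposition step and its magnitude set are not reproduced):

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

# Synthetic low-resolution spectrum: a broad photopeak around channel 662 plus noise.
channels = np.arange(1024)
rng = np.random.default_rng(2)
spectrum = 200 * np.exp(-((channels - 662) ** 2) / (2 * 40 ** 2)) + rng.poisson(20, 1024)

smoothed = savgol_filter(spectrum, window_length=31, polyorder=3)  # smoothing step
peaks, _ = find_peaks(smoothed, prominence=50)                     # candidate photopeaks

print(peaks)  # expected to contain a channel near 662
```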

  1. An algorithmic decomposition of claw-free graphs leading to an O(n^3) algorithm for the weighted stable set problem

    OpenAIRE

    Faenza, Y.; Oriolo, G.; Stauffer, G.

    2011-01-01

    We propose an algorithm for solving the maximum weighted stable set problem on claw-free graphs that runs in O(n^3) time, drastically improving the previous best known complexity bound. This algorithm is based on a novel decomposition theorem for claw-free graphs, which is also introduced in the present paper. Despite being weaker than the well-known structure result for claw-free graphs given by Chudnovsky and Seymour, our decomposition theorem is, on the other hand, algorithmic, i.e. it is ...

  2. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  3. Development of a new genetic algorithm to solve the feedstock scheduling problem in an anaerobic digester

    Science.gov (United States)

    Cram, Ana Catalina

    As worldwide environmental awareness grows, alternative sources of energy have become important to mitigate climate change. Biogas in particular reduces greenhouse gas emissions that contribute to global warming and has the potential of providing 25% of the annual demand for natural gas in the U.S. In 2011, 55,000 metric tons of methane emissions were reduced and 301 metric tons of carbon dioxide emissions were avoided through the use of biogas alone. Biogas is produced by anaerobic digestion through the fermentation of organic material. It is mainly composed of methane, with a concentration ranging from 50 to 80%; carbon dioxide accounts for 20 to 50%, along with small amounts of hydrogen, carbon monoxide and nitrogen. Biogas production systems are anaerobic digestion facilities, and the optimal operation of an anaerobic digester requires the scheduling of all batches from multiple feedstocks during a specific time horizon. The availability times, biomass quantities, biogas production rates and storage decay rates must all be taken into account for maximal biogas production to be achieved during the planning horizon. Little work has been done to optimize the scheduling of different types of feedstock in anaerobic digestion facilities to maximize the total biogas produced by these systems. Therefore, in the present thesis, a new genetic algorithm is developed with the main objective of obtaining the optimal sequence in which different feedstocks will be processed and the optimal time to allocate to each feedstock in the digester, so as to maximize the production of biogas while considering different types of feedstocks, arrival times and decay rates. Moreover, all batches need to be processed in the digester within a specified time, with the restriction that only one batch can be processed at a time. The developed algorithm is applied to three different examples and a comparison with results obtained in previous studies is presented.
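
    A minimal sketch of a permutation-encoding genetic algorithm of the kind described: chromosomes are feedstock processing orders, fitness is total biogas under a simple storage-decay model, and order crossover plus swap mutation evolve the population. The feedstock names, decay model, rates, and quantities below are hypothetical placeholders, not the thesis data.

```python
import random

# Hypothetical feedstocks: (name, biogas yield per batch, yield lost per day waiting in storage)
FEEDSTOCKS = [("manure", 100, 2.0), ("silage", 150, 5.0), ("food_waste", 120, 8.0),
              ("glycerin", 200, 3.0), ("straw", 80, 1.0)]
BATCH_DAYS = 5  # each batch occupies the single digester for this long

def fitness(order):
    """Total biogas when batches are processed sequentially in the given order."""
    total, start = 0.0, 0
    for idx in order:
        _, biogas, decay = FEEDSTOCKS[idx]
        total += max(biogas - decay * start, 0.0)
        start += BATCH_DAYS
    return total

def order_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(order, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

population = [random.sample(range(len(FEEDSTOCKS)), len(FEEDSTOCKS)) for _ in range(30)]
for gen in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # truncation selection
    children = [mutate(order_crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print([FEEDSTOCKS[i][0] for i in best], fitness(best))
```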

  4. Some multigrid algorithms for SIMD machines

    Energy Technology Data Exchange (ETDEWEB)

    Dendy, J.E. Jr. [Los Alamos National Lab., NM (United States)

    1996-12-31

    Previously a semicoarsening multigrid algorithm suitable for use on SIMD architectures was investigated. Through the use of new software tools, the performance of this algorithm has been considerably improved. The method has also been extended to three space dimensions. The method performs well for strongly anisotropic problems and for problems with coefficients jumping by orders of magnitude across internal interfaces. The parallel efficiency of this method is analyzed, and its actual performance on the CM-5 is compared with its performance on the CRAY-YMP. A standard coarsening multigrid algorithm is also considered, and we compare its performance on these two platforms as well.

  5. Initialization-free generalized Deutsch-Jozsa algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Dong Pyo [School of Mathematical Sciences, Seoul National University, Seoul (Korea, Republic of)]. E-mail: dpchi@math.snu.ac.kr; Kim, Jinsoo [School of Electrical Engineering and Computer Science, Seoul National University, Seoul (Korea)]. E-mail: jkim@ee.snu.ac.kr; Lee, Soojoon [School of Mathematical Sciences, Seoul National University, Seoul (Korea)]. E-mail: level@math.snu.ac.kr

    2001-06-29

    We generalize the Deutsch-Jozsa algorithm by exploiting summations of the roots of unity. The generalized algorithm distinguishes a wider class of functions promised to be either constant or many to one and onto an evenly spaced range. As previously, the generalized quantum algorithm solves this problem using a single functional evaluation. We also consider the problem of distinguishing constant and evenly balanced functions and present a quantum algorithm for this problem that does not require any initialization of an auxiliary register involved in the process of functional evaluation and after solving the problem recovers the initial state of an auxiliary register. (author)
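
    For orientation, a small classical simulation of the standard Deutsch-Jozsa algorithm in its phase-oracle form (a single functional evaluation distinguishes constant from balanced functions). The generalized, initialization-free variant of the paper handles the wider class of functions described above and is not reproduced here.

```python
import numpy as np
from itertools import product

def hadamard_n(n):
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)
    return Hn

def deutsch_jozsa(f, n):
    """Return 'constant' or 'balanced' for a function promised to be one of the two."""
    Hn = hadamard_n(n)
    state = Hn @ np.eye(2 ** n)[0]                      # H^n |0...0>: uniform superposition
    phases = np.array([(-1) ** f(x) for x in product((0, 1), repeat=n)])
    state = Hn @ (phases * state)                       # phase oracle, then H^n again
    p_zero = abs(state[0]) ** 2                         # probability of measuring |0...0>
    return "constant" if p_zero > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0, 3))                    # -> 'constant'
print(deutsch_jozsa(lambda x: x[0], 3))                 # -> 'balanced'
```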

  6. A Performance Evaluation of Lightning-NO Algorithms in CMAQ

    Science.gov (United States)

    In the Community Multiscale Air Quality (CMAQv5.2) model, we have implemented two algorithms for lightning NO production; one algorithm is based on the hourly observed cloud-to-ground lightning strike data from National Lightning Detection Network (NLDN) to replace the previous m...

  7. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    International Nuclear Information System (INIS)

    Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng

    2011-01-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  8. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D.

  9. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D.

  10. Development of Human-level Decision Making Algorithm for NPPs through Deep Neural Networks: Conceptual Approach

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2017-01-01

    The development of operation support systems and automation systems is closely related to the machine learning field. However, since it is hard to achieve human-level delicacy and flexibility for complex tasks with conventional machine learning technologies, only operation support systems with simple purposes have been developed, and high-level automation studies have not been actively conducted. As one of the efforts to reduce human error in NPPs and to advance toward automation, the ultimate goal of this research is to develop a human-level decision-making algorithm for NPPs during emergency situations. The concepts of SL, RL, policy networks, value networks, and MCTS, which have been applied to decision-making algorithms in other fields, are introduced and combined with nuclear field specifications. Since the research is currently at the conceptual stage, more research is warranted.

  11. A relation between irreversibility and unlinkability for biometric template protection algorithms

    OpenAIRE

    井沼, 学

    2014-01-01

    For biometric recognition systems, privacy protection of enrolled users’ biometric information, which are called biometric templates, is a critical problem. Recently, various template protection algorithms have been proposed and many related previous works have discussed security notions to evaluate the protection performance of these protection algorithms. Irreversibility and unlinkability are important security notions discussed in many related previous works. In this paper, we prove that u...

  12. An adaptive left–right eigenvector evolution algorithm for vibration isolation control

    International Nuclear Information System (INIS)

    Wu, T Y

    2009-01-01

    The purpose of this research is to investigate the feasibility of utilizing an adaptive left and right eigenvector evolution (ALREE) algorithm for active vibration isolation. As depicted in the previous paper presented by Wu and Wang (2008 Smart Mater. Struct. 17 015048), the structural vibration behavior depends on both the disturbance rejection capability and mode shape distributions, which correspond to the left and right eigenvector distributions of the system, respectively. In this paper, a novel adaptive evolution algorithm is developed for finding the optimal combination of left–right eigenvectors of the vibration isolator, which is an improvement over the simultaneous left–right eigenvector assignment (SLREA) method proposed by Wu and Wang (2008 Smart Mater. Struct. 17 015048). The isolation performance index used in the proposed algorithm is defined by combining the orthogonality index of left eigenvectors and the modal energy ratio index of right eigenvectors. Through the proposed ALREE algorithm, both the left and right eigenvectors evolve such that the isolation performance index decreases, and therefore one can find the optimal combination of left–right eigenvectors of the closed-loop system for vibration isolation purposes. The optimal combination of left–right eigenvectors is then synthesized to determine the feedback gain matrix of the closed-loop system. The result of the active isolation control shows that the proposed method can be utilized to improve the vibration isolation performance compared with the previous approaches

  13. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine trigonometric function. In the algorithm, random individuals are created, as many as the number of search agents, with a uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section so that only the regions expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA obtains better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and provides faster convergence, which increases the importance of this method.
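
    A rough sketch of a sine-and-golden-section position update in the spirit of Gold-SA is given below; the exact update rule and coefficient handling of the published method may differ, and the objective, bounds and parameters are placeholders.

        import math
        import random

        def sphere(x):                      # toy objective to minimize
            return sum(v * v for v in x)

        def gold_sa(obj, dim=5, agents=20, iters=300, lo=-10.0, hi=10.0):
            tau = (math.sqrt(5) - 1) / 2    # golden ratio conjugate
            a, b = -math.pi, math.pi
            x1, x2 = a + (1 - tau) * (b - a), a + tau * (b - a)   # golden-section coefficients
            pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
            best = min(pop, key=obj)[:]
            for _ in range(iters):
                for agent in pop:
                    r1 = random.uniform(0, 2 * math.pi)
                    r2 = random.uniform(0, math.pi)
                    for d in range(dim):
                        # sine-modulated step that pulls the agent toward the current best
                        agent[d] = (agent[d] * abs(math.sin(r1))
                                    - r2 * math.sin(r1) * abs(x1 * best[d] - x2 * agent[d]))
                        agent[d] = min(hi, max(lo, agent[d]))
                    if obj(agent) < obj(best):
                        best = agent[:]
                        # narrow the golden-section interval around the improving region
                        b, x2 = x2, x1
                        x1 = a + (1 - tau) * (b - a)
            return best, obj(best)

        print(gold_sa(sphere)[1])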

  14. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios and thus limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal to noise and signal to interference ratio of extracted test signals two to three fold, as well as increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  15. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    International Nuclear Information System (INIS)

    Levine, S.H.; Ivanov, K.; Feltus, M.

    1996-01-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors)

  16. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Levine, S H; Ivanov, K; Feltus, M [Pennsylvania State Univ., University Park, PA (United States)

    1996-12-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors).

  17. A simulated-based neural network algorithm for forecasting electrical energy consumption in Iran

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Sohrabkhani, S.

    2008-01-01

    This study presents an integrated algorithm for forecasting monthly electrical energy consumption based on an artificial neural network (ANN), computer simulation and design of experiments using stochastic procedures. First, an ANN approach based on a supervised multi-layer perceptron (MLP) network is illustrated for electricity consumption forecasting. The chosen model can then be compared with one estimated by a time series model. Computer simulation is developed to generate random variables for monthly electricity consumption. This is done to foresee the effects of the probability distribution on monthly electricity consumption. The simulated-based ANN model is then developed. Therefore, there are four treatments to be considered in the analysis of variance (ANOVA): actual data, time series, ANN and simulated-based ANN. Furthermore, ANOVA is used to test the null hypothesis that the above four alternatives are statistically equal. If the null hypothesis is accepted, then the lowest mean absolute percentage error (MAPE) value is used to select the best model; otherwise the Duncan multiple range test (DMRT) of paired comparison is used to select the optimum model, which could be time series, ANN or simulated-based ANN. In case of ties the lowest MAPE value is considered as the benchmark. The integrated algorithm has several unique features. First, it is flexible and identifies the best model based on the results of ANOVA and MAPE, whereas previous studies consider the best fitted ANN model based on MAPE or relative error results. Second, the proposed algorithm may identify conventional time series as the best model for future electricity consumption forecasting because of its dynamic structure, whereas previous studies assume that ANN always provides the best solutions and estimation. To show the applicability and superiority of the proposed algorithm, the monthly electricity consumption in Iran from March 1994 to February 2005 (131 months) is used and applied to
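
    The model-selection step (lowest MAPE wins when ANOVA cannot separate the alternatives) can be sketched as follows; the actual and forecast series below are made-up placeholders.

        def mape(actual, forecast):
            """Mean absolute percentage error between an actual and a forecast series."""
            return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

        # Hypothetical monthly consumption and three candidate forecasts
        actual = [102.0, 110.0, 98.0, 120.0, 115.0]
        candidates = {
            "time series":         [100.0, 108.0, 101.0, 118.0, 117.0],
            "ANN":                 [103.0, 111.0, 97.0, 121.0, 114.0],
            "simulated-based ANN": [101.0, 109.0, 99.0, 119.0, 116.0],
        }

        # If ANOVA cannot reject equality of the four treatments, the lowest MAPE breaks the tie.
        scores = {name: mape(actual, fc) for name, fc in candidates.items()}
        print(scores, "->", min(scores, key=scores.get))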

  18. Collaboration space division in collaborative product development based on a genetic algorithm

    Science.gov (United States)

    Qian, Xueming; Ma, Yanqiao; Feng, Huan

    2018-02-01

    The advance in the global environment, rapidly changing markets, and information technology has created a new stage for design. In such an environment, one strategy for success is the Collaborative Product Development (CPD). Organizing people effectively is the goal of Collaborative Product Development, and it solves the problem with certain foreseeability. The development group activities are influenced not only by the methods and decisions available, but also by correlation among personnel. Grouping the personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and minimum description length (MDL) principle may be used as tools in optimizing collaboration space. The MDL principle is used in setting up an object function, and the GA is used as a methodology. The algorithm encodes spatial information as a chromosome in binary. After repetitious crossover, mutation, selection and multiplication, a robust chromosome is found, which can be decoded into an optimal collaboration space. This new method can calculate the members in sub-spaces and individual groupings within the staff. Furthermore, the intersection of sub-spaces and public persons belonging to all sub-spaces can be determined simultaneously.

  19. An improved algorithm for connectivity analysis of distribution networks

    International Nuclear Information System (INIS)

    Kansal, M.L.; Devi, Sunita

    2007-01-01

    In the present paper, an efficient algorithm for connectivity analysis of moderately sized distribution networks is suggested. The algorithm is based on the generation of all possible minimal system cutsets. It is efficient because it identifies only the necessary and sufficient system failure conditions in n-out-of-n types of distribution networks. The proposed algorithm is demonstrated with the help of saturated and unsaturated distribution networks. The computational efficiency of the algorithm is justified by comparing its computational effort with that of the previously suggested appended spanning tree (AST) algorithm. The proposed technique has the added advantage that it can be utilized to generate system inequalities, which is useful in reliability estimation of capacitated networks.
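
    A brute-force sketch of minimal-cutset generation for a toy source-sink network is shown below; the example network is hypothetical, and the published algorithm is far more efficient because it restricts itself to the necessary and sufficient failure conditions.

        from itertools import combinations

        # Hypothetical distribution network: edges between nodes, supply "s", demand "t"
        EDGES = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]

        def connected(edges, src="s", dst="t"):
            """Stack-based search over the undirected edge list."""
            adj = {}
            for u, v in edges:
                adj.setdefault(u, set()).add(v)
                adj.setdefault(v, set()).add(u)
            seen, stack = {src}, [src]
            while stack:
                for m in adj.get(stack.pop(), ()):
                    if m not in seen:
                        seen.add(m)
                        stack.append(m)
            return dst in seen

        def minimal_cutsets(edges):
            cutsets = []
            for k in range(1, len(edges) + 1):
                for subset in combinations(edges, k):
                    remaining = [e for e in edges if e not in subset]
                    if not connected(remaining):
                        # keep only cutsets that contain no smaller cutset already found
                        if not any(set(c) <= set(subset) for c in cutsets):
                            cutsets.append(subset)
            return cutsets

        for cs in minimal_cutsets(EDGES):
            print(cs)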

  20. THE ALGORITHM AND PROGRAM OF M-MATRICES SEARCH AND STUDY

    Directory of Open Access Journals (Sweden)

    Y. N. Balonin

    2013-05-01

    Full Text Available The algorithm and software for the search and study of orthogonal basis matrices, the minimax matrices (M-matrices), are considered. The algorithm scheme is shown, comments on the calculation blocks are given, and the interface of the MMatrix software system, developed with the participation of the authors, is explained. The results of the universal algorithm are presented as Hadamard matrices, Belevitch matrices (C-matrices, i.e. conference matrices), and matrices of even and odd orders that are complementary and closely related to them by their properties, in particular the matrix of order 22, for which no C-matrix exists. Examples of portraits of alternative matrices of orders 255 and 257 are given, corresponding to the sequences of Mersenne and Fermat numbers. A new way to obtain Hadamard matrices is explained, different from the previously known procedures based on iterative processes and calculations of Legendre symbols, with theoretical and practical significance.

  1. Team Cooperation in a Network of Multi-Vehicle Unmanned Systems Synthesis of Consensus Algorithms

    CERN Document Server

    Semsar-Kazerooni, Elham

    2013-01-01

    Team Cooperation in a Network of Multi-Vehicle Unmanned Systems develops a framework for modeling and control of a network of multi-agent unmanned systems in a cooperative manner and with consideration of non-ideal and practical considerations. The main focus of this book is the development of “synthesis-based” algorithms rather than on conventional “analysis-based” approaches to the team cooperation, specifically the team consensus problems. The authors provide a set of modified “design-based” consensus algorithms whose optimality is verified through introduction of performance indices. This book also: Provides synthesis-based methodology for team cooperation Introduces a consensus-protocol optimized performance index  Offers comparisons for use of proper indices in measuring team performance Analyzes and predicts  performance of  previously designed consensus algorithms Analyses and predicts team behavior in the presence of non-ideal considerations such as actuator anomalies and faults as wel...

  2. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  3. Algorithmic tools for interpreting vital signs.

    Science.gov (United States)

    Rathbun, Melina C; Ruth-Sahd, Lisa A

    2009-07-01

    Today's complex world of nursing practice challenges nurse educators to develop teaching methods that promote critical thinking skills and foster quick problem solving in the novice nurse. Traditional pedagogies previously used in the classroom and clinical setting are no longer adequate to prepare nursing students for entry into practice. In addition, educators have expressed frustration when encouraging students to apply newly learned theoretical content to direct the care of assigned patients in the clinical setting. This article presents algorithms as an innovative teaching strategy to guide novice student nurses in the interpretation and decision making related to vital sign assessment in an acute care setting.

  4. Chronic wrist pain: diagnosis and management. Development and use of a new algorithm

    NARCIS (Netherlands)

    van Vugt, R. M.; Bijlsma, J. W.; van Vugt, A. C.

    1999-01-01

    Chronic wrist pain can be difficult to manage and the differential diagnosis is extensive. To provide guidelines for assessment of the painful wrist an algorithm was developed to encourage a structured approach to the diagnosis and management of these patients. A review of the literature on causes

  5. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems (orthogonal line segment intersection reporting, batched range reporting, and related problems), more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution...

  6. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    Science.gov (United States)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  7. Cryptographic protocol security analysis based on bounded constructing algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An efficient approach to analyzing cryptographic protocols is to develop automatic analysis tools based on formal methods. However, the approach has encountered the high computational complexity problem due to reasons that participants of protocols are arbitrary, their message structures are complex and their executions are concurrent. We propose an efficient automatic verifying algorithm for analyzing cryptographic protocols based on the Cryptographic Protocol Algebra (CPA) model proposed recently, in which algebraic techniques are used to simplify the description of cryptographic protocols and their executions. Redundant states generated in the analysis processes are much reduced by introducing a new algebraic technique called Universal Polynomial Equation and the algorithm can be used to verify the correctness of protocols in the infinite states space. We have implemented an efficient automatic analysis tool for cryptographic protocols, called ACT-SPA, based on this algorithm, and used the tool to check more than 20 cryptographic protocols. The analysis results show that this tool is more efficient, and an attack instance not offered previously is checked by using this tool.

  8. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm was developed employing two ANN models. • The algorithm showed the advanced thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm to demonstrate the improved thermal comfort and building energy efficiency of accommodation buildings in the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures and the other predicted the time required for restoring current indoor temperature to the normal set-point temperature. Using numerical simulation methods, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. Through the test result analysis, the two ANN models showed their prediction accuracy with an acceptable error rate when applied in the control algorithm. In addition, the two ANN models based algorithm can be used to provide a more comfortable and energy efficient indoor thermal environment than the two conventional control methods, which respectively employed a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period. Therefore, the operating range was 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with two predictive and adaptive ANN models can be used to design a more comfortable and energy efficient indoor thermal environment for accommodation buildings in a comprehensive manner.
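
    The decision step of such a control algorithm can be sketched as follows, with the two trained ANN models replaced by hypothetical placeholder functions.

        # Placeholder stand-ins for the two predictive ANN models described above.
        def predicted_cooling_energy(setback_temp_c):
            """Hypothetical: unoccupied-period cooling energy (kWh) falls as the setback rises."""
            return max(0.0, 50.0 - 4.0 * (setback_temp_c - 25.0))

        def predicted_restore_minutes(setback_temp_c, normal_setpoint_c=26.0):
            """Hypothetical: time needed to bring the room back to the normal set-point."""
            return 12.0 * max(0.0, setback_temp_c - normal_setpoint_c)

        def choose_setback(candidates, minutes_until_occupancy):
            """Pick the setback that minimizes predicted energy while still allowing the
            room to be restored to the set-point before occupants return."""
            feasible = [t for t in candidates
                        if predicted_restore_minutes(t) <= minutes_until_occupancy]
            return min(feasible, key=predicted_cooling_energy) if feasible else min(candidates)

        print(choose_setback([25.0, 26.0, 27.0, 28.0], minutes_until_occupancy=20))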

  9. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 4: Preliminary nonscanner models and count conversion algorithms

    Science.gov (United States)

    Halyo, Nesim; Choi, Sang H.

    1987-01-01

    Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants necessary for the conversion algorithms, though the frequency with which these updates were needed was uncertain. This analysis therefore develops mathematical models for the conversion of irradiance at the sensor field of view (FOV) limiter into data counts, derives from this model two algorithms for the conversion of data counts to irradiance at the sensor FOV aperture and develops measurement models which account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen since it provided sufficient accuracy using simpler computations.
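
    A minimal sketch of a gain/offset count conversion of the type described, with made-up coefficients, might look like this.

        # Hypothetical gain/offset pair maintained by the sensor model (updated as the sensor drifts).
        GAIN = 0.85      # counts per (W/m^2)
        OFFSET = 120.0   # counts at zero irradiance

        def counts_from_irradiance(irradiance_w_m2):
            """Forward model: irradiance at the FOV aperture -> data counts."""
            return GAIN * irradiance_w_m2 + OFFSET

        def irradiance_from_counts(counts):
            """Gain/offset count conversion: data counts -> irradiance estimate."""
            return (counts - OFFSET) / GAIN

        raw = counts_from_irradiance(340.0)
        print(raw, irradiance_from_counts(raw))   # recovers ~340 W/m^2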

  10. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storm has serious disastrous impacts on environment, human health, and assets. The developments and applications of dust storm models have contributed significantly to better understand and predict the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain on different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from combinational optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compared the performance with the MPI default sequential allocation. The results demonstrate that K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  11. An Active Sensor Algorithm for Corn Nitrogen Recommendations Based on a Chlorophyll Meter Algorithm

    Science.gov (United States)

    In previous work we found active canopy sensor reflectance assessments of corn (Zea mays L.) N status acquired at two growth stages (V11 and V15) have the greatest potential for directing in-season N applications, but emphasized an algorithm was needed to translate sensor readings into appropriate N...

  12. Measuring river from the cloud - River width algorithm development on Google Earth Engine

    Science.gov (United States)

    Yang, X.; Pavelsky, T.; Allen, G. H.; Donchyts, G.

    2017-12-01

    Rivers are some of the most dynamic features of the terrestrial land surface. They help distribute freshwater, nutrients, sediment, and they are also responsible for some of the greatest natural hazards. Despite their importance, our understanding of river behavior is limited at the global scale, in part because we do not have a river observational dataset that spans both time and space. Remote sensing data represent a rich, largely untapped resource for observing river dynamics. In particular, publicly accessible archives of satellite optical imagery, which date back to the 1970s, can be used to study the planview morphodynamics of rivers at the global scale. Here we present an image processing algorithm developed using the Google Earth Engine cloud-based platform, that can automatically extracts river centerlines and widths from Landsat 5, 7, and 8 scenes at 30 m resolution. Our algorithm makes use of the latest monthly global surface water history dataset and an existing Global River Width from Landsat (GRWL) dataset to efficiently extract river masks from each Landsat scene. Then a combination of distance transform and skeletonization techniques are used to extract river centerlines. Finally, our algorithm calculates wetted river width at each centerline pixel perpendicular to its local centerline direction. We validated this algorithm using in situ data estimated from 16 USGS gauge stations (N=1781). We find that 92% of the width differences are within 60 m (i.e. the minimum length of 2 Landsat pixels). Leveraging Earth Engine's infrastructure of collocated data and processing power, our goal is to use this algorithm to reconstruct the morphodynamic history of rivers globally by processing over 100,000 Landsat 5 scenes, covering from 1984 to 2013.
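
    A simplified sketch of the mask-to-centerline-to-width chain, using common scientific-Python tools rather than Earth Engine, is given below; the water mask is synthetic, and the width is approximated from the distance transform instead of being measured perpendicular to the local centerline direction as in the full algorithm.

        import numpy as np
        from scipy.ndimage import distance_transform_edt
        from skimage.morphology import skeletonize

        # Hypothetical binary water mask (True = water), e.g. derived from a Landsat scene.
        mask = np.zeros((60, 60), dtype=bool)
        mask[25:33, :] = True                   # a straight river, 8 pixels wide

        PIXEL_SIZE_M = 30.0                     # Landsat resolution

        dist_to_bank = distance_transform_edt(mask)   # distance from water pixels to the nearest bank
        centerline = skeletonize(mask)                # one-pixel-wide centerline of the mask

        # Approximate the wetted width at each centerline pixel as twice the bank distance.
        widths_m = 2.0 * dist_to_bank[centerline] * PIXEL_SIZE_M
        print(f"mean width: {widths_m.mean():.0f} m over {int(centerline.sum())} centerline pixels")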

  13. Systems Engineering Approach to Develop Guidance, Navigation and Control Algorithms for Unmanned Ground Vehicle

    Science.gov (United States)

    2016-09-01

    GPS: Global Positioning System; HNA: hybrid navigation algorithm; HRI: human-robot interface; IED: Improvised Explosive Device; IMU: inertial measurement unit; ... Potential Field Method; R&D: research and development; RDT&E: research, development, test and evaluation; RF: radiofrequency; RGB: red, green and blue; ROE: ... were radiofrequency (RF) controlled and pneumatically actuated upon receiving the wireless commands from the radio operator. The pairing of such an

  14. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    Science.gov (United States)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and the microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35°S to 35°N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which limit accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting, new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19; MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, thus offering much greater temporal and spatial sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz, and additionally three at the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology based on all available AMSU observations during 2000-11 that is stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made with similar data sets derived from other

  15. Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion

    Directory of Open Access Journals (Sweden)

    Qiuyu Wang

    2016-06-01

    Full Text Available In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering a low-rank data matrix from a sampling of its entries. Each iteration generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. This algorithm preserves the computational simplicity of the classical proximal gradient algorithm, in which a singular value decomposition is involved in the proximal operator. Global convergence follows directly from results in the literature. Numerical results are reported to show the efficiency of the algorithm.
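
    The proximal operator of the nuclear norm is soft-thresholding of the singular values; the sketch below shows a plain one-step proximal gradient iteration for matrix completion, without the paper's two-step combination of the latest three points. The test matrix and parameters are arbitrary.

        import numpy as np

        def svt(Y, tau):
            """Proximal operator of tau*||.||_*: soft-threshold the singular values."""
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def complete(M_obs, mask, lam=0.1, step=1.0, iters=500):
            """Minimize 0.5*||P_Omega(X - M)||_F^2 + lam*||X||_* by proximal gradient."""
            X = np.zeros_like(M_obs)
            for _ in range(iters):
                grad = mask * (X - M_obs)              # gradient of the data-fit term
                X = svt(X - step * grad, step * lam)   # proximal (shrinkage) step
            return X

        rng = np.random.default_rng(0)
        M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))   # rank-5 ground truth
        mask = rng.random(M.shape) < 0.5                                  # observe half the entries
        X = complete(M * mask, mask)
        print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))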

  16. Time-advance algorithms based on Hamilton's principle

    International Nuclear Information System (INIS)

    Lewis, H.R.; Kostelec, P.J.

    1993-01-01

    Time-advance algorithms based on Hamilton's variational principle are being developed for application to problems in plasma physics and other areas. Hamilton's principle was applied previously to derive a system of ordinary differential equations in time whose solution provides an approximation to the evolution of a plasma described by the Vlasov-Maxwell equations. However, the variational principle was not used to obtain an algorithm for solving the ordinary differential equations numerically. The present research addresses the numerical solution of systems of ordinary differential equations via Hamilton's principle. The basic idea is first to choose a class of functions for approximating the solution of the ordinary differential equations over a specific time interval. Then the parameters in the approximating function are determined by applying Hamilton's principle exactly within the class of approximating functions. For example, if an approximate solution is desired between time t and time t + Δ t, the class of approximating functions could be polynomials in time up to some degree. The issue of how to choose time-advance algorithms is very important for achieving efficient, physically meaningful computer simulations. The objective is to reliably simulate those characteristics of an evolving system that are scientifically most relevant. Preliminary numerical results are presented, including comparisons with other computational methods
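
    The same variational idea can be shown in its simplest discrete form: a midpoint variational integrator for a harmonic oscillator, where each step is obtained by making a discrete action stationary. This is only an illustrative analogue, not the authors' polynomial-class construction.

        import math

        M, K, H = 1.0, 4.0, 0.05      # mass, spring constant, time step

        def step(x_prev, x_curr):
            """Advance one step by making the discrete action stationary at x_curr:
            D2 L_d(x_prev, x_curr) + D1 L_d(x_curr, x_next) = 0, with the midpoint
            discrete Lagrangian L_d(a, b) = H*(0.5*M*((b-a)/H)**2 - 0.5*K*((a+b)/2)**2)."""
            rhs = M * (2 * x_curr - x_prev) / H - H * K * (x_prev + 2 * x_curr) / 4.0
            return rhs / (M / H + H * K / 4.0)

        # Start from x(0) = 1, v(0) = 0; seed the two-point recurrence with a series expansion.
        x_prev, x_curr = 1.0, 1.0 - 0.5 * (K / M) * H ** 2
        for _ in range(199):
            x_prev, x_curr = x_curr, step(x_prev, x_curr)

        print(x_curr, math.cos(math.sqrt(K / M) * 200 * H))   # compare with the exact solution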

  17. Rapid Mental Computation System as a Tool for Algorithmic Thinking of Elementary School Students Development

    Directory of Open Access Journals (Sweden)

    Rushan Ziatdinov

    2012-07-01

    Full Text Available In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are actually simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system allows forming the basis for the study of computer science in secondary school.

  18. A Novel Geo-Broadcast Algorithm for V2V Communications over WSN

    Directory of Open Access Journals (Sweden)

    José J. Anaya

    2014-08-01

    Full Text Available The key for enabling the next generation of advanced driver assistance systems (ADAS), the cooperative systems, is the availability of vehicular communication technologies, whose mandatory installation in cars is foreseen in the next few years. The definition of the communications is in the final step of development, with great efforts on standardization and some field operational tests of network devices and applications. However, some inter-vehicular communications issues are not sufficiently developed and are the target of research. One of these challenges is the construction of stable networks based on the position of the nodes of the vehicular network, as well as the broadcast of information destined to nodes concentrated in a specific geographic area without collapsing the network. In this paper, a novel algorithm for geo-broadcast communications is presented, based on the evolution of previous results in vehicular mesh networks using wireless sensor networks with IEEE 802.15.4 technology. This algorithm has been designed and compared with the IEEE 802.11p algorithms, implemented and validated in controlled conditions and tested on real vehicles. The results suggest that the characteristics of the designed broadcast algorithm can improve any vehicular communications architecture to complement a geo-networking functionality that supports a variety of ADAS.

  19. The development of a new algorithm to calculate a survival function in non-parametric ways

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    In this study, a generalized formula of the Kaplan-Meier method is developed. The idea of this algorithm is that the result of the Kaplan-Meier estimator is the same as that of the redistribute-to-the-right algorithm; hence, the result of the Kaplan-Meier estimator is obtained when we redistribute to the right. This can be explained in the following steps: first, the same mass is distributed to all the points; second, when a censored point is reached, the mass of that point must be redistributed to the right according to the following rule: normalize the masses located to the right of the censored point, and redistribute the mass of the censored point to the right according to the ratio of the normalized masses. Up to now, we have illustrated the main idea of this algorithm. The idea is more efficient than the PL-estimator in the sense that it decreases the mass after that area. Just like the redistribute-to-the-right algorithm, this method suffices from the standpoint of probability theory.
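
    A small sketch of the redistribute-to-the-right construction on hypothetical right-censored data is given below; at the observed event times it reproduces the Kaplan-Meier (PL) estimate.

        # Hypothetical right-censored data: (time, event); event=False means censored.
        data = sorted([(3, True), (5, False), (7, True), (8, True), (9, False), (12, True)])

        # Start by giving every observation the same probability mass.
        mass = [1.0 / len(data)] * len(data)

        # Redistribute-to-the-right: the mass of each censored point is passed on to the
        # observations to its right, in proportion to their current masses.
        for i, (t, event) in enumerate(data):
            if not event:
                right = range(i + 1, len(data))
                total = sum(mass[j] for j in right)
                if total > 0:
                    for j in right:
                        mass[j] += mass[i] * mass[j] / total
                mass[i] = 0.0

        # Survival function S(t): mass remaining strictly to the right of each event time.
        surv = 1.0
        for i, (t, event) in enumerate(data):
            surv -= mass[i]
            if event:
                print(f"S({t}) = {surv:.4f}")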

  20. An administrative data validation study of the accuracy of algorithms for identifying rheumatoid arthritis: the influence of the reference standard on algorithm performance.

    Science.gov (United States)

    Widdifield, Jessica; Bombardier, Claire; Bernatsky, Sasha; Paterson, J Michael; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra A; Jaakkimainen, R Liisa; Thorne, J Carter; Tu, Karen

    2014-06-23

    We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of "[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]" had a sensitivity of 78% (95% CI 69-88), specificity of 100% (95% CI 100-100), PPV of 78% (95% CI 69-88) and NPV of 100% (95% CI 100-100). Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group.
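
    The best-performing rule quoted above translates directly into code; the claims-record layout below is a hypothetical simplification.

        from datetime import date, timedelta

        def meets_ra_algorithm(hospitalizations, physician_claims):
            """'1 hospitalization RA diagnosis code' OR '3 physician RA diagnosis codes,
            with at least one by a specialist, within a 2-year window'.
            hospitalizations: dates carrying an RA code
            physician_claims: (date, is_specialist) pairs carrying an RA code"""
            if hospitalizations:
                return True
            claims = sorted(physician_claims)
            for i, (start, _) in enumerate(claims):
                window = [c for c in claims[i:] if c[0] <= start + timedelta(days=730)]
                if len(window) >= 3 and any(spec for _, spec in window):
                    return True
            return False

        # Hypothetical patient: three billing codes over 14 months, one from a rheumatologist.
        print(meets_ra_algorithm(
            hospitalizations=[],
            physician_claims=[(date(2010, 1, 5), False),
                              (date(2010, 9, 20), True),
                              (date(2011, 3, 2), False)]))   # -> True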

  1. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  2. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc.), genetic and evolutionary strategies, artificial immune systems etc. Well-known examples of applications include: aircraft wing design, wind turbine design, bionic car, bullet train, optimal decisions related to traffic, appropriate strategies to survive under a well-adapted immune system etc. Based on the collective social behaviour of organisms, researchers have developed optimization strategies taking into account not only the individuals, but also groups and environment. However, learning from nature, new classes of approaches can be identified, tested and compared against already available algorithms...

  3. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  4. Duality quantum algorithm efficiently simulates open quantum systems

    Science.gov (United States)

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d³), in contrast to O(d⁴) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
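
    The open-system map being implemented is rho -> sum_i K_i rho K_i†; the sketch below applies such a map classically with NumPy for a standard single-qubit amplitude-damping channel, purely to illustrate Kraus-operator evolution rather than the duality-computer implementation itself.

        import numpy as np

        def apply_kraus(rho, kraus_ops):
            """Open-system evolution: rho -> sum_i K_i rho K_i^dagger."""
            return sum(K @ rho @ K.conj().T for K in kraus_ops)

        # Amplitude-damping channel with decay probability gamma (a standard Kraus pair).
        gamma = 0.3
        K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
        K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

        rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)   # start in the excited state |1>
        for _ in range(5):                                        # five applications of the channel
            rho = apply_kraus(rho, [K0, K1])

        print(np.real(np.diag(rho)))    # population leaks from |1> to |0>
        print(np.trace(rho).real)       # the trace is preserved (trace-preserving map)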

  5. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  6. Improved algorithm for quantum separability and entanglement detection

    International Nuclear Information System (INIS)

    Ioannou, L.M.; Ekert, A.K.; Travaglione, B.C.; Cheung, D.

    2004-01-01

    Determining whether a quantum state is separable or entangled is a problem of fundamental importance in quantum information science. It has recently been shown that this problem is NP-hard, suggesting that an efficient, general solution does not exist. There is a highly inefficient 'basic algorithm' for solving the quantum separability problem which follows from the definition of a separable state. By exploiting specific properties of the set of separable states, we introduce a classical algorithm that solves the problem significantly faster than the 'basic algorithm', allowing a feasible separability test where none previously existed, e.g., in 3x3-dimensional systems. Our algorithm also provides a unique tool in the experimental detection of entanglement

  7. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi.

    Science.gov (United States)

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-07-01

    Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings.

  8. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Full Text Available Monitoring the behavior and activities of people in video surveillance has gained more applications in computer vision. This paper proposes a new approach to model the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by a frame-differencing algorithm. A thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, and shoulder, elbow, and knee points, are extracted. This work then represents the body model in three different ways: a stick figure model, a patch model and a rectangle body model. The activities of humans are analyzed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm are evaluated.
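
    A small sketch of the frame-differencing, thinning and feature-point chain is given below, with skeletonize standing in for the thinning step; the frames and silhouette are synthetic.

        import numpy as np
        from scipy.ndimage import convolve
        from skimage.morphology import skeletonize

        # Background subtraction by frame differencing (hypothetical grayscale frames).
        background = np.zeros((80, 60))
        frame = background.copy()
        frame[10:70, 25:35] = 1.0                    # a crude upright "body" silhouette
        silhouette = np.abs(frame - background) > 0.5

        # Thinning: reduce the silhouette to a one-pixel-wide skeleton.
        skeleton = skeletonize(silhouette)

        # Feature points from the skeleton: count the 8-connected skeleton neighbours of each pixel.
        neighbours = convolve(skeleton.astype(int), np.ones((3, 3), int), mode="constant") - skeleton
        terminating = skeleton & (neighbours == 1)    # end points (head, hands, feet)
        intersecting = skeleton & (neighbours >= 3)   # branch points (shoulders, hips)

        print("skeleton pixels:", int(skeleton.sum()),
              "end points:", int(terminating.sum()),
              "branch points:", int(intersecting.sum()))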

  9. A COMPARISON BETWEEN TWO ALGORITHMS FOR THE RETRIEVAL OF SOIL MOISTURE USING AMSR-E DATA

    Directory of Open Access Journals (Sweden)

    Simonetta ePaloscia

    2015-04-01

    Full Text Available A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-based algorithm (HydroAlgo) developed by the Institute of Applied Physics of the National Research Council (IFAC-CNR), and the second one is the Single Channel Algorithm (SCA) developed by USDA-ARS (US Department of Agriculture-Agricultural Research Service). Both algorithms are based on the same radiative transfer equations but are implemented very differently. Both made use of datasets provided by the Japanese Aerospace Exploration Agency (JAXA), within the framework of the Advanced Microwave Scanning Radiometer–Earth Observing System (AMSR-E) and Global Change Observation Mission–Water (GCOM/AMSR-2) programs. Results demonstrated that both algorithms perform better than the mission-specified accuracy, with Root Mean Square Error (RMSE) ≤0.06 m³/m³ and Bias <0.02 m³/m³. These results expand on previous investigations using different algorithms and sites. The novelty of the paper consists in the fact that it is the first intercomparison of the HydroAlgo algorithm with a more traditional retrieval algorithm, which offers an approach to higher spatial resolution products.

  10. Experimental investigation of the velocity field in buoyant diffusion flames using PIV and TPIV algorithm

    Science.gov (United States)

    L. Sun; X. Zhou; S.M. Mahalingam; D.R. Weise

    2005-01-01

    We investigated a simultaneous temporally and spatially resolved 2-D velocity field above a burning circular pan of alcohol using particle image velocimetry (PIV). The results obtained from PIV were used to assess a thermal particle image velocimetry (TPIV) algorithm previously developed to approximate the velocity field using the temperature field, simultaneously...

  11. The Development of Advanced Processing and Analysis Algorithms for Improved Neutron Multiplicity Measurements

    International Nuclear Information System (INIS)

    Santi, P.; Favalli, A.; Hauck, D.; Henzl, V.; Henzlova, D.; Ianakiev, K.; Iliev, M.; Swinhoe, M.; Croft, S.; Worrall, L.

    2015-01-01

    One of the most distinctive and informative signatures of special nuclear materials is the emission of correlated neutrons from either spontaneous or induced fission. Because the emission of correlated neutrons is a unique and unmistakable signature of nuclear materials, the ability to effectively detect, process, and analyze these emissions will continue to play a vital role in the non-proliferation, safeguards, and security missions. While currently deployed neutron measurement techniques based on 3He proportional counter technology, such as neutron coincidence and multiplicity counters currently used by the International Atomic Energy Agency, have proven to be effective over the past several decades for a wide range of measurement needs, a number of technical and practical limitations exist in continuing to apply this technique to future measurement needs. In many cases, those limitations exist within the algorithms that are used to process and analyze the detected signals from these counters that were initially developed approximately 20 years ago based on the technology and computing power that was available at that time. Over the past three years, an effort has been undertaken to address the general shortcomings in these algorithms by developing new algorithms that are based on fundamental physics principles that should lead to the development of more sensitive neutron non-destructive assay instrumentation. Through this effort, a number of advancements have been made in correcting incoming data for electronic dead time, connecting the two main types of analysis techniques used to quantify the data (Shift register analysis and Feynman variance to mean analysis), and in the underlying physical model, known as the point model, that is used to interpret the data in terms of the characteristic properties of the item being measured. The current status of the testing and evaluation of these advancements in correlated neutron analysis techniques will be discussed

  12. Development of pattern recognition algorithms for particles detection from atmospheric images

    International Nuclear Information System (INIS)

    Khatchadourian, S.

    2010-01-01

    The HESS experiment consists of a system of telescopes designed to observe cosmic rays. Since the project has achieved a high level of performance, a second phase has been initiated. This involves the addition of a new telescope which is more sensitive than its predecessors and capable of collecting a huge number of images. In this context, not all of the data collected by the telescope can be retained because of storage limitations. Therefore, a new real-time trigger system must be designed in order to select interesting events on the fly. The purpose of this thesis was to propose a trigger solution that efficiently discriminates the events (images) captured by the telescope. The first part of this thesis was to develop pattern recognition algorithms to be implemented within the trigger. A processing chain based on neural networks and Zernike moments has been validated. The second part of the thesis focused on the implementation of the proposed algorithms on an FPGA target, taking into account the application constraints in terms of resources and execution time. (author)

  13. A computer-aided detection (CAD) system with a 3D algorithm for small acute intracranial hemorrhage

    Science.gov (United States)

    Wang, Ximing; Fernandez, James; Deshpande, Ruchi; Lee, Joon K.; Chan, Tao; Liu, Brent

    2012-02-01

    Acute intracranial hemorrhage (AIH) requires urgent diagnosis in the emergency setting to mitigate eventual sequelae. However, experienced radiologists may not always be available to make a timely diagnosis. This is especially true for small AIH, defined as lesions smaller than 10 mm in size. A computer-aided detection (CAD) system for the detection of small AIH would facilitate timely diagnosis. A previously developed 2D algorithm shows high false positive rates in the evaluation based on LAC/USC cases, due to the difficulty of setting up a correct coordinate system for the knowledge-based classification system. To achieve higher sensitivity and specificity, a new 3D algorithm was developed. The algorithm utilizes a top-hat transformation and a dynamic threshold map to detect small AIH lesions. Several key structures of the brain are detected and used to set up a 3D anatomical coordinate system. A rule-based classification of the detected lesions is applied based on the anatomical coordinate system. For convenient evaluation in the clinical environment, the CAD module is integrated with a stand-alone system. The CAD is evaluated using small AIH cases and matched normal cases collected at LAC/USC, and the results of the 3D CAD and the previous 2D CAD are compared.

  14. Development and image quality assessment of a contrast-enhancement algorithm for display of digital chest radiographs

    International Nuclear Information System (INIS)

    Rehm, K.

    1992-01-01

    This dissertation presents a contrast-enhancement algorithm Artifact-Suppressed Adaptive Histogram Equalization (ASAHE). This algorithm was developed as part of a larger effort to replace the film radiographs currently used in radiology departments with digital images. Among the expected benefits of digital radiology are improved image management and greater diagnostic accuracy. Film radiographs record X-ray transmission data at high spatial resolution, and a wide dynamic range of signal. Current digital radiography systems record an image at reduced spatial resolution and with coarse sampling of the available dynamic range. These reductions have a negative impact on diagnostic accuracy. The contrast-enhancement algorithm presented in this dissertation is designed to boost diagnostic accuracy of radiologists using digital images. The ASAHE algorithm is an extension of an earlier technique called Adaptive Histogram Equalization (AHE). The AHE algorithm is unsuitable for chest radiographs because it over-enhances noise, and introduces boundary artifacts. The modifications incorporated in ASAHE suppress the artifacts and allow processing of chest radiographs. This dissertation describes the psychophysical methods used to evaluate the effects of processing algorithms on human observer performance. An experiment conducted with anthropomorphic phantoms and simulated nodules showed the ASAHE algorithm to be superior for human detection of nodules when compared to a computed radiography system's algorithm that is in current use. An experiment conducted using clinical images demonstrating pneumothoraces (partial lung collapse) indicated no difference in human observer accuracy when ASAHE images were compared to computed radiography images, but greater ease of diagnosis when ASAHE images were used. These results provide evidence to suggest that Artifact-Suppressed Adaptive Histogram Equalization can be effective in increasing diagnostic accuracy and efficiency

  15. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Full Text Available Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution algorithm (DE) and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
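
    As background for readers unfamiliar with the metaheuristic, the sketch below shows a generic DE/rand/1/bin loop on a toy continuous objective; the paper's discrete nurse-scheduling encoding, constraints, and parameter settings are not reproduced here.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch for minimizing a continuous objective f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: combine three distinct individuals other than i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one gene taken from the mutant
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:          # greedy selection
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Toy usage: minimize the 5-dimensional sphere function
x_best, f_best = differential_evolution(lambda x: float(np.sum(x ** 2)), [(-5, 5)] * 5)
print(x_best, f_best)
```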

  16. Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection

    Directory of Open Access Journals (Sweden)

    Yi-Shan Lin

    2017-01-01

    Full Text Available Since frequent communication between applications takes place in high speed networks, deep packet inspection (DPI) plays an important role in the network application awareness. The signature-based network intrusion detection system (NIDS) contains a DPI technique that examines the incoming packet payloads by employing a pattern matching algorithm that dominates the overall inspection performance. Existing studies focused on implementing efficient pattern matching algorithms by parallel programming on software platforms because of the advantages of lower cost and higher scalability. Either the central processing unit (CPU) or the graphic processing unit (GPU) were involved. Our studies focused on designing a pattern matching algorithm based on the cooperation between both CPU and GPU. In this paper, we present an enhanced design for our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA). In the preliminary experiment, the performance and comparison with the previous work are displayed, and the experimental results show that the LHPMA can achieve not only effective CPU/GPU cooperation but also higher throughput than the previous method.

  17. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  18. A simple greedy algorithm for dynamic graph orientation

    DEFF Research Database (Denmark)

    Berglin, Edvin; Brodal, Gerth Stølting

    2017-01-01

    Graph orientations with low out-degree are one of several ways to efficiently store sparse graphs. If the graphs allow for insertion and deletion of edges, one may have to flip the orientation of some edges to prevent blowing up the maximum out-degree. We use arboricity as our sparsity measure. With an immensely simple greedy algorithm, we get parametrized trade-off bounds between out-degree and worst case number of flips, which previously only existed for amortized number of flips. We match the previous best worst-case algorithm (in O(log n) flips) for general arboricity and beat it for either constant or super-logarithmic arboricity. We also match a previous best amortized result for at least logarithmic arboricity, and give the first results with worst-case O(1) and O(sqrt(log n)) flips nearly matching degree bounds to their respective amortized solutions.
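
    For intuition, here is a minimal sketch of one plausible greedy rule (orient each new edge out of the endpoint with the smaller out-degree, and flip a vertex's out-edges when a chosen bound is exceeded); the bound `max_out` and the flipping rule are illustrative assumptions, not the trade-off analyzed in the paper.

```python
from collections import defaultdict

class GreedyOrientation:
    """Maintain an orientation of a dynamic undirected graph with bounded out-degree (sketch)."""

    def __init__(self, max_out=4):
        self.max_out = max_out            # hypothetical out-degree bound
        self.out = defaultdict(set)       # out[u] = vertices v with edge oriented u -> v
        self.flips = 0

    def insert(self, u, v):
        # orient the new edge out of the endpoint with smaller current out-degree
        a, b = (u, v) if len(self.out[u]) <= len(self.out[v]) else (v, u)
        self.out[a].add(b)
        if len(self.out[a]) > self.max_out:
            for w in list(self.out[a]):   # flip every out-edge of the overloaded vertex
                self.out[a].remove(w)
                self.out[w].add(a)
                self.flips += 1

    def delete(self, u, v):
        if v in self.out[u]:
            self.out[u].remove(v)
        else:
            self.out[v].discard(u)

g = GreedyOrientation(max_out=2)
for e in [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]:
    g.insert(*e)
print(dict(g.out), "flips:", g.flips)
```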

  19. Evaluation of in silico algorithms for use with ACMG/AMP clinical variant interpretation guidelines.

    Science.gov (United States)

    Ghosh, Rajarshi; Oak, Ninad; Plon, Sharon E

    2017-11-28

    The American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP) variant classification guidelines for clinical reporting are widely used in diagnostic laboratories for variant interpretation. The ACMG/AMP guidelines recommend complete concordance of predictions among all in silico algorithms used without specifying the number or types of algorithms. The subjective nature of this recommendation contributes to discordance of variant classification among clinical laboratories and prevents definitive classification of variants. Using 14,819 benign or pathogenic missense variants from the ClinVar database, we compared performance of 25 algorithms across datasets differing in distinct biological and technical variables. There was wide variability in concordance among different combinations of algorithms with particularly low concordance for benign variants. We also identify a previously unreported source of error in variant interpretation (false concordance) where concordant in silico predictions are opposite to the evidence provided by other sources. We identified recently developed algorithms with high predictive power and robust to variables such as disease mechanism, gene constraint, and mode of inheritance, although poorer performing algorithms are more frequently used based on review of the clinical genetics literature (2011-2017). Our analyses identify algorithms with high performance characteristics independent of underlying disease mechanisms. We describe combinations of algorithms with increased concordance that should improve in silico algorithm usage during assessment of clinically relevant variants using the ACMG/AMP guidelines.

  20. A unified analysis of FBP-based algorithms in helical cone-beam and circular cone- and fan-beam scans

    International Nuclear Information System (INIS)

    Pan Xiaochuan; Xia Dan; Zou Yu; Yu Lifeng

    2004-01-01

    A circular scanning trajectory is and will likely remain a popular choice of trajectory in computed tomography (CT) imaging because it is easy to implement and control. Filtered-backprojection (FBP)-based algorithms have been developed previously for approximate and exact reconstruction of the entire image or a region of interest within the image in circular cone-beam and fan-beam cases. Recently, we have developed a 3D FBP-based algorithm for image reconstruction on PI-line segments in a helical cone-beam scan. In this work, we demonstrated that the 3D FBP-based algorithm indeed provided a rather general formulation for image reconstruction from divergent projections (such as cone-beam and fan-beam projections). On the basis of this formulation we derived new approximate or exact algorithms for image reconstruction in circular cone-beam or fan-beam scans, which can be interpreted as special cases of the helical scan. Existing algorithms corresponding to the derived algorithms were identified. We also performed a preliminary numerical study to verify our theoretical results in each of the cases. The results in the work can readily be generalized to other non-circular trajectories

  1. Recent developments in structure-preserving algorithms for oscillatory differential equations

    CERN Document Server

    Wu, Xinyuan

    2018-01-01

    The main theme of this book is recent progress in structure-preserving algorithms for solving initial value problems of oscillatory differential equations arising in a variety of research areas, such as astronomy, theoretical physics, electronics, quantum mechanics and engineering. It systematically describes the latest advances in the development of structure-preserving integrators for oscillatory differential equations, such as structure-preserving exponential integrators, functionally fitted energy-preserving integrators, exponential Fourier collocation methods, trigonometric collocation methods, and symmetric and arbitrarily high-order time-stepping methods. Most of the material presented here is drawn from the recent literature. Theoretical analysis of the newly developed schemes shows their advantages in the context of structure preservation. All the new methods introduced in this book are proven to be highly effective compared with the well-known codes in the scientific literature. This book also addre...

  2. MicroTrack: an algorithm for concurrent projectome and microstructure estimation.

    Science.gov (United States)

    Sherbondy, Anthony J; Rowe, Matthew C; Alexander, Daniel C

    2010-01-01

    This paper presents MicroTrack, an algorithm that combines global tractography and direct microstructure estimation using diffusion-weighted imaging data. Previous work recovers connectivity via tractography independently from estimating microstructure features, such as axon diameter distribution and density. However, the two estimates have great potential to inform one another given the common assumption that microstructural features remain consistent along fibers. Here we provide a preliminary examination of this hypothesis. We adapt a global tractography algorithm to associate axon diameter with each putative pathway and optimize both the set of pathways and their microstructural parameters to find the best fit of this holistic white-matter model to the MRI data. We demonstrate in simulation that, with a multi-shell HARDI acquisition, this approach not only improves estimates of microstructural parameters over voxel-by-voxel estimation, but provides a solution to long standing problems in tractography. In particular, a simple experiment demonstrates the resolution of the well known ambiguity between crossing and kissing fibers. The results strongly motivate further development of this kind of algorithm for brain connectivity mapping.

  3. Warehouse sizing algorithm for edification works of construction sector

    Directory of Open Access Journals (Sweden)

    Andres Mauricio Hualpa Zuñiga

    2015-08-01

    Full Text Available This article develops an algorithm for sizing storage spaces in companies in the construction sector. The problem arises from the informal way in which storage areas are usually assigned, without considering parameters such as the construction stage, the characteristics of the products, and the layout of the work area. A previous study identified that this informality in assigning storage areas produces poor capacity-utilization rates and the delivery of incomplete orders. The design of the algorithm is supported by a comprehensive sizing model expressed as a system of equations in material quantities, volumes, and dimensions, from which the necessary storage area is established. The algorithm is implemented in a programming language so that the resulting storage-space sizing can be presented graphically. The results are validated by evaluating storage capacity utilization and completely delivered orders for different cargo units, and improvements in these indicators are shown.
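
    The kind of quantity/volume/dimension computation the abstract describes can be illustrated with a toy sizing function; the field names, stacking rule, and aisle factor below are hypothetical and stand in for the published system of equations.

```python
def required_storage_area(materials, aisle_factor=1.3):
    """Rough floor-area estimate for a list of materials (hypothetical model)."""
    area = 0.0
    for m in materials:
        stacks = -(-m["quantity"] // m["stack_height"])   # ceiling division: number of floor stacks
        area += stacks * m["footprint_m2"]
    return area * aisle_factor                            # allowance for aisles and handling space

materials = [
    {"name": "cement pallets", "quantity": 40, "footprint_m2": 1.2, "stack_height": 2},
    {"name": "rebar bundles", "quantity": 15, "footprint_m2": 3.0, "stack_height": 1},
]
print(f"Required storage area: {required_storage_area(materials):.1f} m2")
```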

  4. Development of an algorithm for X-ray exposures using the Panasonic UD-802A thermoluminescent dosemeter

    International Nuclear Information System (INIS)

    McKittrick, Leo; Currivan, Lorraine; Pollard, David; Nicholls, Colyn; Romero, A.M.; Palethorpe, Jeffrey

    2008-01-01

    Full text: As part of its continuous quality improvement, the Dosimetry Service of the Radiological Protection Institute of Ireland (RPII), in conjunction with Panasonic Industrial Europe (UK), has further investigated the use of the standard Panasonic algorithm for X-ray exposures using the Panasonic UD-802A TL dosemeter. Originally developed to satisfy the obsolete standard ANSI 13.11-1983, the standard Panasonic dose algorithm has undergone several revisions, such as HPS N13.11-2001. This paper presents a dose algorithm that can be used to correct the dose response at low energies, such as X-ray radiation, using a four-element TL dosemeter, exploiting the different behaviour of its two independent phosphors. A series of irradiations with a range of energies from N-20 up to Co-60 was carried out, with particular interest in the responses to X-ray irradiations. Irradiations were performed at RRPPS, University Hospital Birmingham NHS Foundation Trust, U.K.; HPA, U.K.; and CIEMAT, Madrid, Spain. Different irradiation conditions were employed, including X-rays from narrow and wide spectra as described by ISO 4037-1 (1996), on an ISO water slab phantom and a PMMA slab phantom respectively. Using the UD-802A TLD and UD-854AT hanger combination, the response data from the series of irradiations were used to validate and, if necessary, modify the photon/beta branches of the algorithm to: 1. best estimate Hp(10) and Hp(0.07); 2. provide information on irradiation energies; 3. allow verification by performance tests. This work further advances the algorithm developed at CIEMAT, in which a best-fit polynomial trend is fitted to the dose response variations between the independent phosphors. (author)

  5. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    Science.gov (United States)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods for attacking cipher texts have also been developed. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of cipher texts and also in encryption ciphers. This paper analyses the possibility of using a genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and also of using a three-stage pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide fast encryption and storage/transmission of a large amount of data.

  6. The study of Kruskal's and Prim's algorithms on the Multiple Instruction and Single Data stream computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2015-01-01

    Full Text Available Bauman Moscow State Technical University is implementing a project to develop the operating principles of a computer system with a radically new architecture. A working model of the system allowed us to evaluate the efficiency of the developed hardware and software. The experimental results presented in previous studies, as well as the analysis of the operating principles of the new computer system, permit conclusions to be drawn regarding its efficiency in solving discrete optimization problems related to the processing of sets. The new architecture is based on direct hardware support for operations of discrete mathematics, which is reflected in the use of special facilities for processing sets and data structures. Within the framework of the project a special device was designed, a structure processor (SP), which improved performance without limiting the scope of applications of such a computer system. Previous works presented the basic principles of the organization of the computational process in the MISD (Multiple Instruction, Single Data) system, and showed the structure and features of the structure processor and the general principles for solving discrete optimization problems on graphs. This paper examines two minimum spanning tree search algorithms, namely Kruskal's and Prim's algorithms. It studies implementations of the algorithms for two SP operation modes: coprocessor mode and MISD mode. The paper presents the results of an experimental comparison of the MISD system's performance in coprocessor mode with that of mainframes.
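
    As a sequential reference point for the record above, the textbook Kruskal's algorithm with union-find is sketched below in plain Python; it bears no relation to the structure-processor or MISD implementation being benchmarked.

```python
def kruskal(n, edges):
    """Textbook Kruskal's minimum spanning tree; edges are (weight, u, v) over vertices 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # consider edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                        # keep the edge only if it joins two components
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges))                    # MST of total weight 6
```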

  7. Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms

    Directory of Open Access Journals (Sweden)

    Jose R. Celaya

    2013-01-01

    Full Text Available As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator.

  8. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions of the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  9. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    International Nuclear Information System (INIS)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun

    2013-01-01

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions of the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task

  10. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    Energy Technology Data Exchange (ETDEWEB)

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real-time performance? This report presents the results of a three-year algorithmic and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) a process for identifying contamination events using sparse observations, (3) characterization of uncertainty through developing accurate demand forecasts and through investigating uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.

  11. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Martin-Haugh, Stewart

    2014-01-01

    A description of the algorithms and the performance of the ATLAS Inner Detector trigger for LHC Run 1 are presented, as well as prospects for a redesign of the tracking algorithms in Run 2. The Inner Detector trigger algorithms are vital for many trigger signatures at ATLAS. The performance of the algorithms for electrons is presented. The ATLAS trigger software will be restructured from two software levels into a single stage which poses a big challenge for the trigger algorithms in terms of execution time and maintaining the physics performance. Expected future improvements in the timing and efficiencies of the Inner Detector triggers are discussed, utilising the planned merging of the current two stages of the ATLAS trigger.

  12. Development and performance analysis of a lossless data reduction algorithm for voip

    International Nuclear Information System (INIS)

    Misbahuddin, S.; Boulejfen, N.

    2014-01-01

    VoIP (Voice Over IP) is becoming an alternative way of voice communications over the Internet. To better utilize voice call bandwidth, some standard compression algorithms are applied in VoIP systems. However, these algorithms affect the voice quality with high compression ratios. This paper presents a lossless data reduction technique to improve VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in digitized VFs (Voice Frames) generated by VoIP systems. Performance of proposed data reduction algorithm has been presented in terms of compression ratio. The proposed algorithm will help retain the voice quality along with the improvement in VoIP data transfer rates. (author)
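
    The abstract does not spell out the redundancy-removal scheme itself, so the sketch below shows a generic lossless reduction of a digitized voice frame (delta coding followed by run-length encoding) and the compression ratio it achieves; it is an illustration of the kind of metric reported, not the proposed algorithm.

```python
def delta_rle_encode(frame: bytes) -> bytes:
    """Generic lossless sketch: delta-code consecutive samples, then run-length encode."""
    if not frame:
        return b""
    deltas = bytes([frame[0]] + [(frame[i] - frame[i - 1]) & 0xFF for i in range(1, len(frame))])
    out = bytearray()
    i = 0
    while i < len(deltas):
        run = 1
        while i + run < len(deltas) and deltas[i + run] == deltas[i] and run < 255:
            run += 1
        out += bytes([run, deltas[i]])      # (run length, delta value) pairs
        i += run
    return bytes(out)

frame = bytes([128] * 40 + list(range(128, 168)))   # hypothetical 80-byte voice frame
encoded = delta_rle_encode(frame)
print(f"compression ratio: {len(frame) / len(encoded):.2f}")
```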

  13. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    Science.gov (United States)

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), whereas root mean square error of prediction (RMSEP) was used as a fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, whereas GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set out of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.

  14. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

    Science.gov (United States)

    Gulshan, Varun; Peng, Lily; Coram, Marc; Stumpe, Martin C; Wu, Derek; Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Widner, Kasumi; Madams, Tom; Cuadros, Jorge; Kim, Ramasamy; Raman, Rajiv; Nelson, Philip C; Mega, Jessica L; Webster, Dale R

    2016-12-13

    Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Deep learning-trained algorithm. The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0

  15. A Novel User Classification Method for Femtocell Network by Using Affinity Propagation Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Afaz Uddin Ahmed

    2014-01-01

    Full Text Available An artificial neural network (ANN) and affinity propagation (AP) algorithm based user categorization technique is presented. The proposed algorithm is designed for closed access femtocell network. ANN is used for user classification process and AP algorithm is used to optimize the ANN training process. AP selects the best possible training samples for faster ANN training cycle. The users are distinguished by using the difference of received signal strength in a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without AP algorithm takes 5 indoor users and 10 outdoor users to attain an error-free operation. While integrating AP algorithm with ANN, the system takes 60% less training samples reducing the training time up to 50%. This procedure makes the femtocell more effective for closed access operation.

  16. A Novel User Classification Method for Femtocell Network by Using Affinity Propagation Algorithm and Artificial Neural Network

    Science.gov (United States)

    Ahmed, Afaz Uddin; Tariqul Islam, Mohammad; Ismail, Mahamod; Kibria, Salehin; Arshad, Haslina

    2014-01-01

    An artificial neural network (ANN) and affinity propagation (AP) algorithm based user categorization technique is presented. The proposed algorithm is designed for closed access femtocell network. ANN is used for user classification process and AP algorithm is used to optimize the ANN training process. AP selects the best possible training samples for faster ANN training cycle. The users are distinguished by using the difference of received signal strength in a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without AP algorithm takes 5 indoor users and 10 outdoor users to attain an error-free operation. While integrating AP algorithm with ANN, the system takes 60% less training samples reducing the training time up to 50%. This procedure makes the femtocell more effective for closed access operation. PMID:25133214

  17. Development of a 3D muon disappearance algorithm for muon scattering tomography

    Science.gov (United States)

    Blackwell, T. B.; Kudryavtsev, V. A.

    2015-05-01

    Upon passing through a material, muons lose energy, scatter off nuclei and atomic electrons, and can stop in the material. Muons will more readily lose energy in higher density materials. Therefore multiple muon disappearances within a localized volume may signal the presence of high-density materials. We have developed a new technique that improves the sensitivity of standard muon scattering tomography. This technique exploits these muon disappearances to perform non-destructive assay of an inspected volume. Muons that disappear have their track evaluated using a 3D line extrapolation algorithm, which is in turn used to construct a 3D tomographic image of the inspected volume. Results of Monte Carlo simulations that measure muon disappearance in different types of target materials are presented. The ability to differentiate between different density materials using the 3D line extrapolation algorithm is established. Finally the capability of this new muon disappearance technique to enhance muon scattering tomography techniques in detecting shielded HEU in cargo containers has been demonstrated.

  18. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The need for a train detection algorithm that copes with increased rail noise, as when railway lines run close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression for the information statistic is adjusted. We present the results of algorithm research and t...

  19. Novel search algorithms for a mid-infrared spectral library of cotton contaminants.

    Science.gov (United States)

    Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A

    2008-06-01

    During harvest, a variety of plant based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performances. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. This group algorithm was able to identify
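
    Three of the standard metrics named above (dot product, correlation, and sum of absolute differences) and a toy frequency vote over their top hits are sketched below; the actual USDA library, spectral preprocessing, and category-weighted voting schemes are not reproduced.

```python
import numpy as np

def dot_score(a, b):        # normalized dot product; higher means more similar
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def corr_score(a, b):       # Pearson correlation; higher means more similar
    return float(np.corrcoef(a, b)[0, 1])

def abs_diff_score(a, b):   # sum of absolute differences; negated so higher means more similar
    return -float(np.sum(np.abs(a - b)))

def vote_search(query, library, metrics=(dot_score, corr_score, abs_diff_score)):
    """Each metric votes for its best-matching library entry; return the vote counts."""
    votes = {}
    for metric in metrics:
        best = max(library, key=lambda name: metric(query, library[name]))
        votes[best] = votes.get(best, 0) + 1
    return votes

rng = np.random.default_rng(1)
library = {name: rng.random(100) for name in ("leaf", "stem", "seed coat", "hull")}
query = library["stem"] + 0.05 * rng.random(100)    # noisy copy of a known reference spectrum
print(vote_search(query, library))
```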

  20. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    Science.gov (United States)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a modified version of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and the essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process on the Advanced Wavefront Sensing and Correction Testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, the fiducial calibration using the Range-Gate-Metrology technique is carried out, and a <10 nm or <1% algorithm accuracy is demonstrated.

  1. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  2. METHODOLOGICAL GROUNDS ABOUT ALGORITHM OF DEVELOPMENT ORGANIZATIONAL ANALYSIS OF RAILWAYS OPERATION

    Directory of Open Access Journals (Sweden)

    H. D. Eitutis

    2010-12-01

    Full Text Available It is established that the initial stage of reorganization is to carry out a diagnosis of the enterprise, on the basis of which a decision on developing an algorithm for structural transformation is made. Organizational and management analysis is an important component of this diagnosis and is aimed at defining the methods and principles of the enterprise management system. The results of the organizational analysis carried out make it possible to identify the problems and bottlenecks in the strategic management system of Ukrainian railways as a whole and in the various areas of their economic activity.

  3. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  4. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  5. Marshall Rosenbluth and the Metropolis algorithm

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    2005-01-01

    The 1953 publication, 'Equation of State Calculations by Fast Computing Machines' by N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller [J. Chem. Phys. 21, 1087 (1953)] marked the beginning of the use of the Monte Carlo method for solving problems in the physical sciences. The method described in this publication subsequently became known as the Metropolis algorithm, undoubtedly the most famous and most widely used Monte Carlo algorithm ever published. As none of the authors made subsequent use of the algorithm, they became unknown to the large simulation physics community that grew from this publication and their roles in its development became the subject of mystery and legend. At a conference marking the 50th anniversary of the 1953 publication, Marshall Rosenbluth gave his recollections of the algorithm's development. The present paper describes the algorithm, reconstructs the historical context in which it was developed, and summarizes Marshall's recollections.

  6. Parameter estimation by Differential Search Algorithm from horizontal loop electromagnetic (HLEM) data

    Science.gov (United States)

    Alkan, Hilal; Balkaya, Çağlayan

    2018-02-01

    We present an efficient inversion tool for parameter estimation from horizontal loop electromagnetic (HLEM) data using the Differential Search Algorithm (DSA), a swarm-intelligence-based metaheuristic proposed recently. The depth, dip, and origin of a thin subsurface conductor causing the anomaly are the parameters estimated by the HLEM method, commonly known as Slingram. The applicability of the developed scheme was first tested on two synthetically generated anomalies, with and without noise content. Two control parameters affecting the convergence characteristics of the algorithm were tuned for these anomalies, which include one and two conductive bodies, respectively. The tuned control parameters yielded better statistical results than the parameter pairs widely used in DSA applications. Two field anomalies measured over a dipping graphitic shale from Northern Australia were then considered, and the algorithm provided depth estimates in good agreement with those of previous studies and with drilling information. Furthermore, the efficiency and reliability of the results obtained were investigated via probability density functions. Considering the results obtained, we conclude that DSA, characterized by a simple algorithmic structure, is an efficient and promising metaheuristic for other relatively low-dimensional geophysical inverse problems. Finally, since the developed scheme is easy to use and flexible, researchers familiar with its content can readily modify and extend it for their own scientific optimization problems.

  7. Conformational Space Annealing explained: A general optimization algorithm, with diverse applications

    Science.gov (United States)

    Joung, InSuk; Kim, Jong Yun; Gross, Steven P.; Joo, Keehyoung; Lee, Jooyoung

    2018-02-01

    Many problems in science and engineering can be formulated as optimization problems. One way to solve these problems is to develop tailored problem-specific approaches. As such development is challenging, an alternative is to develop good generally-applicable algorithms. Such algorithms are easy to apply, typically function robustly, and reduce development time. Here we provide a description of one such algorithm, called Conformational Space Annealing (CSA), along with its Python version, PyCSA. We previously applied it to many optimization problems including protein structure prediction and graph community detection. To demonstrate its utility, we have applied PyCSA to two continuous test functions, namely the Ackley and Eggholder functions. In addition, to demonstrate the complete generality of PyCSA with respect to the type of objective function, we show how PyCSA can be applied to a discrete objective function, namely a parameter optimization problem. Based on the benchmarking results for the three problems, the performance of CSA is shown to be better than or similar to that of the most popular optimization method, simulated annealing. For continuous objective functions, we found that L-BFGS-B was the best-performing local optimization method, while for a discrete objective function Nelder-Mead was the best. The current version of PyCSA can be run in parallel at a coarse-grained level by running multiple independent local optimizations separately. The source code of PyCSA is available from http://lee.kias.re.kr.
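
    The Ackley test function used in the continuous benchmark is easy to write down; a minimal definition follows (PyCSA itself is not reproduced here).

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Ackley test function; the global minimum is f(0, ..., 0) = 0."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(c * x)) / n) + a + np.e)

print(ackley([0.0, 0.0]))     # ~0.0 at the global minimum
print(ackley([1.5, -2.3]))    # a typical non-optimal value
```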

  8. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and people's rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms, and the protection of valuable items. Using RFID technology for localization is a new research direction for many institutions and scholars. RFID positioning systems are stable, have small errors, and are low-cost, and their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID positioning methods are introduced; secondly, a higher-accuracy location method is discussed; finally, the LANDMARC algorithm is described. This shows that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, requirements for follow-up study are put forward, and a vision of better future RFID positioning technology is presented.
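
    The LANDMARC approach mentioned at the end locates a tracked tag by comparing its reader-by-reader signal strengths with those of fixed reference tags and weighting the k nearest; a minimal sketch with hypothetical readings follows.

```python
import numpy as np

def landmarc_locate(target_rssi, ref_rssi, ref_positions, k=3):
    """LANDMARC-style location estimate: weight the k reference tags whose RSSI
    vectors (one reading per reader) are closest to the target tag's readings."""
    target_rssi = np.asarray(target_rssi, dtype=float)
    ref_rssi = np.asarray(ref_rssi, dtype=float)
    ref_positions = np.asarray(ref_positions, dtype=float)
    dists = np.linalg.norm(ref_rssi - target_rssi, axis=1)   # distance in signal space
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] ** 2 + 1e-9)             # closer references weigh more
    weights /= weights.sum()
    return weights @ ref_positions[nearest]

# Hypothetical setup: 4 readers, 5 reference tags on a 2D floor plan (metres)
ref_rssi = [[-40, -55, -60, -70], [-50, -45, -65, -72], [-60, -58, -48, -66],
            [-66, -70, -52, -50], [-72, -64, -60, -45]]
ref_positions = [[0, 0], [0, 4], [4, 0], [4, 4], [8, 4]]
print(landmarc_locate([-52, -50, -62, -69], ref_rssi, ref_positions))
```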

  9. Multi-scale graph-cut algorithm for efficient water-fat separation.

    Science.gov (United States)

    Berglund, Johan; Skorpil, Mikael

    2017-09-01

    To improve the accuracy and robustness to noise in water-fat separation by unifying the multiscale and graph cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) of the reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  10. Evaluation of nine HIV rapid test kits to develop a national HIV testing algorithm in Nigeria

    Directory of Open Access Journals (Sweden)

    Orji Bassey

    2015-05-01

    Full Text Available Background: Non-cold chain-dependent HIV rapid testing has been adopted in many resource-constrained nations as a strategy for reaching out to populations. HIV rapid test kits (RTKs) have the advantage of ease of use, low operational cost and short turnaround times. Before 2005, different RTKs had been used in Nigeria without formal evaluation. Between 2005 and 2007, a study was conducted to formally evaluate a number of RTKs and construct HIV testing algorithms. Objectives: The objectives of this study were to assess and select HIV RTKs and develop national testing algorithms. Method: Nine RTKs were evaluated using 528 well-characterised plasma samples. These comprised 198 HIV-positive specimens (37.5%) and 330 HIV-negative specimens (62.5%), collected nationally. Sensitivity and specificity were calculated with 95% confidence intervals for all nine RTKs singly and for serial and parallel combinations of six RTKs; and relative costs were estimated. Results: Six of the nine RTKs met the selection criteria, including minimum sensitivity and specificity (both ≥ 99.0%) requirements. There were no significant differences in sensitivities or specificities of RTKs in the serial and parallel algorithms, but the cost of RTKs in parallel algorithms was twice that in serial algorithms. Consequently, three serial algorithms, comprising four test kits (Bundi™, Determine™, Stat-Pak® and Uni-Gold™) with 100.0% sensitivity and 99.1% – 100.0% specificity, were recommended and adopted as national interim testing algorithms in 2007. Conclusion: This evaluation provides the first evidence for reliable combinations of RTKs for HIV testing in Nigeria. However, these RTKs need further evaluation in the field (Phase II) to re-validate their performance.
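
    The per-kit sensitivity and specificity with 95% confidence intervals reported here follow standard binomial-proportion formulas; the sketch below uses the normal (Wald) approximation with hypothetical hit counts, not the study's data.

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Point estimate and normal-approximation 95% confidence interval for a proportion."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical kit result: 197/198 true positives, 329/330 true negatives
sens, s_lo, s_hi = proportion_ci(197, 198)
spec, c_lo, c_hi = proportion_ci(329, 330)
print(f"sensitivity = {sens:.1%} (95% CI {s_lo:.1%}-{s_hi:.1%})")
print(f"specificity = {spec:.1%} (95% CI {c_lo:.1%}-{c_hi:.1%})")
```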

  11. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    Science.gov (United States)

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing amount of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves comparable or increased accuracy as memory-efficient algorithms that can be used to process a large amount of RNA-Seq data, and comparable or decreased accuracy as memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.

  12. A Numerical Instability in an ADI Algorithm for Gyrokinetics

    International Nuclear Information System (INIS)

    Belli, E.A.; Hammett, G.W.

    2004-01-01

    We explore the implementation of an Alternating Direction Implicit (ADI) algorithm for a gyrokinetic plasma problem and its resulting numerical stability properties. This algorithm, which uses a standard ADI scheme to divide the field solve from the particle distribution function advance, has previously been found to work well for certain plasma kinetic problems involving one spatial and two velocity dimensions, including collisions and an electric field. However, for the gyrokinetic problem we find a severe stability restriction on the time step. Furthermore, we find that this numerical instability limitation also affects some other algorithms, such as a partially implicit Adams-Bashforth algorithm, where the parallel motion operator v∥ ∂/∂z is treated implicitly and the field terms are treated with an Adams-Bashforth explicit scheme. Fully explicit algorithms applied to all terms can be better at long wavelengths than these ADI or partially implicit algorithms.

  13. Quantum algorithms for the ordered search problem via semidefinite programming

    International Nuclear Information System (INIS)

    Childs, Andrew M.; Landahl, Andrew J.; Parrilo, Pablo A.

    2007-01-01

    One of the most basic computational problems is the task of finding a desired item in an ordered list of N items. While the best classical algorithm for this problem uses log₂ N queries to the list, a quantum computer can solve the problem using a constant factor fewer queries. However, the precise value of this constant is unknown. By characterizing a class of quantum query algorithms for the ordered search problem in terms of a semidefinite program, we find quantum algorithms for small instances of the ordered search problem. Extending these algorithms to arbitrarily large instances using recursion, we show that there is an exact quantum ordered search algorithm using 4 log₆₀₅ N ≈ 0.433 log₂ N queries, which improves upon the previously best known exact algorithm.

  14. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Cheng, Siu-Wing

    2014-09-01

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomized, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε)).

  15. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Mencel, Liam A.

    2014-05-06

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomised, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε))

  16. A Faster Algorithm for Computing Straight Skeletons

    KAUST Repository

    Cheng, Siu-Wing; Mencel, Liam A.; Vigneron, Antoine E.

    2014-01-01

    We present a new algorithm for computing the straight skeleton of a polygon. For a polygon with n vertices, among which r are reflex vertices, we give a deterministic algorithm that reduces the straight skeleton computation to a motorcycle graph computation in O(n (log n) log r) time. It improves on the previously best known algorithm for this reduction, which is randomized, and runs in expected O(n √(h+1) log² n) time for a polygon with h holes. Using known motorcycle graph algorithms, our result yields improved time bounds for computing straight skeletons. In particular, we can compute the straight skeleton of a non-degenerate polygon in O(n (log n) log r + r^(4/3 + ε)) time for any ε > 0. On degenerate input, our time bound increases to O(n (log n) log r + r^(17/11 + ε)).

  17. Quantum signature scheme based on a quantum search algorithm

    International Nuclear Information System (INIS)

    Yoon, Chun Seok; Kang, Min Sung; Lim, Jong In; Yang, Hyung Jin

    2015-01-01

    We present a quantum signature scheme based on a two-qubit quantum search algorithm. For secure transmission of signatures, we use a quantum search algorithm that has not been used in previous quantum signature schemes. A two-step protocol secures the quantum channel, and a trusted center guarantees non-repudiation that is similar to other quantum signature schemes. We discuss the security of our protocol. (paper)

  18. Development of a stereolithography (STL input and computer numerical control (CNC output algorithm for an entry-level 3-D printer

    Directory of Open Access Journals (Sweden)

    Brown, Andrew

    2014-08-01

    Full Text Available This paper presents a prototype Stereolithography (STL) file format slicing and tool-path generation algorithm, which serves as a data front-end for a Rapid Prototyping (RP) entry-level three-dimensional (3-D) printer. Used mainly in Additive Manufacturing (AM), 3-D printers are devices that apply plastic, ceramic, and metal, layer by layer, in all three dimensions on a flat surface (X, Y, and Z axes). 3-D printers, unfortunately, cannot print an object without a special algorithm that is required to create the Computer Numerical Control (CNC) instructions for printing. An STL algorithm therefore forms a critical component for Layered Manufacturing (LM), also referred to as RP. The purpose of this study was to develop an algorithm that is capable of processing and slicing an STL file or multiple files, resulting in a tool-path, and finally compiling a CNC file for an entry-level 3-D printer. The prototype algorithm was implemented for an entry-level 3-D printer that utilises the Fused Deposition Modelling (FDM) process or Solid Freeform Fabrication (SFF) process; an AM technology. Following an experimental method, the full data flow path for the prototype algorithm was developed, starting with STL data files, and then processing the STL data file into a G-code file format by slicing the model and creating a tool-path. This layering method is used by most 3-D printers to turn a 2-D object into a 3-D object. The STL algorithm developed in this study presents innovative opportunities for LM, since it allows engineers and architects to transform their ideas easily into a solid model in a fast, simple, and cheap way. This is accomplished by allowing STL models to be sliced rapidly, effectively, and without error, and finally to be processed and prepared into a G-code print file.
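
    The core of any such slicer is the intersection of each STL triangle with a horizontal cutting plane, producing the line segments that are later chained into tool-path contours. A minimal sketch of that step follows; it assumes triangles are given as three (x, y, z) vertex tuples and is not the paper's implementation.

```python
def slice_triangle(tri, z):
    """Return the segment where a triangle crosses the plane Z = z, or None.

    `tri` is a tuple of three (x, y, z) vertices. Each edge that straddles the
    plane contributes one interpolated intersection point.
    """
    points = []
    for i in range(3):
        (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
        if (z0 - z) * (z1 - z) < 0:               # edge crosses the cutting plane
            t = (z - z0) / (z1 - z0)              # linear interpolation factor
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return tuple(points) if len(points) == 2 else None

# Example: one triangle sliced at layer height Z = 0.5.
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0))
print(slice_triangle(tri, 0.5))   # one segment of the contour at this layer
```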

  19. Developing Information Power Grid Based Algorithms and Software

    Science.gov (United States)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.

  20. Development and validation of a risk prediction algorithm for the recurrence of suicidal ideation among general population with low mood.

    Science.gov (United States)

    Liu, Y; Sareen, J; Bolton, J M; Wang, J L

    2016-03-15

    Suicidal ideation is one of the strongest predictors of recent and future suicide attempt. This study aimed to develop and validate a risk prediction algorithm for the recurrence of suicidal ideation among a population with low mood. 3035 participants from the U.S. National Epidemiologic Survey on Alcohol and Related Conditions with suicidal ideation at their lowest mood at baseline were included. The Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria, was used. Logistic regression modeling was conducted to derive the algorithm. Discrimination and calibration were assessed in the development and validation cohorts. In the development data, the proportion of recurrent suicidal ideation over 3 years was 19.5% (95% CI: 17.7, 21.5). The developed algorithm consisted of 6 predictors: age, feelings of emptiness, sudden mood changes, self-harm history, depressed mood in the past 4 weeks, and interference with social activities in the past 4 weeks because of physical health or emotional problems; emptiness was the most important risk factor. The model had good discriminative power (C statistic=0.8273, 95% CI: 0.8027, 0.8520). The C statistic was 0.8091 (95% CI: 0.7786, 0.8395) in the external validation dataset and was 0.8193 (95% CI: 0.8001, 0.8385) in the combined dataset. This study does not apply to people with suicidal ideation who are not depressed. The developed risk algorithm for predicting the recurrence of suicidal ideation has good discrimination and excellent calibration. Clinicians can use this algorithm to stratify the risk of recurrence in patients and thus improve personalized treatment approaches, provide advice and arrange further intensive monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.
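
    Risk algorithms of this type are typically a logistic regression over the selected predictors, with discrimination reported as the C statistic (area under the ROC curve) on a held-out cohort. The following is a generic scikit-learn sketch of that workflow using placeholder data rather than the survey variables used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the survey predictors (age, emptiness, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(3035, 6))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(size=3035) > 1.0).astype(int)

# Development / validation split, mirroring derivation vs. external validation cohorts.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_dev, y_dev)

# The C statistic is the area under the ROC curve of the predicted probabilities.
c_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
c_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"C statistic (development) = {c_dev:.3f}, (validation) = {c_val:.3f}")
```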

  1. Quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Shenvi, Neil; Whaley, K. Birgitta; Kempe, Julia

    2003-01-01

    Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speedup over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random-walk architecture that provides such a speedup. It will be shown that this algorithm performs an oracle search on a database of N items with O(√(N)) calls to the oracle, yielding a speedup similar to other quantum search algorithms. It appears that the quantum random-walk formulation has considerable flexibility, presenting interesting opportunities for development of other, possibly novel quantum algorithms

  2. Problem solving with genetic algorithms and Splicer

    Science.gov (United States)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
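
    As a concrete illustration of the basic concepts the paper introduces (populations, fitness-based selection, crossover and mutation), the following is a minimal generic genetic algorithm in Python. It is not Splicer itself, just the textbook loop such a tool packages up.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100, p_mut=0.02):
    """Maximize `fitness` over bit strings using selection, crossover and mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Example: the "one-max" problem, whose optimum is the all-ones string.
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```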

  3. Microstructure Reconstruction of Sheet Molding Composite Using a Random Chips Packing Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Tianyu; Xu, Hongyi; Chen, Wei

    2017-04-06

    Fiber-reinforced polymer composites are strong candidates for structural materials to replace steel and light alloys in lightweight vehicle design because of their low density and relatively high strength. In the integrated computational materials engineering (ICME) development of carbon fiber composites, microstructure reconstruction algorithms are needed to generate material microstructure representative volume elements (RVE) based on the material processing information. The microstructure RVE reconstruction enables material property prediction by finite element analysis (FEA). This paper presents an algorithm to reconstruct the microstructure of a chopped carbon fiber/epoxy laminate material system produced by compression molding, normally known as sheet molding compound (SMC). The algorithm takes the results from the material's manufacturing process as inputs, such as the orientation tensor of fibers, the chopped fiber sheet geometry, and the fiber volume fraction. The chopped fiber sheets are treated as deformable rectangle chips and a random packing algorithm is developed to pack these chips into a square plate. The RVE is built in a layer-by-layer fashion until the desired number of laminae is reached, then a fine tuning process is applied to finalize the reconstruction. Compared to previous methods, this new approach has the ability to model bent fibers by allowing a limited amount of overlap between rectangle chips. Furthermore, the method does not need SMC microstructure images, for which the image-based characterization techniques have not been mature enough, as inputs. Case studies are performed and the results show that the statistics of the reconstructed microstructures generated by the algorithm match well with the target input parameters from processing.

  4. Clinical effectiveness of a Bayesian algorithm for the diagnosis and management of heparin-induced thrombocytopenia.

    Science.gov (United States)

    Raschke, R A; Gallo, T; Curry, S C; Whiting, T; Padilla-Jones, A; Warkentin, T E; Puri, A

    2017-08-01

    Essentials We previously published a diagnostic algorithm for heparin-induced thrombocytopenia (HIT). In this study, we validated the algorithm in an independent large healthcare system. The accuracy was 98%, sensitivity 82% and specificity 99%. The algorithm has potential to improve accuracy and efficiency in the diagnosis of HIT. Background Heparin-induced thrombocytopenia (HIT) is a life-threatening drug reaction caused by antiplatelet factor 4/heparin (anti-PF4/H) antibodies. Commercial tests to detect these antibodies have suboptimal operating characteristics. We previously developed a diagnostic algorithm for HIT that incorporated 'four Ts' (4Ts) scoring and a stratified interpretation of an anti-PF4/H enzyme-linked immunosorbent assay (ELISA) and yielded a discriminant accuracy of 0.97 (95% confidence interval [CI], 0.93-1.00). Objectives The purpose of this study was to validate the algorithm in an independent patient population and quantitate effects that algorithm adherence could have on clinical care. Methods A retrospective cohort comprised patients who had undergone anti-PF4/H ELISA and serotonin release assay (SRA) testing in our healthcare system from 2010 to 2014. We determined the algorithm recommendation for each patient, compared recommendations with the clinical care received, and enumerated consequences of discrepancies. Operating characteristics were calculated for algorithm recommendations using SRA as the reference standard. Results Analysis was performed on 181 patients, 10 of whom were ruled in for HIT. The algorithm accurately stratified 98% of patients (95% CI, 95-99%), ruling out HIT in 158, ruling in HIT in 10 and recommending an SRA in 13 patients. Algorithm adherence would have obviated 165 SRAs and prevented 30 courses of unnecessary antithrombotic therapy for HIT. Diagnostic sensitivity was 0.82 (95% CI, 0.48-0.98), specificity 0.99 (95% CI, 0.97-1.00), PPV 0.90 (95% CI, 0.56-0.99) and NPV 0.99 (95% CI, 0.96-1.00). Conclusions An

  5. Algorithm of developing competitive strategies and the trends of realizing them for agricultural enterprises

    Directory of Open Access Journals (Sweden)

    Viktoriia Boiko

    2016-02-01

    Full Text Available The paper specifies basic stages of developing and realizing the strategy of enhancing competitiveness of enterprises and represents an appropriate algorithm. The study analyzes the economic indexes and results of the activity of the agrarian enterprises in Kherson region and provides competitive strategies of efficient development of agrarian enterprises with different levels of competitiveness and specifies the ways of realizing them which will contribute to the optimal use of the available strategic potential

  6. The development of a 3D mesoscopic model of metallic foam based on an improved watershed algorithm

    Science.gov (United States)

    Zhang, Jinhua; Zhang, Yadong; Wang, Guikun; Fang, Qin

    2018-06-01

    The watershed algorithm has been used widely in x-ray computed tomography (XCT) image segmentation. It provides a transformation defined on a grayscale image and finds the lines that separate adjacent regions. However, distortion occurs when developing a mesoscopic model of metallic foam based on XCT image data: the cells are oversegmented in some cases when the traditional watershed algorithm is used. The improved watershed algorithm presented in this paper can avoid oversegmentation and is composed of three steps. Firstly, it finds all of the connected cells and identifies the junctions of the corresponding cell walls. Secondly, image segmentation is conducted to separate the adjacent cells; this generates the lost cell walls between the adjacent cells, and optimization is then performed on the segmented image. Thirdly, the improved algorithm is validated by comparing it with the image of the metallic foam, which shows that it can avoid the image segmentation distortion. A mesoscopic model of metallic foam is thus formed based on the improved algorithm, and the mesoscopic characteristics of the metallic foam, such as cell size, volume and shape, are identified and analyzed.
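
    Oversegmentation is the classic failure mode of plain watershed, and the usual remedy, which this record's three-step method refines, is to seed the transform with one marker per true cell before flooding. Below is a generic marker-controlled watershed sketch with scikit-image, offered as an illustration of the general idea rather than the authors' specific algorithm.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def segment_cells(binary_mask, min_distance=10):
    """Marker-controlled watershed: one marker per local maximum of the distance map."""
    distance = ndi.distance_transform_edt(binary_mask)
    # One seed per cell interior; limiting seed density curbs oversegmentation.
    coords = peak_local_max(distance, min_distance=min_distance, labels=binary_mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=binary_mask)

# Toy example: two touching discs should come out as two labels, not many fragments.
yy, xx = np.mgrid[:80, :120]
mask = ((yy - 40) ** 2 + (xx - 45) ** 2 < 400) | ((yy - 40) ** 2 + (xx - 75) ** 2 < 400)
labels = segment_cells(mask)
print(np.unique(labels))   # background 0 plus (ideally) one label per disc
```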

  7. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    Science.gov (United States)

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  8. CiSE: a circular spring embedder layout algorithm.

    Science.gov (United States)

    Dogrusoz, Ugur; Belviranli, Mehmet E; Dilek, Alptug

    2013-06-01

    We present a new algorithm for automatic layout of clustered graphs using a circular style. The algorithm tries to determine optimal location and orientation of individual clusters intrinsically within a modified spring embedder. Heuristics such as reversal of the order of nodes in a cluster and swap of neighboring node pairs in the same cluster are employed intermittently to further relax the spring embedder system, resulting in reduced inter-cluster edge crossings. Unlike other algorithms generating circular drawings, our algorithm does not require the quotient graph to be acyclic, nor does it sacrifice the edge crossing number of individual clusters to improve respective positioning of the clusters. Moreover, it reduces the total area required by a cluster by using the space inside the associated circle. Experimental results show that the execution time and quality of the produced drawings with respect to commonly accepted layout criteria are quite satisfactory, surpassing previous algorithms. The algorithm has also been successfully implemented and made publicly available as part of a compound and clustered graph editing and layout tool named CHISIO.

  9. Selective epidemic vaccination under the performant routing algorithms

    Science.gov (United States)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of the routing algorithms strategies on the traffic-driven epidemic spreading has not received an adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows unexpectedly that these algorithms favor the virus spreading more than the case where the shortest path based algorithm is used. In this work, we studied the virus spreading in a complex network using the efficient path and the global dynamic routing algorithms as compared to shortest path strategy. Some previous studies have tried to modify the routing rules to limit the virus spreading, but at the expense of reducing the traffic transport efficiency. This work proposed a solution to overcome this drawback by using a selective vaccination procedure instead of a random vaccination used often in the literature. We found that the selective vaccination succeeded in eradicating the virus better than a pure random intervention for the performant routing algorithm strategies.

  10. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    Science.gov (United States)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.

  11. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
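
    All of the EM-type reconstructions compared here share the same multiplicative MLEM update: the current image estimate is forward-projected, the measured projections are divided by that estimate, and the ratio is back-projected and normalized. The following numpy sketch shows that generic update with a toy system matrix; it is not the fan-beam ray-tracing projector used in the study.

```python
import numpy as np

def mlem(system_matrix, projections, n_iter=50):
    """Basic MLEM reconstruction: x <- x * A^T(p / Ax) / A^T 1."""
    A = system_matrix                          # shape (n_bins, n_voxels)
    x = np.ones(A.shape[1])                    # start from a uniform image
    sensitivity = A.T @ np.ones(A.shape[0])    # back-projection of ones
    for _ in range(n_iter):
        forward = A @ x                        # forward projection of current estimate
        ratio = projections / np.maximum(forward, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Toy 1D example: 3 detector bins, 2 voxels.
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
true_image = np.array([2.0, 4.0])
print(mlem(A, A @ true_image))                 # converges towards [2, 4]
```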

  12. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to higher amount of traffic related activity on a global scale. ...

  13. Faster Algorithms for Computing Longest Common Increasing Subsequences

    DEFF Research Database (Denmark)

    Kutz, Martin; Brodal, Gerth Stølting; Kaligosi, Kanela

    2011-01-01

    We present algorithms for finding a longest common increasing subsequence of two or more input sequences. For two sequences of lengths n and m, where m⩾n, we present an algorithm with an output-dependent expected running time and O(m) space, where ℓ is the length of an LCIS, σ is the size of the alphabet, and Sort is the time to sort each input sequence. For k⩾3 length-n sequences we present an algorithm which improves the previous best bound by more than a factor k for many inputs. In both cases, our algorithms are conceptually quite simple but rely on existing sophisticated data structures. Finally, we introduce the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS), for which we present an algorithm for the 3-letter alphabet case. For the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small alphabets.
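
    For reference, the classical dynamic-programming baseline that these faster algorithms improve upon computes an LCIS of two sequences in O(nm) time and O(m) space. A short Python version of that baseline (not the paper's algorithm) is:

```python
def lcis_length(a, b):
    """Length of a longest common increasing subsequence of a and b in O(len(a)*len(b))."""
    dp = [0] * len(b)              # dp[j] = LCIS length ending at b[j]
    for x in a:
        best = 0                   # best dp[j] over positions with b[j] < x so far
        for j, y in enumerate(b):
            if y == x:
                dp[j] = max(dp[j], best + 1)
            elif y < x:
                best = max(best, dp[j])
    return max(dp, default=0)

print(lcis_length([3, 4, 9, 1], [5, 3, 8, 9, 4, 1]))   # -> 2 (e.g. the subsequence 3, 9)
```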

  14. Geostationary Sensor Based Forest Fire Detection and Monitoring: An Improved Version of the SFIDE Algorithm

    Directory of Open Access Journals (Sweden)

    Valeria Di Biase

    2018-05-01

    Full Text Available The paper aims to present the results obtained in the development of a system allowing for the detection and monitoring of forest fires and the continuous comparison of their intensity when several events occur simultaneously—a common occurrence in European Mediterranean countries during the summer season. The system, called SFIDE (Satellite FIre DEtection), exploits a geostationary satellite sensor (SEVIRI, Spinning Enhanced Visible and InfraRed Imager), on board the MSG (Meteosat Second Generation) satellite series. The algorithm was developed several years ago in the framework of a project (SIGRI) funded by the Italian Space Agency (ASI). This algorithm has been completely reviewed in order to enhance its efficiency by reducing the false alarm rate while preserving a high sensitivity. Due to the very low spatial resolution of SEVIRI images (4 × 4 km² at Mediterranean latitudes) the sensitivity of the algorithm must be very high to detect even small fires. The improvement of the algorithm has been obtained by introducing the sun elevation angle in the computation of the preliminary thresholds used to identify potential thermal anomalies (hot spots), and by introducing a contextual analysis in the detection of clouds and in the detection of night-time fires. The results of the algorithm have been validated in the Sardinia region by using ground truth data provided by the regional Corpo Forestale e di Vigilanza Ambientale (CFVA). A significant reduction of the commission error (less than 10%) has been obtained with respect to the previous version of the algorithm and also with respect to fire-detection algorithms based on low earth orbit satellites.

  15. Mechanical Model of Geometric Cell and Topological Algorithm for Cell Dynamics from Single-Cell to Formation of Monolayered Tissues with Pattern

    KAUST Repository

    Kachalo, Së ma; Naveed, Hammad; Cao, Youfang; Zhao, Jieling; Liang, Jie

    2015-01-01

    development, and other emerging behavior. Here we describe a cell model and an efficient geometric algorithm for studying the dynamic process of tissue formation in 2D (e.g. epithelial tissues). Our approach improves upon previous methods by incorporating

  16. Mobility-Assisted on-Demand Routing Algorithm for MANETs in the Presence of Location Errors

    Directory of Open Access Journals (Sweden)

    Trung Kien Vu

    2014-01-01

    Full Text Available We propose a mobility-assisted on-demand routing algorithm for mobile ad hoc networks in the presence of location errors. Location awareness enables mobile nodes to predict their mobility and enhances routing performance by estimating link duration and selecting reliable routes. However, measured locations intrinsically include errors in measurement. Such errors degrade mobility prediction and have been ignored in previous work. To mitigate the impact of location errors on routing, we propose an on-demand routing algorithm taking into account location errors. To that end, we adopt the Kalman filter to estimate accurate locations and consider route confidence in discovering routes. Via simulations, we compare our algorithm and previous algorithms in various environments. Our proposed mobility prediction is robust to the location errors.
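
    The location-error mitigation described here rests on a standard Kalman filter: each noisy position measurement is blended with a motion-model prediction in proportion to their uncertainties before link durations are estimated. A minimal 1D constant-velocity sketch follows; it is illustrative only and not the paper's exact filter or noise model.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=0.01, r=4.0):
    """1D constant-velocity Kalman filter over noisy position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                 # only position is measured
    Q = q * np.eye(2)                          # process noise covariance (assumed)
    R = np.array([[r]])                        # measurement noise covariance (assumed)
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                          # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)                  # update with measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# A node moving at 1 m/s whose reported positions carry Gaussian error.
true_pos = np.arange(20, dtype=float)
noisy = true_pos + np.random.default_rng(1).normal(0, 2, size=20)
print(np.round(kalman_track(noisy), 2))
```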

  17. Development of imaging and reconstructions algorithms on parallel processing architectures for applications in non-destructive testing

    International Nuclear Information System (INIS)

    Pedron, Antoine

    2013-01-01

    This thesis work is placed between the scientific domain of ultrasound non-destructive testing and algorithm-architecture adequation. Ultrasound non-destructive testing includes a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. In order to characterise possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST, within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstructions. General purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. These two algorithms differ in their parallelization scheme. The first one can be properly parallelized on GPP, whereas on GPU an intensive use of atomic instructions is required. Within the second algorithm, parallelism is easier to express, but loop ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary in order to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms in the CIVA software platform is proposed and different issues related to code maintenance and durability are discussed. (author) [fr

  18. Improved ultrashort pulse-retrieval algorithm for frequency-resolved optical gating

    International Nuclear Information System (INIS)

    DeLong, K.W.; Trebino, R.

    1994-01-01

    We report on significant improvements in the pulse-retrieval algorithm used to reconstruct the amplitude and the phase of ultrashort optical pulses from the experimental frequency-resolved optical gating trace data in the polarization-gate geometry. These improvements involve the use of an intensity constraint, an overcorrection technique, and a multidimensional minimization scheme. While the previously published, basic algorithm converged for most common ultrashort pulses, it failed to retrieve pulses with significant intensity substructure. The improved composite algorithm successfully converges for such pulses. It can now retrieve essentially all pulses of practical interest. We present examples of complex waveforms that were retrieved by the improved algorithm

  19. Zero-block mode decision algorithm for H.264/AVC.

    Science.gov (United States)

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm can achieve significant improvement in computation, but the computation performance is limited for high bit-rate coding. To improve computation efficiency, in this paper, we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation and incorporates two adequate decision methods into semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to the intramode prediction in the P frame. The enhanced zero-block decision algorithm brings about a reduction of, on average, 27% of the total encoding time compared to the zero-block decision algorithm.

  20. Optimization and experimental realization of the quantum permutation algorithm

    Science.gov (United States)

    Yalçınkaya, I.; Gedik, Z.

    2017-12-01

    The quantum permutation algorithm provides computational speed-up over classical algorithms for determining the parity of a given cyclic permutation. For its n -qubit implementations, the number of required quantum gates scales quadratically with n due to the quantum Fourier transforms included. We show here for the n -qubit case that the algorithm can be simplified so that it requires only O (n ) quantum gates, which theoretically reduces the complexity of the implementation. To test our results experimentally, we utilize IBM's 5-qubit quantum processor to realize the algorithm by using the original and simplified recipes for the 2-qubit case. It turns out that the latter results in a significantly higher success probability which allows us to verify the algorithm more precisely than the previous experimental realizations. We also verify the algorithm for the first time for the 3-qubit case with a considerable success probability by taking the advantage of our simplified scheme.

  1. Approximate Quantum Adders with Genetic Algorithms: An IBM Quantum Experience

    Directory of Open Access Journals (Sweden)

    Li Rui

    2017-07-01

    Full Text Available It has been proven that quantum adders are forbidden by the laws of quantum mechanics. We analyze theoretical proposals for the implementation of approximate quantum adders and optimize them by means of genetic algorithms, improving previous protocols in terms of efficiency and fidelity. Furthermore, we experimentally realize a suitable approximate quantum adder with the cloud quantum computing facilities provided by IBM Quantum Experience. The development of approximate quantum adders enhances the toolbox of quantum information protocols, paving the way for novel applications in quantum technologies.

  2. A new hybrid metaheuristic algorithm for wind farm micrositing

    International Nuclear Information System (INIS)

    Massan, S.U.R.; Wagan, A.I.; Shaikh, M.M.

    2017-01-01

    This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another face a power loss due to the obstruction of the wind caused by wake loss. It is required to reduce this wake loss by the effective placement of turbines using a new HMA. This HMA is derived from two basic algorithms, i.e. the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is undertaken on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed and the new algorithm HMA is implemented to maximize power and minimize cost in a WTO problem. The results obtained by HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The successfully calculated total power produced and cost per unit turbine for a wind farm using HMA, and their comparison with past approaches using single algorithms, have shown that there is a significant advantage in using the HMA as compared to the use of single algorithms. The first-time implementation of a new algorithm obtained by blending two single algorithms is a significant step towards learning the behavior of algorithms and their added advantages when used together. (author)
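
    Both this and the following record optimize turbine placement against the N.O. Jensen wake model, in which the wind speed at a distance x behind a turbine of rotor radius r0 and thrust coefficient Ct is reduced by a deficit that decays as the wake expands linearly. A small sketch of that deficit calculation follows, in its standard textbook form; the formula and the default parameter values are assumptions for illustration, not taken from the paper.

```python
def jensen_wake_speed(u0, x, r0=20.0, ct=0.88, k=0.075):
    """Wind speed at distance x directly downstream of a turbine (Jensen/Park model).

    u0: free-stream speed, r0: rotor radius, ct: thrust coefficient,
    k: wake decay constant (~0.075 onshore). Values here are illustrative defaults.
    """
    deficit = (1.0 - (1.0 - ct) ** 0.5) / (1.0 + k * x / r0) ** 2
    return u0 * (1.0 - deficit)

# Speed recovery behind a turbine in a 12 m/s wind at several downstream distances.
for x in (100.0, 300.0, 700.0):
    print(x, round(jensen_wake_speed(12.0, x), 2))
```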

  3. A New Hybrid Metaheuristic Algorithm for Wind Farm Micrositing

    Directory of Open Access Journals (Sweden)

    SHAFIQ-UR-REHMAN MASSAN

    2017-07-01

    Full Text Available This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another face a power loss due to the obstruction of the wind caused by wake loss. It is required to reduce this wake loss by the effective placement of turbines using a new HMA. This HMA is derived from two basic algorithms, i.e. the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is undertaken on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed and the new algorithm HMA is implemented to maximize power and minimize cost in a WTO problem. The results obtained by HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The successfully calculated total power produced and cost per unit turbine for a wind farm using HMA, and their comparison with past approaches using single algorithms, have shown that there is a significant advantage in using the HMA as compared to the use of single algorithms. The first-time implementation of a new algorithm obtained by blending two single algorithms is a significant step towards learning the behavior of algorithms and their added advantages when used together.

  4. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the data association problem in simultaneous localization and mapping (SLAM) for determining the route of unmanned aerial vehicles (UAVs). Such vehicles are already widely used, but they are mainly controlled by a remote operator, so an urgent task is to develop a control system that allows autonomous flight. The SLAM algorithm, which predicts the location, speed and flight parameters of the vehicle together with the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving the data association problem using an improved ant algorithm. Data association for SLAM means establishing a matching between the set of observed landmarks and the landmarks in the state vector. The ant algorithm is one of the widely used optimization algorithms, with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem for SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes. Adding random perturbations when updating the global pheromone helps to avoid local optima, and setting limits on the pheromone along a route increases the search space with a reasonable amount of computation for finding the optimal route. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm defines the targets in the matching space and the observed landmarks that can be associated according to the individual compatibility (IC) criterion. The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and

  5. A theoretical analysis of the median LMF adaptive algorithm

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen; Rusu, C.

    1999-01-01

    Higher order adaptive algorithms are sensitive to impulse interference. In the case of the LMF (Least Mean Fourth) algorithm, an easy and effective way to reduce this is to median filter the instantaneous gradient of the LMF algorithm. Although previously published simulations have indicated that this reduces the speed of convergence, no analytical studies have yet been made to prove this. In order to enhance the usability, this paper presents a convergence and steady-state analysis of the median LMF adaptive algorithm. As expected, this proves that the median LMF has a slower convergence and a lower steady

  6. Development and validation of a prediction algorithm for the onset of common mental disorders in a working population.

    Science.gov (United States)

    Fernandez, Ana; Salvador-Carulla, Luis; Choi, Isabella; Calvo, Rafael; Harvey, Samuel B; Glozier, Nicholas

    2018-01-01

    Common mental disorders are the most common reason for long-term sickness absence in most developed countries. Prediction algorithms for the onset of common mental disorders may help target indicated work-based prevention interventions. We aimed to develop and validate a risk algorithm to predict the onset of common mental disorders at 12 months in a working population. We conducted a secondary analysis of the Household, Income and Labour Dynamics in Australia Survey, a longitudinal, nationally representative household panel in Australia. Data from the 6189 working participants who did not meet the criteria for a common mental disorder at baseline were non-randomly split into training and validation databases, based on state of residence. Common mental disorders were assessed with the mental component score of the 36-Item Short Form Health Survey questionnaire (score ⩽45). Risk algorithms were constructed following recommendations made by the Transparent Reporting of a multivariable prediction model for Prevention Or Diagnosis statement. Different risk factors were identified among women and men for the final risk algorithms. In the training data, the model for women had a C-index of 0.73 and effect size (Hedges' g) of 0.91. In men, the C-index was 0.76 and the effect size was 1.06. In the validation data, the C-index was 0.66 for women and 0.73 for men, with positive predictive values of 0.28 and 0.26, respectively. Conclusion: It is possible to develop an algorithm with good discrimination for the onset of common mental disorders among working men, identifying overall and modifiable risks. Such models have the potential to change the way that prevention of common mental disorders at the workplace is conducted, but different models may be required for women.

  7. Development of an algorithm for heartbeats detection and classification in Holter records based on temporal and morphological features

    International Nuclear Information System (INIS)

    García, A; Romano, H; Laciar, E; Correa, R

    2011-01-01

    In this work, a detection and classification algorithm for heartbeat analysis in Holter records was developed. First, a QRS complex detector was implemented and the temporal and morphological characteristics of each beat were extracted. A vector was built with these features; this vector is the input of the classification module, based on discriminant analysis. The beats were classified into three groups: Premature Ventricular Contraction beat (PVC), Atrial Premature Contraction beat (APC) and Normal Beat (NB). These beat categories represent the most important groups for commercial Holter systems. The developed algorithms were evaluated on 76 ECG records from two validated open-access databases, the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database. A total of 166,343 beats were detected and analyzed, where the QRS detection algorithm provides a sensitivity of 99.69% and a positive predictive value of 99.84%. The classification stage gives sensitivities of 97.17% for NB, 97.67% for PVC and 92.78% for APC.
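
    QRS detectors of this kind generally follow the same chain: band-pass filter the ECG, differentiate and square it to emphasize the steep QRS slopes, then pick peaks above a threshold with a refractory period. Below is a compact generic sketch of that chain using SciPy; it is not the authors' detector, and the filter band, threshold and toy signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs=360.0):
    """Return sample indices of candidate QRS complexes from a single-lead ECG trace."""
    # 1) Band-pass around the QRS energy band (roughly 5-15 Hz).
    b, a = butter(2, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2) Differentiate and square to emphasize steep slopes.
    energy = np.square(np.diff(filtered))
    # 3) Smooth with a moving window, then threshold with a refractory period.
    window = int(0.15 * fs)
    envelope = np.convolve(energy, np.ones(window) / window, mode="same")
    threshold = 0.3 * np.max(envelope)
    peaks, _ = find_peaks(envelope, height=threshold, distance=int(0.25 * fs))
    return peaks

# Toy signal: a 1 Hz train of sharp bumps standing in for heartbeats, plus noise.
fs = 360.0
t = np.arange(0, 10, 1 / fs)
ecg = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.0005)
ecg += 0.05 * np.random.default_rng(0).normal(size=t.size)
print(len(detect_qrs(ecg, fs)))   # roughly 10 detected beats
```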

  8. Performance and development for the Inner Detector Trigger Algorithms at ATLAS

    CERN Document Server

    Penc, Ondrej; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for Run 2 starting in spring 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  9. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records.

    Science.gov (United States)

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-08-21

    To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008-31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgements of expert clinicians within the 1200 record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1

  10. Development of simulators algorithms of planar radioactive sources for use in computer models of exposure

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson; Leal Neto, Viriato; Lima Filho, Jose de Melo; Lima, Fernando Roberto de Andrade

    2013-01-01

    This paper presents an algorithm for a planar, isotropic radioactive source, obtained by subjecting the standard Gaussian probability density function (PDF) to a translation that displaces its maximum across its domain, changes its intensity, and makes the dispersion around the mean right-asymmetric. The algorithm was used to generate samples of photons that emerge from a plane and reach a semicircle surrounding a voxel phantom. The PDF describing this problem is already known, but the random number generating function (FRN) associated with it cannot be deduced by direct MC techniques. This is a significant problem because such a source can be adapted to simulations involving natural terrestrial radiation, or accidents in medical establishments or industries, where the radioactive material spreads in a plane. Some attempts to obtain an FRN for the PDF of the problem have already been made by the Research Group in Numerical Dosimetry (GND) from Recife-PE, Brazil, always using the rejection sampling MC technique. This article followed the methodology of previous work, except on one point: the PDF of the problem was replaced by a translated normal PDF. To perform dosimetric comparisons, two computational models of exposure (MCEs) were used: the MSTA (MASH standing, composed of the adult male voxel phantom MASH (male mesh) in orthostatic position, available from the Department of Nuclear Energy (DEN) of the Federal University of Pernambuco (UFPE), coupled to the MC code EGSnrc and the GND planar source based on the rejection technique) and the MSTA N T. The two models are similar in all respects except the FRN used in the planar source. The results presented and discussed in this paper establish the new algorithm for a planar source to be used by GND.
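
    The rejection-sampling technique the group relies on is generic: candidate values are drawn from an easy proposal distribution and kept with probability proportional to the ratio of the target PDF to a scaled proposal. A small sketch follows for sampling from a right-asymmetric target; the specific translated-Gaussian PDF of the paper is not reproduced, and the target used below is only an illustrative stand-in.

```python
import numpy as np

def rejection_sample(target_pdf, n, lo, hi, pdf_max, rng=None):
    """Draw n samples from target_pdf on [lo, hi] using a uniform proposal."""
    rng = rng or np.random.default_rng()
    samples = []
    while len(samples) < n:
        x = rng.uniform(lo, hi)                  # candidate from the uniform proposal
        if rng.uniform(0.0, pdf_max) < target_pdf(x):
            samples.append(x)                    # accept with probability pdf(x)/pdf_max
    return np.array(samples)

# Illustrative right-skewed target (unnormalized): a Gaussian whose width grows with x.
target = lambda x: np.exp(-0.5 * ((x - 1.0) / (0.5 + 0.3 * x)) ** 2)
draws = rejection_sample(target, 10000, 0.0, 6.0, pdf_max=1.0)
print(draws.mean(), draws.std())
```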

  11. Development and validation of a novel algorithm based on the ECG magnet response for rapid identification of any unknown pacemaker.

    Science.gov (United States)

    Squara, Fabien; Chik, William W; Benhayon, Daniel; Maeda, Shingo; Latcu, Decebal Gabriel; Lacaze-Gadonneix, Jonathan; Tibi, Thierry; Thomas, Olivier; Cooper, Joshua M; Duthoit, Guillaume

    2014-08-01

    Pacemaker (PM) interrogation requires correct manufacturer identification. However, an unidentified PM is a frequent occurrence, requiring time-consuming steps to identify the device. The purpose of this study was to develop and validate a novel algorithm for PM manufacturer identification, using the ECG response to magnet application. Data on the magnet responses of all recent PM models (≤15 years) from the 5 major manufacturers were collected. An algorithm based on the ECG response to magnet application to identify the PM manufacturer was subsequently developed. Patients undergoing ECG during magnet application in various clinical situations were prospectively recruited in 7 centers. The algorithm was applied in the analysis of every ECG by a cardiologist blinded to PM information. A second blinded cardiologist analyzed a sample of randomly selected ECGs in order to assess the reproducibility of the results. A total of 250 ECGs were analyzed during magnet application. The algorithm led to the correct single manufacturer choice in 242 ECGs (96.8%), whereas 7 (2.8%) could only be narrowed to either 1 of 2 manufacturer possibilities. Only 2 (0.4%) incorrect manufacturer identifications occurred. The algorithm identified Medtronic and Sorin Group PMs with 100% sensitivity and specificity, Biotronik PMs with 100% sensitivity and 99.5% specificity, and St. Jude and Boston Scientific PMs with 92% sensitivity and 100% specificity. The results were reproducible between the 2 blinded cardiologists with 92% concordant findings. Unknown PM manufacturers can be accurately identified by analyzing the ECG magnet response using this newly developed algorithm. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  12. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  13. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    Science.gov (United States)

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

    An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
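
    As a rough illustration of the RANSAC idea used above (robust model fitting with implicit outlier removal), the sketch below fits a linear model with scikit-learn's RANSACRegressor on synthetic data; it is not the authors' material-informatics pipeline, and the descriptors, threshold, and data are placeholder choices.

    # Minimal sketch of RANSAC used as a robust regression / outlier-removal step,
    # in the spirit of the QSAR workflow described above (not the authors' code).
    # Assumes scikit-learn is available; the data here are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import RANSACRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))            # stand-in descriptors (e.g., MO composition features)
    y = X @ np.array([1.5, -2.0, 0.0, 0.7, 0.0]) + rng.normal(scale=0.1, size=200)
    y[:20] += 8.0                            # inject gross outliers

    ransac = RANSACRegressor(min_samples=0.5,        # fraction of samples per random subset
                             residual_threshold=1.0, # inlier cutoff (placeholder value)
                             random_state=0)
    ransac.fit(X, y)

    inliers = ransac.inlier_mask_            # samples retained after outlier removal
    print("inlier fraction:", inliers.mean())
    print("example predictions:", ransac.predict(X[:3]))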

  14. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm

    NARCIS (Netherlands)

    Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.

    2008-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform

  15. Iterative algorithms for large sparse linear systems on parallel computers

    Science.gov (United States)

    Adams, L. M.

    1982-01-01

    Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
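
    The report concerns parallel stationary iterative methods; as a minimal serial illustration of one such method, the sketch below runs a Jacobi iteration in NumPy on a small diagonally dominant test system. The matrix, sizes, and tolerance are arbitrary choices; the component-wise update is the part that parallelizes naturally.

    # Minimal serial sketch of a Jacobi iteration for a linear system A x = b,
    # the kind of stationary iterative method developed in parallel form in the work above.
    import numpy as np

    def jacobi(A, b, tol=1e-8, max_iter=10_000):
        D = np.diag(A)                     # diagonal part of A
        R = A - np.diagflat(D)             # off-diagonal remainder
        x = np.zeros_like(b)
        for _ in range(max_iter):
            x_new = (b - R @ x) / D        # every component updated independently -> easy to parallelize
            if np.linalg.norm(x_new - x, np.inf) < tol:
                return x_new
            x = x_new
        return x

    # 1D finite-difference-type test matrix, made strictly diagonally dominant so Jacobi converges
    n = 50
    A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    print(np.allclose(A @ jacobi(A, b), b, atol=1e-6))   # True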

  16. A Modularity Degree Based Heuristic Community Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Dongming Chen

    2014-01-01

    Full Text Available A community in a complex network can be seen as a subgroup of nodes that are densely connected. Discovery of community structures is a basic problem of research and can be used in various areas, such as biology, computer science, and sociology. Existing community detection methods usually try to expand or collapse node partitions in order to optimize a given quality function. These optimization-function-based methods share the same drawback of inefficiency. Here we propose a heuristic algorithm (the MDBH algorithm) based on network structure, which employs modularity degree as a measure function. Experiments on both synthetic benchmarks and real-world networks show that our algorithm gives competitive accuracy with previous modularity optimization methods, even though it has less computational complexity. Furthermore, due to the use of modularity degree, our algorithm naturally improves the resolution limit in community detection.
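
    The MDBH algorithm itself is not reproduced here, but the quantity such heuristics optimize, Newman's modularity Q of a partition, is straightforward to compute; the sketch below evaluates Q for a toy graph of two triangles joined by one edge (all data are illustrative).

    # Sketch of computing Newman's modularity Q for a given partition of an
    # undirected graph; this is the kind of quality measure that heuristics such
    # as MDBH optimize (the MDBH algorithm itself is not reproduced here).
    import numpy as np

    def modularity(adj, communities):
        """adj: symmetric 0/1 numpy array; communities: list of community id per node."""
        m = adj.sum() / 2.0                       # number of edges
        k = adj.sum(axis=1)                       # node degrees
        q = 0.0
        for i in range(len(communities)):
            for j in range(len(communities)):
                if communities[i] == communities[j]:
                    q += adj[i, j] - k[i] * k[j] / (2.0 * m)
        return q / (2.0 * m)

    # Two triangles joined by a single edge: the natural split gives high modularity.
    adj = np.zeros((6, 6), dtype=int)
    for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        adj[a, b] = adj[b, a] = 1
    print(modularity(adj, [0, 0, 0, 1, 1, 1]))    # ~0.357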

  17. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  18. Spatial updating grand canonical Monte Carlo algorithms for fluid simulation: generalization to continuous potentials and parallel implementation.

    Science.gov (United States)

    O'Keeffe, C J; Ren, Ruichao; Orkoulas, G

    2007-11-21

    Spatial updating grand canonical Monte Carlo algorithms are generalizations of random and sequential updating algorithms for lattice systems to continuum fluid models. The elementary steps, insertions or removals, are constructed by generating points in space either at random (random updating) or in a prescribed order (sequential updating). These algorithms have previously been developed only for systems of impenetrable spheres for which no particle overlap occurs. In this work, spatial updating grand canonical algorithms are generalized to continuous, soft-core potentials to account for overlapping configurations. Results on two- and three-dimensional Lennard-Jones fluids indicate that spatial updating grand canonical algorithms, both random and sequential, converge faster than standard grand canonical algorithms. Spatial algorithms based on sequential updating not only exhibit the fastest convergence but also are ideal for parallel implementation due to the absence of strict detailed balance and the nature of the updating that minimizes interprocessor communication. Parallel simulation results for three-dimensional Lennard-Jones fluids show a substantial reduction of simulation time for systems of moderate and large size. The efficiency improvement by parallel processing through domain decomposition is always in addition to the efficiency improvement by sequential updating.

  19. A numeric comparison of variable selection algorithms for supervised learning

    International Nuclear Information System (INIS)

    Palombo, G.; Narsky, I.

    2009-01-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about data is therefore an important task in analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, such as the imaging gamma-ray Cherenkov telescope (MAGIC) data found in the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.

  20. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  1. Algebraic dynamics algorithm:Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
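
    As a rough sketch of the underlying idea, truncating the Taylor series of the exact solution at order N, the code below implements such a step for the harmonic oscillator, where the higher derivatives follow from a simple recursion. This is only an illustration, not the authors' general formulation, and the order and step size are arbitrary choices.

    # Sketch of an Nth-order truncated-Taylor-series step, the basic idea behind the
    # algebraic dynamics algorithm described above, applied to the harmonic
    # oscillator x'' = -x, where all higher derivatives follow from x and v.
    import math

    def taylor_step(x, v, h, order=6):
        # derivatives of x: d[0] = x, d[1] = v, d[k+2] = -d[k]
        d = [x, v]
        for k in range(order - 1):
            d.append(-d[k])
        new_x = sum(d[k] * h**k / math.factorial(k) for k in range(order + 1))
        new_v = sum(d[k + 1] * h**k / math.factorial(k) for k in range(order))
        return new_x, new_v

    x, v, h = 1.0, 0.0, 0.1
    for _ in range(1000):
        x, v = taylor_step(x, v, h)
    # After t = 100 the exact solution is cos(100); the energy should also be well preserved.
    print(x - math.cos(100.0), 0.5 * (x**2 + v**2) - 0.5)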

  2. Algorithm comparison for schedule optimization in MR fingerprinting.

    Science.gov (United States)

    Cohen, Ouri; Rosen, Matthew S

    2017-09-01

    In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed

    Science.gov (United States)

    Tian, Ye; Song, Qi; Cattafesta, Louis

    2005-01-01

    This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.

  4. Algorithmic correspondence and completeness in modal logic. V. Recursive extensions of SQEMA

    DEFF Research Database (Denmark)

    Conradie, Willem; Goranko, Valentin; Vakarelov, Dimiter

    2010-01-01

    The previously introduced algorithm SQEMA computes first-order frame equivalents for modal formulae and also proves their canonicity. Here we extend SQEMA with an additional rule based on a recursive version of Ackermann's lemma, which enables the algorithm to compute local frame equivalents...... on the class of ‘recursive formulae’. We also show that a certain version of this algorithm guarantees the canonicity of the formulae on which it succeeds....

  5. Description of ALARMA: the alarm algorithm developed for the Nuclear Car Wash

    International Nuclear Information System (INIS)

    Luu, T; Biltoft, P; Church, J; Descalle, M; Hall, J; Manatt, D; Mauger, J; Norman, E; Petersen, D; Pruet, J; Prussin, S; Slaughter, D

    2006-01-01

    The goal of any alarm algorithm should be to provide the necessary tools to derive confidence limits on whether fissile materials are present in cargo containers. It should be able to extract these limits from (usually) noisy and/or weak data while maintaining a false alarm rate (FAR) that is economically suitable for port operations. It should also be able to perform its analysis within a reasonably short amount of time (i.e. ∼ seconds). To achieve this, it is essential that the algorithm be able to identify and subtract any interference signature that might otherwise be confused with a fissile signature. Lastly, the algorithm itself should be user-intuitive and user-friendly so that port operators with little or no experience with detection algorithms may use it with relative ease. In support of the Nuclear Car Wash project at Lawrence Livermore Laboratory, we have developed an alarm algorithm that satisfies the above requirements. The description of this alarm algorithm, dubbed ALARMA, is the purpose of this technical report. The experimental setup of the nuclear car wash has been well documented [1, 2, 3]. The presence of fissile materials is inferred by examining the β-delayed gamma spectrum induced after a brief neutron irradiation of cargo, particularly in the high-energy region above approximately 2.5 MeV. In this region naturally occurring gamma rays are virtually non-existent. Thermal-neutron-induced fission of 235U and 239Pu, on the other hand, leaves a unique β-delayed spectrum [4]. This spectrum comes from decays of fission products having half-lives as large as 30 seconds, many of which have high Q-values. Since high-energy photons penetrate matter more freely, it is natural to look for unique fissile signatures in this energy region after neutron irradiation. The goal of this interrogation procedure is a 95% success rate of detection of as little as 5 kilograms of fissile material while retaining at most a 0.1% false alarm rate

  6. Application of the pessimistic pruning to increase the accuracy of C4.5 algorithm in diagnosing chronic kidney disease

    Science.gov (United States)

    Muslim, M. A.; Herowati, A. J.; Sugiharti, E.; Prasetiyo, B.

    2018-03-01

    Data mining is a technique for digging out valuable information buried or hidden in very large data collections in order to find interesting patterns that were previously unknown. Data mining has been applied in the healthcare industry. One technique used in data mining is classification. The decision tree is one of the classification methods in data mining, and C4.5 is an algorithm for building decision trees. A classifier is designed by applying pessimistic pruning to the C4.5 algorithm for diagnosing chronic kidney disease. Pessimistic pruning is used to identify and remove branches that are not needed, in order to avoid overfitting in the decision tree generated by the C4.5 algorithm. In this paper, the results obtained using these classifiers are presented and discussed. Using pessimistic pruning increases the accuracy of the C4.5 algorithm by 1.5%, from 95% to 96.5%, in the diagnosis of chronic kidney disease.

  7. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  8. Connectivity algorithm with depth first search (DFS) on simple graphs

    Science.gov (United States)

    Riansanti, O.; Ihsan, M.; Suhaimi, D.

    2018-01-01

    This paper discusses an algorithm to detect connectivity of a simple graph using Depth First Search (DFS). The DFS implementation in this paper differs from other research in how it counts the number of visited vertices. The algorithm initializes s to the number of vertices and visits the source vertex, followed by its adjacent vertices, until the last vertex adjacent to the previous source vertex has been visited. Any simple graph is connected if s equals 0 and disconnected if s is greater than 0. The complexity of the algorithm is O(n²).
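
    A minimal sketch of the counting rule described above, assuming an adjacency-list representation: s starts at the number of vertices, is decremented once per visited vertex, and the graph is declared connected exactly when s reaches 0.

    # Sketch of the connectivity test described above: run DFS from a source vertex,
    # decrement a counter s (initialised to the number of vertices) for each vertex
    # visited, and declare the graph connected iff s == 0 afterwards.
    def is_connected(adj):
        """adj: dict mapping each vertex to an iterable of adjacent vertices."""
        if not adj:
            return True
        s = len(adj)                     # unvisited-vertex counter
        visited = set()
        stack = [next(iter(adj))]        # arbitrary source vertex
        while stack:
            u = stack.pop()
            if u in visited:
                continue
            visited.add(u)
            s -= 1                       # one fewer unvisited vertex
            stack.extend(w for w in adj[u] if w not in visited)
        return s == 0

    print(is_connected({0: [1], 1: [0, 2], 2: [1]}))        # True
    print(is_connected({0: [1], 1: [0], 2: [3], 3: [2]}))   # False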

  9. Genetic algorithm optimization of atomic clusters

    International Nuclear Information System (INIS)

    Morris, J.R.; Deaven, D.M.; Ho, K.M.; Wang, C.Z.; Pan, B.C.; Wacker, J.G.; Turner, D.E.; Iowa State Univ., Ames, IA

    1996-01-01

    The authors have been using genetic algorithms to study the structures of atomic clusters and related problems. This is a problem where local minima are easy to locate, but barriers between the many minima are large, and the number of minima prohibits a systematic search. They use a novel mating algorithm that preserves some of the geometrical relationship between atoms, in order to ensure that the resultant structures are likely to inherit the best features of the parent clusters. Using this approach, they have been able to find lower energy structures than had been previously obtained. Most recently, they have been able to turn around the building block idea, using optimized structures from the GA to learn about systematic structural trends. They believe that an effective GA can help provide such heuristic information, and (conversely) that such information can be introduced back into the algorithm to assist in the search process.

  10. Algorithm Development for Multi-Energy SXR based Electron Temperature Profile Reconstruction

    Science.gov (United States)

    Clayton, D. J.; Tritz, K.; Finkenthal, M.; Kumar, D.; Stutman, D.

    2012-10-01

    New techniques utilizing computational tools such as neural networks and genetic algorithms are being developed to infer plasma electron temperature profiles on fast time scales (> 10 kHz) from multi-energy soft-x-ray (ME-SXR) diagnostics. Traditionally, a two-foil SXR technique, using the ratio of filtered continuum emission measured by two SXR detectors, has been employed on fusion devices as an indirect method of measuring electron temperature. However, these measurements can be susceptible to large errors due to uncertainties in time-evolving impurity density profiles, leading to unreliable temperature measurements. To correct this problem, measurements using ME-SXR diagnostics, which use three or more filtered SXR arrays to distinguish line and continuum emission from various impurities, in conjunction with constraints from spectroscopic diagnostics, can be used to account for unknown or time evolving impurity profiles [K. Tritz et al, Bull. Am. Phys. Soc. Vol. 56, No. 12 (2011), PP9.00067]. On NSTX, ME-SXR diagnostics can be used for fast (10-100 kHz) temperature profile measurements, using a Thomson scattering diagnostic (60 Hz) for periodic normalization. The use of more advanced algorithms, such as neural network processing, can decouple the reconstruction of the temperature profile from spectral modeling.

  11. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of the algorithms share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete definition of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.

  12. Simulating prescribed particle densities in the grand canonical ensemble using iterative algorithms.

    Science.gov (United States)

    Malasics, Attila; Gillespie, Dirk; Boda, Dezso

    2008-03-28

    We present two efficient iterative Monte Carlo algorithms in the grand canonical ensemble with which the chemical potentials corresponding to prescribed (targeted) partial densities can be determined. The first algorithm works by always using the targeted densities in the kT log(rho(i)) (ideal gas) terms and updating the excess chemical potentials from the previous iteration. The second algorithm extrapolates the chemical potentials in the next iteration from the results of the previous iteration using a first order series expansion of the densities. The coefficients of the series, the derivatives of the densities with respect to the chemical potentials, are obtained from the simulations by fluctuation formulas. The convergence of this procedure is shown for the examples of a homogeneous Lennard-Jones mixture and a NaCl-CaCl(2) electrolyte mixture in the primitive model. The methods are quite robust under the conditions investigated. The first algorithm is less sensitive to initial conditions.
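
    A schematic of the first iteration scheme described above: the ideal-gas term is always evaluated at the targeted density, while the excess chemical potential is carried over from the previous iteration. The grand canonical simulation is replaced here by a toy analytic density model, so all quantities are illustrative only.

    # Schematic of the first iterative scheme described above: the ideal-gas part of
    # the chemical potential is always evaluated at the *target* density, while the
    # excess part is taken from the previous iteration's result.
    # The GCMC simulation itself is replaced here by a toy analytic model.
    import numpy as np

    kT = 1.0
    rho_target = 0.5

    def fake_gcmc_density(mu):
        # stand-in for a grand canonical simulation: returns the density obtained at mu
        # (toy model with excess chemical potential mu_ex = rho)
        rho = 0.1
        for _ in range(200):                     # fixed-point solve of mu = kT ln(rho) + rho
            rho = np.exp((mu - rho) / kT)
        return rho

    mu = kT * np.log(rho_target)                 # ideal-gas initial guess
    for it in range(10):
        rho_sim = fake_gcmc_density(mu)
        mu_excess = mu - kT * np.log(rho_sim)    # excess part measured in the "simulation"
        mu = kT * np.log(rho_target) + mu_excess # ideal term at target density + carried-over excess
        print(it, rho_sim)                       # densities approach rho_target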

  13. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00372074; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade that has the goal to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, in the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first one, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...

  14. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade that has the goal to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, in the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first one, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...

  15. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    Science.gov (United States)

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has been recently proved as a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role to efficiently manage emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms of the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifacts removal is an essential step of any UWB radar imaging system and currently considered artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First of all, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective to achieve good localization accuracy and lower false positives. However, the main contribution is the proposal of an artifact removal algorithm based on statistical methods, which allows to achieve even better performance but with much lower computational complexity.

  16. Theoretical and numerical study of an optimum design algorithm

    International Nuclear Information System (INIS)

    Destuynder, Philippe.

    1976-08-01

    This work can be separated into two main parts. First, the behavior of the solution of an elliptic variational equation is analyzed when the domain is subjected to a small perturbation. The case of variational inequalities is also considered. Secondly, the previous results are used to derive an optimum design algorithm. This algorithm was suggested by the center-method proposed by Huard. Numerical results show the superiority of the method over other optimization techniques.

  17. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    Science.gov (United States)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
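
    A minimal sketch of the two-waveband ratio rule described above (fluorescence intensity at 666 nm divided by intensity at 688 nm, thresholded per pixel); the threshold value and image data below are synthetic placeholders, not the published calibration.

    # Minimal sketch of the two-waveband ratio rule described above: pixels whose
    # fluorescence ratio I(666 nm) / I(688 nm) exceeds a threshold are flagged as
    # potential fecal contamination. The threshold is a placeholder, not the
    # published value, and the image data are synthetic.
    import numpy as np

    def detect_contamination(band_666, band_688, threshold=1.2, eps=1e-6):
        """band_666, band_688: 2-D arrays of fluorescence intensity at the two wavebands."""
        ratio = band_666 / (band_688 + eps)      # avoid division by zero on dark pixels
        return ratio > threshold                 # boolean contamination mask

    rng = np.random.default_rng(1)
    band_688 = rng.uniform(0.8, 1.0, size=(64, 64))
    band_666 = band_688.copy()
    band_666[20:30, 20:30] *= 1.5                # simulated fecal spot with elevated 666/688 ratio

    mask = detect_contamination(band_666, band_688)
    print("flagged pixels:", int(mask.sum()))    # ~100, matching the simulated spot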

  18. Chinese handwriting recognition an algorithmic perspective

    CERN Document Server

    Su, Tonghua

    2013-01-01

    This book provides an algorithmic perspective on the recent development of Chinese handwriting recognition. Two technically sound strategies, the segmentation-free and integrated segmentation-recognition strategy, are investigated and algorithms that have worked well in practice are primarily focused on. Baseline systems are initially presented for these strategies and are subsequently expanded on and incrementally improved. The sophisticated algorithms covered include: 1) string sample expansion algorithms which synthesize string samples from isolated characters or distort realistic string samples; 2) enhanced feature representation algorithms, e.g. enhanced four-plane features and Delta features; 3) novel learning algorithms, such as Perceptron learning with dynamic margin, MPE training and distributed training; and lastly 4) ensemble algorithms, that is, combining the two strategies using both parallel structure and serial structure. All the while, the book moves from basic to advanced algorithms, helping ...

  19. A Comprehensive Training Data Set for the Development of Satellite-Based Volcanic Ash Detection Algorithms

    Science.gov (United States)

    Schmidl, Marius

    2017-04-01

    We present a comprehensive training data set covering a large range of atmospheric conditions, including disperse volcanic ash and desert dust layers. These data sets contain all information required for the development of volcanic ash detection algorithms based on artificial neural networks, urgently needed since volcanic ash in the airspace is a major concern of aviation safety authorities. Selected parts of the data are used to train the volcanic ash detection algorithm VADUGS. They contain atmospheric and surface-related quantities as well as the corresponding simulated satellite data for the channels in the infrared spectral range of the SEVIRI instrument on board MSG-2. To get realistic results, ECMWF, IASI-based, and GEOS-Chem data are used to calculate all parameters describing the environment, whereas the software package libRadtran is used to perform radiative transfer simulations returning the brightness temperatures for each atmospheric state. As optical properties are a prerequisite for radiative simulations accounting for aerosol layers, the development also included the computation of optical properties for a set of different aerosol types from different sources. A description of the developed software and the used methods is given, besides an overview of the resulting data sets.

  20. Feasibility study of the iterative x-ray phase retrieval algorithm

    International Nuclear Information System (INIS)

    Meng Fanbo; Liu Hong; Wu Xizeng

    2009-01-01

    An iterative phase retrieval algorithm was previously investigated for in-line x-ray phase imaging. Through detailed theoretical analysis and computer simulations, we now discuss the limitations, robustness, and efficiency of the algorithm. The iterative algorithm was proved robust against imaging noise but sensitive to the variations of several system parameters. It is also efficient in terms of calculation time. It was shown that the algorithm can be applied to phase retrieval based on one phase-contrast image and one attenuation image, or two phase-contrast images; in both cases, the two images can be obtained either by one detector in two exposures, or by two detectors in only one exposure as in the dual-detector scheme

  1. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.

  2. Development of an algorithm for assessing the risk to food safety posed by a new animal disease.

    Science.gov (United States)

    Parker, E M; Jenson, I; Jordan, D; Ward, M P

    2012-05-01

    An algorithm was developed as a tool to rapidly assess the potential for a new or emerging disease of livestock to adversely affect humans via consumption or handling of meat product, so that the risks and uncertainties can be understood and appropriate risk management and communication implemented. An algorithm describing the sequence of events from occurrence of the disease in livestock, release of the causative agent from an infected animal, contamination of fresh meat and then possible adverse effects in humans following meat handling and consumption was created. A list of questions complements the algorithm to help the assessors address the issues of concern at each step of the decision pathway. The algorithm was refined and validated through consultation with a panel of experts and a review group of animal health and food safety policy advisors via five case studies of potential emerging diseases of cattle. Tasks for model validation included describing the path taken in the algorithm and stating an outcome. Twenty-nine per cent of the 62 experts commented on the model, and one-third of those responding also completed the tasks required for model validation. The feedback from the panel of experts and the review group was used to further develop the tool and remove redundancies and ambiguities. There was agreement in the pathways and assessments for diseases in which the causative agent was well understood (for example, bovine pneumonia due to Mycoplasma bovis). The stated pathways and assessments of other diseases (for example, bovine Johne's disease) were not as consistent. The framework helps to promote objectivity by requiring questions to be answered sequentially and providing the opportunity to record consensus or differences of opinion. Areas for discussion and future investigation are highlighted by the points of diversion on the pathway taken by different assessors. © 2011 Blackwell Verlag GmbH.

  3. Fast algorithms for transport models. Final report

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1994-01-01

    This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M))

  4. FPSoC-Based Architecture for a Fast Motion Estimation Algorithm in H.264/AVC

    Directory of Open Access Journals (Sweden)

    Obianuju Ndili

    2009-01-01

    Full Text Available There is an increasing need for high quality video on low power, portable devices. Possible target applications range from entertainment and personal communications to security and health care. While H.264/AVC answers the need for high quality video at lower bit rates, it is significantly more complex than previous coding standards and thus results in greater power consumption in practical implementations. In particular, motion estimation (ME, in H.264/AVC consumes the largest power in an H.264/AVC encoder. It is therefore critical to speed-up integer ME in H.264/AVC via fast motion estimation (FME algorithms and hardware acceleration. In this paper, we present our hardware oriented modifications to a hybrid FME algorithm, our architecture based on the modified algorithm, and our implementation and prototype on a PowerPC-based Field Programmable System on Chip (FPSoC. Our results show that the modified hybrid FME algorithm on average, outperforms previous state-of-the-art FME algorithms, while its losses when compared with FSME, in terms of PSNR performance and computation time, are insignificant. We show that although our implementation platform is FPGA-based, our implementation results compare favourably with previous architectures implemented on ASICs. Finally we also show an improvement over some existing architectures implemented on FPGAs.

  5. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.

    Science.gov (United States)

    Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J

    2016-03-01

    The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms-one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV-using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. © The Author 2015. Published by Oxford University Press. All rights reserved. For
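
    A schematic of the "triangulation" rule described above: run a high-sensitivity algorithm and a high-specificity algorithm, accept concordant calls, and send only discordant cases to medical record review. The callables below are hypothetical stand-ins, not the published claims-data algorithms.

    # Schematic of the "triangulation" rule described above: a high-sensitivity
    # algorithm and a high-specificity algorithm are run on each case, and only the
    # discordant cases are sent to medical record (MR) review. Purely illustrative.
    def triangulate(case, high_sens_alg, high_spec_alg, mr_review):
        a = high_sens_alg(case)      # rarely misses true recurrences
        b = high_spec_alg(case)      # rarely raises false alarms
        if a == b:
            return a                 # both agree: accept the shared call
        return mr_review(case)       # disagreement: resolve by chart review

    # Toy usage with hypothetical callables:
    cases = [{"id": 1, "truth": True}, {"id": 2, "truth": False}]
    high_sens = lambda c: True                        # flags everything (perfect sensitivity)
    high_spec = lambda c: c["truth"]                  # flags only true events (perfect specificity)
    review = lambda c: c["truth"]                     # gold-standard chart review
    print([triangulate(c, high_sens, high_spec, review) for c in cases])  # [True, False]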

  6. Developing a NIR multispectral imaging for prediction and visualization of peanut protein content using variable selection algorithms

    Science.gov (United States)

    Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei

    2018-01-01

    The feasibility of developing a multispectral imaging method using important wavelengths from hyperspectral images selected by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods for modeling and predicting protein content in peanut kernel was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, a root mean square error of prediction (RMSEP) of 0.108, and a residual predictive deviation (RPD) of 2.32. Based on the obtained best model and image processing algorithms, the distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is promising and feasible for determination of the protein content in peanut kernels.
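
    As a rough sketch of the calibration step described above, the code below fits a PLSR model (scikit-learn's PLSRegression) on a few selected waveband columns of a synthetic spectral matrix; the spectra, selected indices, and number of latent variables are placeholder assumptions, not the peanut data or published wavelengths.

    # Sketch of fitting a PLSR calibration model on a handful of selected wavebands,
    # as in the workflow described above. Everything below is a synthetic placeholder.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_bands = 120, 200
    spectra = rng.normal(size=(n_samples, n_bands))
    protein = 23.5 + 2.0 * spectra[:, 10] - 1.5 * spectra[:, 55] + rng.normal(scale=0.2, size=n_samples)

    selected = [10, 55, 90, 140]                      # stand-ins for RC/SPA/GA-selected wavebands
    X = spectra[:, selected]
    X_tr, X_te, y_tr, y_te = train_test_split(X, protein, random_state=0)

    pls = PLSRegression(n_components=3)
    pls.fit(X_tr, y_tr)
    pred = pls.predict(X_te).ravel()
    rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
    print("RMSEP:", round(rmsep, 3))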

  7. Game-based programming towards developing algorithmic thinking skills in primary education

    Directory of Open Access Journals (Sweden)

    Hariklia Tsalapatas

    2012-06-01

    Full Text Available This paper presents cMinds, a learning intervention that deploys game-based visual programming towards building analytical, computational, and critical thinking skills in primary education. The proposed learning method exploits the structured nature of programming, which is inherently logical and transcends cultural barriers, towards inclusive learning that exposes learners to algorithmic thinking. A visual programming environment, entitled ‘cMinds Learning Suite’, has been developed aimed for classroom use. Feedback from the deployment of the learning methods and tools in classrooms in several European countries demonstrates elevated learner motivation for engaging in logical learning activities, fostering of creativity and an entrepreneurial spirit, and promotion of problem-solving capacity

  8. Development of test algorithm for semiconductor package with defects by using probabilistic neural network

    International Nuclear Information System (INIS)

    Kim, Jae Yeol; Sim, Jae Gi; Ko, Myoung Soo; Kim, Chang Hyun; Kim, Hun Cho

    2001-01-01

    In this study, an algorithm for estimating artificial defects in semiconductor packages was developed and applied using pattern recognition technology. For this purpose, the estimation algorithm was implemented in software written with MATLAB. The software consists of several procedures, including ultrasonic image acquisition, equalization filtering, the Self-Organizing Map, and the Probabilistic Neural Network (PNN). The Self-Organizing Map and the Probabilistic Neural Network both belong to the family of neural network methods. The pattern recognition technology was applied to classify three kinds of defect patterns in semiconductor packages. The study estimates the probability density function from a learning sample and presents a method that determines it automatically. The PNN can distinguish flaws that are very difficult to discriminate, and it can process data in parallel; we confirm that it is a very efficient classifier when a large amount of real data is applied to the process.

  9. A Coulomb collision algorithm for weighted particle simulations

    Science.gov (United States)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.

  10. A distributed scheduling algorithm for heterogeneous real-time systems

    Science.gov (United States)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other more complex load allocation policies. The effects of heterogeneity on scheduling algorithms for hard real time systems is examined. A distributed scheduler specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While a random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.

  11. A Cultural Algorithm for Optimal Design of Truss Structures

    Directory of Open Access Journals (Sweden)

    Shahin Jalili

    Full Text Available Abstract A cultural algorithm was utilized in this study to solve the optimal design of truss structures problem, achieving the minimum weight objective under stress and deflection constraints. The algorithm is inspired by principles of human social evolution. It simulates the social interaction between people and their beliefs in a belief space. The Cultural Algorithm (CA) utilizes the belief space and population space, which affect each other based on acceptance and influence functions. The belief space of CA consists of different knowledge components. In this paper, only situational and normative knowledge components are used within the belief space. The performance of the method is demonstrated through four benchmark design examples. Comparison of the obtained results with those of some previous studies demonstrates the efficiency of this algorithm.

  12. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.

  13. Advanced entry guidance algorithm with landing footprint computation

    Science.gov (United States)

    Leavitt, James Aaron

    The design and performance evaluation of an entry guidance algorithm for future space transportation vehicles is presented. The algorithm performs two functions: on-board trajectory planning and trajectory tracking. The planned longitudinal path is followed by tracking drag acceleration, as is done by the Space Shuttle entry guidance. Unlike the Shuttle entry guidance, lateral path curvature is also planned and followed. A new trajectory planning function for the guidance algorithm is developed that is suitable for suborbital entry and that significantly enhances the overall performance of the algorithm for both orbital and suborbital entry. In comparison with the previous trajectory planner, the new planner produces trajectories that are easier to track, especially near the upper and lower drag boundaries and for suborbital entry. The new planner accomplishes this by matching the vehicle's initial flight path angle and bank angle, and by enforcing the full three-degree-of-freedom equations of motion with control derivative limits. Insights gained from trajectory optimization results contribute to the design of the new planner, giving it near-optimal downrange and crossrange capabilities. Planned trajectories and guidance simulation results are presented that demonstrate the improved performance. Based on the new planner, a method is developed for approximating the landing footprint for entry vehicles in near real-time, as would be needed for an on-board flight management system. The boundary of the footprint is constructed from the endpoints of extreme downrange and crossrange trajectories generated by the new trajectory planner. The footprint algorithm inherently possesses many of the qualities of the new planner, including quick execution, the ability to accurately approximate the vehicle's glide capabilities, and applicability to a wide range of entry conditions. Footprints can be generated for orbital and suborbital entry conditions using a pre

  14. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
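
    A short example of approximate nearest-neighbour matching through the FLANN backend as exposed by OpenCV's FlannBasedMatcher; the descriptors are random vectors and the index/search parameters are ordinary defaults, not values taken from the paper.

    # Approximate nearest-neighbour matching via the FLANN backend exposed by OpenCV.
    # Descriptors here are random float32 vectors standing in for image features.
    import numpy as np
    import cv2

    rng = np.random.default_rng(0)
    train = rng.normal(size=(1000, 128)).astype(np.float32)   # e.g., SIFT-like descriptors
    query = train[:5] + rng.normal(scale=0.01, size=(5, 128)).astype(np.float32)

    index_params = dict(algorithm=1, trees=5)    # 1 = FLANN_INDEX_KDTREE (randomized k-d forest)
    search_params = dict(checks=50)              # number of leaves visited during search
    matcher = cv2.FlannBasedMatcher(index_params, search_params)

    matches = matcher.knnMatch(query, train, k=2)
    for i, (best, second) in enumerate(matches):
        print(i, best.trainIdx, round(best.distance, 4), round(second.distance, 4))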

  15. Development of accurate standardized algorithms for conversion between SRP grid coordinates and latitude/longitude

    International Nuclear Information System (INIS)

    Looney, B.B.; Marsh, J.T. Jr.; Hayes, D.W.

    1987-01-01

    The Savannah River Plant (SRP) is a nuclear production facility operated by E.I. du Pont de Nemours and Co. for the United States Department of Energy. SRP is located along the Savannah River in South Carolina. Construction of SRP began in the early 1950s. At the time the plant was built, a local coordinate system was developed to assist in defining the locations of plant facilities. Over the years, large quantities of data have been developed using "SRP Coordinates." These data include: building locations, plant boundaries, environmental sampling locations, waste disposal area locations, and a wide range of other geographical information. Currently, staff persons at SRP are organizing these data into automated information systems to allow more rapid, more robust and higher quality interpretation, interchange and presentation of spatial data. A key element in this process is the ability to incorporate outside data bases into the systems, as well as to share SRP data with interested organizations outside of SRP. Most geographical information outside of SRP is organized using latitude and longitude. Thus, straightforward, accurate and consistent algorithms to convert SRP Coordinates to/from latitude and longitude are needed. Appropriate algorithms are presented in this document

  16. DEVELOPMENT OF A PEDESTRIAN INDOOR NAVIGATION SYSTEM BASED ON MULTI-SENSOR FUSION AND FUZZY LOGIC ESTIMATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Y. C. Lai

    2015-05-01

    Full Text Available This paper presents a pedestrian indoor navigation system based on the multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation that means no other outside signal is demanded. In order to achieve the self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its adopted sensors are the low-cost inertial sensors, accelerometer and gyroscope, based on the micro electro-mechanical system (MEMS). There are two types of the IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to the results of manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and the least squares methods has been induced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real-time. Consequently, the estimated number of steps and strength per step are fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation which is run on a self-developed APP. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of indoor environment have been applied to the APP of the proposed navigation system

  17. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    Science.gov (United States)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on the multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation that means no other outside signal is demanded. In order to achieve the self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its adopted sensors are the low-cost inertial sensors, accelerometer and gyroscope, based on the micro electro-mechanical system (MEMS). There are two types of the IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to the results of manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and the least squares methods has been induced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real-time. Consequently, the estimated number of steps and strength per step are fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation which is run on a self-developed APP. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of indoor environment have been applied to the APP of the proposed navigation system to extend its
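
    A minimal sketch of the dead-reckoning position update such systems perform: each detected step advances the position by the estimated step length along the current heading. The step lengths and headings below are invented values, not output of the described IMU or fuzzy estimator.

    # Minimal sketch of the dead-reckoning update used by pedestrian navigation
    # systems of this kind: each detected step advances the position by the
    # estimated step length along the current heading (obtained by integrating
    # the gyroscope rate). All numbers below are illustrative.
    import numpy as np

    def dead_reckoning(start, step_lengths, headings_rad):
        pos = np.array(start, dtype=float)
        track = [pos.copy()]
        for L, theta in zip(step_lengths, headings_rad):
            pos += L * np.array([np.cos(theta), np.sin(theta)])
            track.append(pos.copy())
        return np.array(track)

    steps = [0.7, 0.7, 0.65, 0.7, 0.7]                       # metres, e.g. from the fuzzy estimator
    headings = np.deg2rad([0, 0, 0, 90, 90])                 # from the integrated gyroscope rate
    print(dead_reckoning((0.0, 0.0), steps, headings))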

  18. A DVH-guided IMRT optimization algorithm for automatic treatment planning and adaptive radiotherapy replanning

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Nan; Long, Troy; Romeijn, H. Edwin; Tian, Zhen; Jia, Xun; Jiang, Steve B.

    2014-01-01

    Purpose: To develop a novel algorithm that incorporates prior treatment knowledge into intensity modulated radiation therapy optimization to facilitate automatic treatment planning and adaptive radiotherapy (ART) replanning. Methods: The algorithm automatically creates a treatment plan guided by the DVH curves of a reference plan that contains information on the clinician-approved dose-volume trade-offs among different targets/organs and among different portions of a DVH curve for an organ. In ART, the reference plan is the initial plan for the same patient, while for automatic treatment planning the reference plan is selected from a library of clinically approved and delivered plans of previously treated patients with similar medical conditions and geometry. The proposed algorithm employs a voxel-based optimization model and navigates the large voxel-based Pareto surface. The voxel weights are iteratively adjusted to approach a plan that is similar to the reference plan in terms of the DVHs. If the reference plan is feasible but not Pareto optimal, the algorithm generates a Pareto optimal plan with DVHs better than the reference ones. If the reference plan is too restricting for the new geometry, the algorithm generates a Pareto plan with DVHs close to the reference ones. In both cases, the new plans have similar DVH trade-offs as the reference plans. Results: The algorithm was tested using three patient cases and found to be able to automatically adjust the voxel-weighting factors in order to generate a Pareto plan with DVH trade-offs similar to those of the reference plan. The algorithm has also been implemented on a GPU for high efficiency. Conclusions: A novel prior-knowledge-based optimization algorithm has been developed that automatically adjusts the voxel weights and generates a clinically optimal plan with high efficiency. It is found that the new algorithm can significantly improve the plan quality and planning efficiency in ART replanning and automatic treatment
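
    A highly simplified Python sketch of the voxel-weight adjustment loop described above: voxels whose dose falls on the wrong side of the reference DVH have their penalty weights increased before the next re-optimization. The weight update rule, the helper names, and the placeholder `optimize_fluence` routine are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def reference_dose_at_same_volume(dose, ref_dose):
    """For each voxel, the dose the reference plan delivers at the same cumulative
    volume fraction (the reference DVH evaluated at this voxel's rank)."""
    rank = np.argsort(np.argsort(dose))
    return np.sort(ref_dose)[rank]

def update_voxel_weights(weights, dose, ref_dose, step=0.1):
    """Increase weights of voxels that are worse than the reference DVH,
    decrease them where there is slack, then renormalise."""
    target = reference_dose_at_same_volume(dose, ref_dose)
    weights = weights * np.exp(step * np.sign(dose - target))
    return weights / weights.mean()

# hypothetical outer loop (optimize_fluence is a placeholder for the plan optimizer):
# for _ in range(max_iters):
#     dose = optimize_fluence(weights)
#     weights = update_voxel_weights(weights, dose, ref_dose)
```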

  19. New developments in iterated rounding

    NARCIS (Netherlands)

    Bansal, N.; Raman, V.; Suresh, S.P.

    2014-01-01

    Iterated rounding is a relatively recent technique in algorithm design that, despite its simplicity, has led to several remarkable new results and also simpler proofs of many previous results. We will briefly survey some applications of the method, including some recent developments, giving a high

  20. Blind Source Separation Based on Covariance Ratio and Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Lei Chen

    2014-01-01

    Full Text Available The computational cost of blind source separation based on bio-inspired intelligence optimization is high. In order to solve this problem, we propose an effective blind source separation algorithm based on the artificial bee colony algorithm. In the proposed algorithm, the covariance ratio of the signals is utilized as the objective function and the artificial bee colony algorithm is used to optimize it. The separated source signal component is then removed from the mixtures using the deflation method. All the source signals can be recovered successfully by repeating the separation process. Simulation experiments demonstrate that the proposed algorithm achieves a significant improvement in computational cost and in the quality of signal separation when compared to previous algorithms.
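
    As a rough illustration of the deflation step described above, the following Python sketch removes an estimated source component from the mixtures by subtracting its least-squares projection. The variable names and the use of NumPy are assumptions for illustration, not the authors' code.

```python
import numpy as np

def deflate(X, s):
    """Remove the contribution of an estimated source s from the mixtures X.

    X : array of shape (n_mixtures, n_samples), observed mixed signals
    s : array of shape (n_samples,), one recovered source component
    Returns the residual mixtures with the rank-one contribution of s removed.
    """
    s = s / (np.linalg.norm(s) + 1e-12)   # normalise the estimated source
    coeffs = X @ s                        # least-squares coefficient per mixture
    return X - np.outer(coeffs, s)        # subtract the projection onto s

# hypothetical usage: repeat the separation on the deflated mixtures
# X_residual = deflate(X, s_estimated)
```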

  1. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)

  2. Exact Algorithm for the Capacitated Team Orienteering Problem with Time Windows

    Directory of Open Access Journals (Sweden)

    Junhyuk Park

    2017-01-01

    Full Text Available The capacitated team orienteering problem with time windows (CTOPTW) is a problem of determining players' paths that have the maximum rewards while satisfying the constraints. In this paper, we present an exact solution approach for the CTOPTW, which has not been done in previous literature. We show that the branch-and-price (B&P) scheme which was originally developed for the team orienteering problem can be applied to the CTOPTW. To solve the pricing problems, we used implicit enumeration acceleration techniques, heuristic algorithms, and ng-route relaxations.

  3. STREAMFINDER I: A New Algorithm for detecting Stellar Streams

    Science.gov (United States)

    Malhan, Khyati; Ibata, Rodrigo A.

    2018-04-01

    We have designed a powerful new algorithm to detect stellar streams in an automated and systematic way. The algorithm, which we call the STREAMFINDER, is well suited for finding dynamically cold and thin stream structures that may lie along any simple or complex orbits in Galactic stellar surveys containing any combination of positional and kinematic information. In the present contribution we introduce the algorithm, lay out the ideas behind it, explain the methodology adopted to detect streams and detail its workings by running it on a suite of simulations of mock Galactic survey data of similar quality to that expected from the ESA/Gaia mission. We show that our algorithm is able to detect even ultra-faint stream features lying well below previous detection limits. Tests show that our algorithm will be able to detect distant halo stream structures >10° long containing as few as ˜15 members (Σ_G ≈ 33.6 mag arcsec^-2) in the Gaia dataset.

  4. Decoding algorithm for vortex communications receiver

    Science.gov (United States)

    Kupferman, Judy; Arnon, Shlomi

    2018-01-01

    Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.

  5. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.
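
    To make the idea concrete, here is a minimal Python sketch of a cache-oblivious computation in the spirit described above: a divide-and-conquer matrix transpose that recurses without ever being told the cache or block size. The function names and the base-case threshold are illustrative assumptions, not code from the surveyed papers.

```python
import numpy as np

def co_transpose(src, dst, r0, r1, c0, c1, base=16):
    """Cache-obliviously write the transpose of src[r0:r1, c0:c1] into dst.

    The recursion always splits the longer dimension, so sub-blocks shrink toward
    squares that fit in cache at every level of the hierarchy, even though no
    cache parameters appear in the code.
    """
    rows, cols = r1 - r0, c1 - c0
    if rows <= base and cols <= base:          # small block: transpose directly
        dst[c0:c1, r0:r1] = src[r0:r1, c0:c1].T
    elif rows >= cols:                         # split the row range
        mid = r0 + rows // 2
        co_transpose(src, dst, r0, mid, c0, c1, base)
        co_transpose(src, dst, mid, r1, c0, c1, base)
    else:                                      # split the column range
        mid = c0 + cols // 2
        co_transpose(src, dst, r0, r1, c0, mid, base)
        co_transpose(src, dst, r0, r1, mid, c1, base)

# hypothetical usage
A = np.arange(12.0).reshape(3, 4)
B = np.empty((4, 3))
co_transpose(A, B, 0, 3, 0, 4)
assert np.array_equal(B, A.T)
```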

  6. Cryptosystem Program Planning for Securing Data/Information of the Results of Research and Development using Triple DES Algorithm

    International Nuclear Information System (INIS)

    Tumpal P; Naga, Dali S.; David

    2004-01-01

    This software is a cryptosystem that uses the triple DES algorithm in ECB (Electronic Code Book) mode. The cryptosystem can send a file with any extension whether it is encrypted or not, encrypt data representing a bitmap picture or text, and display the calculations involved. Triple DES is an efficient and effective development of DES: it uses the same algorithm, but the threefold repetition of the operation extends the key from 56 bits to 168 bits. (author)
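
    For illustration, the sketch below encrypts a small buffer with triple DES in ECB mode. It assumes the PyCryptodome library, which is not mentioned in the record, and a randomly generated 24-byte (three 56-bit keys plus parity) key.

```python
# A minimal sketch of triple DES (EDE) in ECB mode, assuming PyCryptodome is installed.
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = DES3.adjust_key_parity(get_random_bytes(24))   # 3 x 56-bit keys plus parity bits
cipher = DES3.new(key, DES3.MODE_ECB)

plaintext = b"research and development data"
ciphertext = cipher.encrypt(pad(plaintext, DES3.block_size))   # pad to the 8-byte block size

decrypted = unpad(DES3.new(key, DES3.MODE_ECB).decrypt(ciphertext), DES3.block_size)
assert decrypted == plaintext
```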

  7. Mentoring to develop research self-efficacy, with particular reference to previously disadvantaged individuals

    Directory of Open Access Journals (Sweden)

    S. Schulze

    2010-07-01

    Full Text Available The development of inexperienced researchers is crucial. In response to the lack of research self-efficacy of many previously disadvantaged individuals, the article examines how mentoring can enhance the research self-efficacy of mentees. The study is grounded in self-efficacy theory (SET) – an aspect of social cognitive theory (SCT). Insights were gained from an in-depth study of SCT, SET and mentoring, and from a completed mentoring project. This led to the formulation of three basic principles. Firstly, institutions need to provide supportive environmental conditions that facilitate research self-efficacy. This implies a supportive and efficient collective system. The possible effects of performance ratings and reward systems at the institution also need to be considered. Secondly, mentoring needs to create opportunities for young researchers to experience successful learning as a result of appropriate action. To this end, mentees need to be involved in actual research projects in small groups. At the same time the mentor needs to facilitate skills development by coaching and encouragement. Thirdly, mentors need to encourage mentees to believe in their ability to successfully complete research projects. This implies encouraging positive emotional states, stimulating self-reflection and self-comparison with others in the group, giving positive evaluative feedback and being an intentional role model.

  8. Algorithms and data structures for automated change detection and classification of sidescan sonar imagery

    Science.gov (United States)

    Gendron, Marlin Lee

    During Mine Warfare (MIW) operations, MIW analysts perform change detection by visually comparing historical sidescan sonar imagery (SSI) collected by a sidescan sonar with recently collected SSI in an attempt to identify objects (which might be explosive mines) placed at sea since the last time the area was surveyed. This dissertation presents a data structure and three algorithms, developed by the author, that are part of an automated change detection and classification (ACDC) system. To reduce the amount of time needed to perform change detection, MIW analysts at the Naval Oceanographic Office are currently using ACDC. The dissertation introductory chapter gives background information on change detection, ACDC, and describes how SSI is produced from raw sonar data. Chapter 2 presents the author's Geospatial Bitmap (GB) data structure, which is capable of storing information geographically and is utilized by the three algorithms. This chapter shows that a GB data structure used in a polygon-smoothing algorithm ran between 1.3 and 48.4 times faster than a sparse matrix data structure. Chapter 3 describes the GB clustering algorithm, which is the author's repeatable, order-independent method for clustering. Results from tests performed in this chapter show that the time to cluster a set of points is not affected by the distribution or the order of the points. In Chapter 4, the author presents his real-time computer-aided detection (CAD) algorithm that automatically detects mine-like objects on the seafloor in SSI. The author ran his GB-based CAD algorithm on real SSI data, and results of these tests indicate that his real-time CAD algorithm performs comparably to or better than other non-real-time CAD algorithms. The author presents his computer-aided search (CAS) algorithm in Chapter 5. CAS helps MIW analysts locate mine-like features that are geospatially close to previously detected features. A comparison between the CAS and a great circle distance algorithm shows that the

  9. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

    Full Text Available Abstract Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU, including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a

  10. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

    Science.gov (United States)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing equivalent quality placement. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.

  11. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    CMS has developed sophisticated tau identification algorithms for tau hadronic decay modes. Production of tau lepton decaying to hadrons are studied at 7 TeV centre-of-mass energy with 2011 collision data collected by CMS detector and has been used to measure the performance of tau identification algorithms by ...

  12. Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams

    Science.gov (United States)

    Besold, Tarek R.; Kühnberger, Kai-Uwe; Plaza, Enric

    2017-10-01

    Concept blending - a cognitive process which allows for the combination of certain elements (and their relations) from originally distinct conceptual spaces into a new unified space combining these previously separate elements, and enables reasoning and inference over the combination - is taken as a key element of creative thought and combinatorial creativity. In this article, we summarise our work towards the development of a computational-level and algorithmic-level account of concept blending, combining approaches from computational analogy-making and case-based reasoning (CBR). We present the theoretical background, as well as an algorithmic proposal integrating higher-order anti-unification matching and generalisation from analogy with amalgams from CBR. The feasibility of the approach is then exemplified in two case studies.

  13. A New Adaptive Hungarian Mating Scheme in Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Chanju Jung

    2016-01-01

    Full Text Available In genetic algorithms, the selection or mating scheme is one of the important operations. In this paper, we suggest an adaptive mating scheme using previously suggested Hungarian mating schemes. Hungarian mating schemes consist of maximizing the sum of mating distances, minimizing the sum, and random matching. We propose an algorithm to elect one of these Hungarian mating schemes. Every mated pair of solutions has to vote for the next generation's mating scheme. The distance between parents and the distance between parent and offspring are considered when they vote. Well-known combinatorial optimization problems, the traveling salesperson problem and the graph bisection problem, are used as the test bed for our method. Our adaptive strategy showed better results than not only pure and previous hybrid schemes but also existing distance-based mating schemes.
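
    As a sketch of how a Hungarian mating scheme might pair parents, the following Python snippet uses SciPy's assignment solver to match two halves of a population so that the sum of pairwise mating distances is maximized. The population representation and the distance metric are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def hungarian_mating(population, maximize=True):
    """Pair the two halves of a population via the Hungarian method.

    population : array of shape (2k, n_genes)
    Returns a list of (parent_a_index, parent_b_index) pairs whose summed
    mating distance is maximized (or minimized when maximize=False).
    """
    half = len(population) // 2
    group_a, group_b = population[:half], population[half:2 * half]
    distances = cdist(group_a, group_b)               # pairwise mating distances
    rows, cols = linear_sum_assignment(distances, maximize=maximize)
    return [(r, half + c) for r, c in zip(rows, cols)]

# hypothetical usage with a random real-coded population
pop = np.random.rand(10, 5)
pairs = hungarian_mating(pop)
```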

  14. Object-Oriented Implementation of Adaptive Mesh Refinement Algorithms

    Directory of Open Access Journals (Sweden)

    William Y. Crutchfield

    1993-01-01

    Full Text Available We describe C++ classes that simplify development of adaptive mesh refinement (AMR algorithms. The classes divide into two groups, generic classes that are broadly useful in adaptive algorithms, and application-specific classes that are the basis for our AMR algorithm. We employ two languages, with C++ responsible for the high-level data structures, and Fortran responsible for low-level numerics. The C++ implementation is as fast as the original Fortran implementation. Use of inheritance has allowed us to extend the original AMR algorithm to other problems with greatly reduced development time.

  15. A fast algorithm for 3D azimuthally anisotropic velocity scan

    KAUST Repository

    Hu, Jingwei; Fomel, Sergey; Ying, Lexing

    2014-01-01

    © 2014 European Association of Geoscientists & Engineers. The conventional velocity scan can be computationally expensive for large-scale seismic data sets, particularly when the presence of anisotropy requires multiparameter scanning. We introduce a fast algorithm for 3D azimuthally anisotropic velocity scan by generalizing the previously proposed 2D butterfly algorithm for hyperbolic Radon transforms. To compute semblance in a two-parameter residual moveout domain, the numerical complexity of our algorithm is roughly O(N^3 log N) as opposed to O(N^5) of the straightforward velocity scan, with N being the representative of the number of points in a particular dimension of either data space or parameter space. Synthetic and field data examples demonstrate the superior efficiency of the proposed algorithm.

  16. An Improved Recovery Algorithm for Decayed AES Key Schedule Images

    Science.gov (United States)

    Tsow, Alex

    A practical algorithm that recovers AES key schedules from decayed memory images is presented. Halderman et al. [1] established this recovery capability, dubbed the cold-boot attack, as a serious vulnerability for several widespread software-based encryption packages. Our algorithm recovers AES-128 key schedules tens of millions of times faster than the original proof-of-concept release. In practice, it enables reliable recovery of key schedules at 70% decay, well over twice the decay capacity of previous methods. The algorithm is generalized to AES-256 and is empirically shown to recover 256-bit key schedules that have suffered 65% decay. When solutions are unique, the algorithm efficiently validates this property and outputs the solution for memory images decayed up to 60%.

  17. A fast algorithm for 3D azimuthally anisotropic velocity scan

    KAUST Repository

    Hu, Jingwei

    2014-11-11

    © 2014 European Association of Geoscientists & Engineers. The conventional velocity scan can be computationally expensive for large-scale seismic data sets, particularly when the presence of anisotropy requires multiparameter scanning. We introduce a fast algorithm for 3D azimuthally anisotropic velocity scan by generalizing the previously proposed 2D butterfly algorithm for hyperbolic Radon transforms. To compute semblance in a two-parameter residual moveout domain, the numerical complexity of our algorithm is roughly O(N^3 log N) as opposed to O(N^5) of the straightforward velocity scan, with N being the representative of the number of points in a particular dimension of either data space or parameter space. Synthetic and field data examples demonstrate the superior efficiency of the proposed algorithm.

  18. Algorithm of axial fuel optimization based in progressive steps of turned search

    International Nuclear Information System (INIS)

    Martin del Campo, C.; Francois, J.L.

    2003-01-01

    The development of an algorithm for the axial fuel optimization of boiling water reactors (BWR) is presented. The algorithm is based on a serial optimization process in which the best solution of each stage is the starting point of the following stage. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as constraints. As the optimization stages advance, the fineness of the evaluation of the investigated designs is increased. The algorithm consists of three stages: the first uses genetic algorithms and the two following stages use Tabu Search. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the specified energy generation for the operation cycle, without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), taking as constraints the average enrichment obtained for the best design of the first stage together with the other restrictions. The third stage, very similar to the previous one, begins with the design of the previous stage but carries out a search of the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of the Unit 1 reactor of the Laguna Verde power plant (U1-CLV) is presented. The results obtained show an advance in the handling of optimization methods and in the construction of the objective functions to be used for the different design stages of the fuel assemblies. (Author)

  19. A New Missing Values Estimation Algorithm in Wireless Sensor Networks Based on Convolution

    Directory of Open Access Journals (Sweden)

    Feng Liu

    2013-04-01

    Full Text Available Nowadays, with the rapid development of Internet of Things (IoT) applications, the phenomenon of missing data has become very common in wireless sensor networks. This problem can greatly and directly threaten the stability and usability of Internet of Things applications that are built on wireless sensor networks. How to estimate the missing values has attracted wide interest, and some solutions have been proposed. Different from previous works, in this paper we propose a new convolution-based missing value estimation algorithm. Convolution theory, which is usually used in the area of signal and image processing, can also be a practical and efficient way to estimate missing sensor data. The results show that the proposed algorithm is practical and effective, and can estimate the missing values accurately.
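
    As a rough Python sketch of the convolution idea described above (not the authors' exact formulation), the snippet below estimates a missing reading by convolving neighbouring samples with a normalised smoothing kernel; the kernel choice and window size are assumptions.

```python
import numpy as np

def estimate_missing(series, index, kernel=(0.1, 0.2, 0.4, 0.2, 0.1)):
    """Estimate series[index] (assumed missing, NaN) from its neighbours by a
    normalised convolution: weights of missing neighbours are dropped and the
    remaining weights are re-normalised."""
    kernel = np.asarray(kernel, dtype=float)
    half = len(kernel) // 2
    window = np.array([series[index + o] if 0 <= index + o < len(series) else np.nan
                       for o in range(-half, half + 1)])
    valid = ~np.isnan(window)
    valid[half] = False                      # never use the missing point itself
    if not valid.any():
        return np.nan
    weights = kernel * valid
    return float(np.dot(weights[valid], window[valid]) / weights[valid].sum())

# hypothetical usage on a short sensor trace with one missing sample
trace = np.array([20.1, 20.3, np.nan, 20.6, 20.8])
trace[2] = estimate_missing(trace, 2)
```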

  20. Development of a hybrid energy storage sizing algorithm associated with the evaluation of power management in different driving cycles

    International Nuclear Information System (INIS)

    Masoud, Masih Tehrani; Mohammad Reza, Ha'iri Yazdi; Esfahanian, Vahid; Sagha, Hossein

    2012-01-01

    In this paper, a hybrid energy storage sizing algorithm for electric vehicles is developed to achieve a semi-optimal, cost-effective design. Using the developed algorithm, a driving cycle is divided into its micro-trips and the power and energy demands in each micro-trip are determined. The battery size is estimated so that the battery fulfills the power demands, and the ultracapacitor (UC) energy (or the number of UC modules) is assessed so that the UC delivers the maximum energy demand among the different micro-trips of a driving cycle. Finally, a design factor, which shows the power of the hybrid energy storage control strategy, is utilized to evaluate the newly designed control strategies. Using the developed algorithm, energy saving loss, driver satisfaction criteria, and battery life criteria are calculated using a feed-forward dynamic modeling software program and are utilized for comparison among different energy storage candidates. This procedure is applied to the hybrid energy storage sizing of a series hybrid electric city bus for the Manhattan and Tehran driving cycles. Results show that a more aggressive driving cycle (Manhattan) requires a more expensive energy storage system and a more sophisticated energy management strategy.
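
    The following Python sketch illustrates the micro-trip idea described above: a speed trace is split at standstill points, and a crude power and energy demand is computed for each micro-trip. The vehicle parameters and the simple longitudinal model are illustrative assumptions, not the authors' model.

```python
import numpy as np

MASS, DT = 15000.0, 1.0          # assumed bus mass [kg] and sampling period [s]

def split_micro_trips(speed):
    """Split a speed trace [m/s] into micro-trips separated by standstill samples."""
    trips, current = [], []
    for v in speed:
        if v > 0.1:
            current.append(v)
        elif current:
            trips.append(np.array(current))
            current = []
    if current:
        trips.append(np.array(current))
    return trips

def demands(trip):
    """Very crude traction power [W] and energy [J] demand of one micro-trip,
    using only kinetic-energy changes (no rolling or aerodynamic losses)."""
    accel = np.diff(trip, prepend=trip[0]) / DT
    power = MASS * accel * trip              # P = m * a * v
    energy = np.sum(np.clip(power, 0, None)) * DT
    return power.max(), energy

# hypothetical usage: peak power informs the battery size, the largest per-trip
# energy demand informs the UC bank size (following the text above)
speed_trace = np.array([0, 2, 5, 8, 6, 3, 0, 0, 4, 7, 9, 5, 0], dtype=float)
per_trip = [demands(t) for t in split_micro_trips(speed_trace)]
peak_power = max(p for p, _ in per_trip)
max_energy = max(e for _, e in per_trip)
```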

  1. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    Science.gov (United States)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is successfully developed to implement the KWA in order to compensate for the insufficient hardware resources of a single FPGA, and to increase the parallel processing ability and scalability of the system.

  2. ADORE-GA: Genetic algorithm variant of the ADORE algorithm for ROP detector layout optimization in CANDU reactors

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an algorithm for CANDU ROP Detector Layout Optimization. ► ADORE-GA is a Genetic Algorithm variant of the ADORE algorithm. ► Robustness test of ADORE-GA algorithm is presented in this paper. - Abstract: The regional overpower protection (ROP) systems protect CANDU® reactors against overpower in the fuel that could reduce the safety margin-to-dryout. The overpower could originate from a localized power peaking within the core or a general increase in the global core power level. The design of the detector layout for ROP systems is a challenging discrete optimization problem. In recent years, two algorithms have been developed to find a quasi optimal solution to this detector layout optimization problem. Both of these algorithms utilize the simulated annealing (SA) algorithm as their optimization engine. In the present paper, an alternative optimization algorithm, namely the genetic algorithm (GA), has been implemented as the optimization engine. The implementation is done within the ADORE algorithm. Results from evaluating the effects of using various mutation rates and crossover parameters are presented in this paper. It has been demonstrated that the algorithm is sufficiently robust in producing similar quality solutions.

  3. Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment

    Science.gov (United States)

    Mitchel, J.A.; Martin, I.S.

    2013-01-01

    A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629

  4. System engineering approach to GPM retrieval algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rose, C. R. (Chris R.); Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both No and Do at each range bin. More recently, Liao (2004) proposed a solution to the Do ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the No and Do

  5. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    © 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  6. A comparison between physicians and computer algorithms for form CMS-2728 data reporting.

    Science.gov (United States)

    Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon

    2017-01-01

    CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies questioned the validity of physician reporting on forms CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion, and, therefore, is more reflective of underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical records systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing was used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to the presence or absence according to the algorithms. Computer algorithms had higher reporting of comorbidities compared to forms completion by physicians. This remained true when decreasing data span to one year and using only a single health center source. The algorithms determination was well accepted by a physician panel. Importantly, algorithms use significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adapting similar algorithms in available EMR systems may offer more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.

  7. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    Science.gov (United States)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes such that the sum of the weights of all its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the most commonly known rudimentary algorithm, Prim's algorithm, to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential, otherwise it will be very hard to get an optimal result. In the case of a road transportation network, it is essential to find the optimal result by considering all the necessary points based on a cost factor (time or distance). This paper is based on solving the minimum spanning tree (MST) problem of a road network by finding its minimum span considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, location-allocation problems, etc. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. It has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and allows access to information that is varied and adapted to the needs of the users. This GIS tool for MST can be applied to a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several
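
    A minimal Python sketch of Prim's algorithm on an adjacency (weight) matrix, of the kind the tool described above automates inside a GIS; the toy network and the use of a binary heap are illustrative assumptions.

```python
import heapq
import numpy as np

def prim_mst(weights):
    """Prim's algorithm on a symmetric weight matrix.

    weights[i][j] is the edge weight between nodes i and j, or np.inf if no
    edge exists. Returns the MST as a list of (u, v, weight) edges.
    """
    n = len(weights)
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(weights[0][j], 0, j) for j in range(1, n) if np.isfinite(weights[0][j])]
    heapq.heapify(heap)
    mst = []
    while heap and len(mst) < n - 1:
        w, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue                         # already connected via a cheaper edge
        in_tree[v] = True
        mst.append((u, v, w))
        for j in range(n):                   # relax edges leaving the new tree node
            if not in_tree[j] and np.isfinite(weights[v][j]):
                heapq.heappush(heap, (weights[v][j], v, j))
    return mst

# hypothetical 4-node road network with distances as weights
INF = np.inf
W = np.array([[0, 4, 1, INF],
              [4, 0, 2, 5],
              [1, 2, 0, 8],
              [INF, 5, 8, 0]])
print(prim_mst(W))   # [(0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0)]
```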

  8. Advances in metaheuristic algorithms for optimal design of structures

    CERN Document Server

    Kaveh, A

    2017-01-01

    This book presents efficient metaheuristic algorithms for optimal design of structures. Many of these algorithms are developed by the author and his colleagues, consisting of Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, Ray Optimization. These are presented together with algorithms which were developed by other authors and have been successfully applied to various optimization problems. These consist of Particle Swarm Optimization, Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally a multi-objective optimization method is presented to solve large-scale structural problems based on the Charged System Search algorithm. The concepts and algorithms presented in this book are not only applicable to optimization of skeletal structures and finite element models, but can equally ...

  9. Advances in metaheuristic algorithms for optimal design of structures

    CERN Document Server

    Kaveh, A

    2014-01-01

    This book presents efficient metaheuristic algorithms for optimal design of structures. Many of these algorithms are developed by the author and his colleagues, consisting of Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, Ray Optimization. These are presented together with algorithms which were developed by other authors and have been successfully applied to various optimization problems. These consist of Particle Swarm Optimization, Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally a multi-objective optimization method is presented to solve large-scale structural problems based on the Charged System Search algorithm. The concepts and algorithms presented in this book are not only applicable to optimization of skeletal structures and finite element models, but can equally ...

  10. Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement (ADVANCE) Technology Development for Resilient Flight Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI proposes to develop and test a framework referred to as the ADVANCE (Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement), within which...

  11. A report on the study of algorithms to enhance Vector computer performance for the discretized one-dimensional time-dependent heat conduction equation: EPIC research, Phase 1

    International Nuclear Information System (INIS)

    Majumdar, A.; Makowitz, H.

    1987-10-01

    With the development of modern vector/parallel supercomputers and their lower-performance clones, it has become possible to increase computational performance by several orders of magnitude compared to the previous generation of scalar computers. These performance gains are not observed when production versions of current thermal-hydraulic codes are implemented on modern supercomputers. It is our belief that this is due in part to the inappropriateness of using old thermal-hydraulic algorithms with these new computer architectures. We believe that a new generation of algorithms needs to be developed for thermal-hydraulics simulation that is optimized for vector/parallel architectures, and not the scalar computers of the previous generation. We have begun a study that will investigate several approaches for designing such optimal algorithms. These approaches are based on the following concepts: minimize recursion; utilize predictor-corrector iterative methods; maximize the convergence rate of the iterative methods used; use physical approximations as well as numerical means to accelerate convergence; utilize explicit methods (i.e., marching) where stability will permit. We call this approach the "EPIC" methodology (i.e., Explicit Predictor Iterative Corrector methods). Utilizing the above ideas, we have begun our work by investigating the one-dimensional transient heat conduction equation. We have developed several algorithms based on variations of the Hopscotch concept, which we discuss in the body of this report. 14 refs
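
    As a concrete reference point for the discussion above, the following Python sketch advances the discretized one-dimensional heat conduction equation with a fully explicit (marching) update, the kind of non-recursive scheme that vectorizes well. The grid, material properties, and stability factor are illustrative assumptions, not values from the report.

```python
import numpy as np

def explicit_heat_step(T, alpha, dx, dt):
    """One explicit (FTCS) time step of the 1D heat equation dT/dt = alpha * d2T/dx2.

    The interior update has no recursion along the spatial index, so every point
    can be computed independently -- well suited to vector hardware.
    Stability requires alpha * dt / dx**2 <= 0.5.
    """
    T_new = T.copy()
    r = alpha * dt / dx**2
    T_new[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T_new          # boundary values are held fixed (Dirichlet)

# hypothetical usage: a rod initially at 0 with its left end held at 100
T = np.zeros(101); T[0] = 100.0
alpha, dx = 1e-4, 0.01
dt = 0.4 * dx**2 / alpha          # satisfies the stability limit
for _ in range(1000):
    T = explicit_heat_step(T, alpha, dx, dt)
```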

  12. Development of Quantum Devices and Algorithms for Radiation Detection and Radiation Signal Processing

    International Nuclear Information System (INIS)

    El Tokhy, M.E.S.M.E.S.

    2012-01-01

    The main functions of a spectroscopy system are signal detection, filtering and amplification, pileup detection and recovery, dead time correction, amplitude analysis and energy spectrum analysis. Safeguards isotopic measurements require the best spectrometer systems with excellent resolution, stability, efficiency and throughput. However, the resolution and throughput, which depend mainly on the detector, amplifier and the analog-to-digital converter (ADC), can still be improved. These modules have been in continuous development and improvement. For this reason we are interested in both the development of quantum detectors and efficient algorithms for digital processing of the measurements. Therefore, the main objective of this thesis is concentrated on both 1. Studying quantum dot (QD) device behavior under gamma radiation 2. Developing efficient algorithms for handling problems of gamma-ray spectroscopy For gamma radiation detection, a detailed study of nanotechnology QD sources and infrared photodetectors (QDIP) for gamma radiation detection is introduced. There are two different types of quantum scintillator detectors, which dominate the area of ionizing radiation measurements. These detectors are QD scintillator detectors and QDIP scintillator detectors. By comparison with traditional systems, quantum systems have less mass, require less volume, and consume less power. These factors are increasing the need for efficient detectors for gamma-ray applications such as gamma-ray spectroscopy. Consequently, the potential of nanocomposite materials based on semiconductor quantum dots for radiation detection via scintillation has been demonstrated in the literature. Therefore, this thesis presents a theoretical analysis of the characteristics of QD sources and infrared photodetectors (QDIPs). A model of QD sources under incident gamma radiation is developed. A novel methodology is introduced to characterize the effect of gamma radiation on QD devices. The rate

  13. UTV Expansion Pack: Special-Purpose Rank-Revealing Algorithms

    DEFF Research Database (Denmark)

    Fierro, Ricardo D.; Hansen, Per Christian

    2005-01-01

    This collection of Matlab 7.0 software supplements and complements the package UTV Tools from 1999, and includes implementations of special-purpose rank-revealing algorithms developed since the publication of the original package. We provide algorithms for computing and modifying symmetric rank-revealing VSV decompositions, we expand the algorithms for the ULLV decomposition of a matrix pair to handle interference-type problems with a rank-deficient covariance matrix, and we provide a robust and reliable Lanczos algorithm which - despite its simplicity - is able to capture all the dominant singular values of a sparse or structured matrix. These new algorithms have applications in signal processing, optimization and LSI information retrieval.

  14. Knowing 'something is not right' is beyond intuition: development of a clinical algorithm to enhance surveillance and assist nurses to organise and communicate clinical findings.

    Science.gov (United States)

    Brier, Jessica; Carolyn, Moalem; Haverly, Marsha; Januario, Mary Ellen; Padula, Cynthia; Tal, Ahuva; Triosh, Henia

    2015-03-01

    To develop a clinical algorithm to guide nurses' critical thinking through systematic surveillance, assessment, actions required and communication strategies. To achieve this, an international, multiphase project was initiated. Patients receive hospital care postoperatively because they require the skilled surveillance of nurses. Effective assessment of postoperative patients is essential for early detection of clinical deterioration and optimal care management. Despite the significant amount of time devoted to surveillance activities, there is lack of evidence that nurses use a consistent, systematic approach in surveillance, management and communication, potentially leading to less optimal outcomes. Several explanations for the lack of consistency have been suggested in the literature. Mixed methods approach. Retrospective chart review; semi-structured interviews conducted with expert nurses (n = 10); algorithm development. Themes developed from the semi-structured interviews, including (1) complete, systematic assessment, (2) something is not right, (3) validating with others, (4) influencing factors and (5) frustration with lack of response when communicating findings, were used as the basis for development of the Surveillance Algorithm for Post-Surgical Patients. The algorithm proved beneficial based on limited use in clinical settings. Further work is needed to fully test it in education and practice. The Surveillance Algorithm for Post-Surgical Patients represents the approach of expert nurses, and serves to guide less expert nurses' observations, critical thinking, actions and communication. Based on this approach, the algorithm assists nurses to develop skills promoting early detection, intervention and communication in cases of patient deterioration. © 2014 John Wiley & Sons Ltd.

  15. Actuator Placement Via Genetic Algorithm for Aircraft Morphing

    Science.gov (United States)

    Crossley, William A.; Cook, Andrea M.

    2001-01-01

    This research continued work that began under the support of NASA Grant NAG1-2119. The focus of this effort was to continue investigations of Genetic Algorithm (GA) approaches that could be used to solve an actuator placement problem by treating it as a discrete optimization problem. In these efforts, the actuators are assumed to be "smart" devices that change the aerodynamic shape of an aircraft wing to alter the flow past the wing and, as a result, provide aerodynamic moments that could provide flight control. The earlier work investigated issues with the problem statement, developed the appropriate actuator modeling, recognized the importance of symmetry for this problem, modified the aerodynamic analysis routine for more efficient use with the genetic algorithm, and began a problem size study to measure the impact of increasing problem complexity. The research discussed in this final summary further investigated the problem statement to provide a "combined moment" problem statement that simultaneously addresses roll, pitch and yaw. Investigations of problem size using this new problem statement provided insight into the performance of the GA as the number of possible actuator locations increased. Where previous investigations utilized a simple wing model to develop the GA approach for actuator placement, this research culminated with application of the GA approach to a high-altitude unmanned aerial vehicle concept to demonstrate that the approach is valid for an aircraft configuration.

  16. UV Reconstruction Algorithm And Diurnal Cycle Variability

    Science.gov (United States)

    Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara

    2009-03-01

    UV reconstruction is a method of estimation of surface UV with the use of available actinometrical and aerological measurements. UV reconstruction is necessary for the study of long-term UV change. A typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is the good parameterization of clouds. In our previous algorithm we used an empirical relation between Cloud Modification Factor (CMF) in global radiation and CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. For elaborating an improved reconstruction algorithm relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l], Poland were collected with the following instruments: NILU-UV multi channel radiometer, Kipp&Zonen pyranometer, radiosonde profiles of ozone, humidity and temperature. The proposed algorithm has been used for reconstruction of UV at four Polish sites: Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.

  17. Simple sorting algorithm test based on CUDA

    OpenAIRE

    Meng, Hongyu; Guo, Fangjin

    2015-01-01

    With the development of computing technology, CUDA has become a very important tool. In computer programming, sorting algorithms are widely used. There are many simple sorting algorithms such as enumeration sort, bubble sort and merge sort. In this paper, we test some simple sorting algorithms based on CUDA and draw some useful conclusions.

  18. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence using an algorithm based on minimizing the distance between two descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
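
    As a rough Python sketch of descriptor-distance matching between two frames (not the authors' tracker), the snippet below pairs SIFT descriptors by nearest L2 distance; it assumes OpenCV with SIFT available, and the file names are placeholders.

```python
# A minimal sketch of SIFT descriptor matching between two consecutive frames,
# assuming OpenCV (cv2) with SIFT support; frame file names are placeholders.
import cv2

img1 = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, desc1 = sift.detectAndCompute(img1, None)   # keypoints + 128-D descriptors
kp2, desc2 = sift.detectAndCompute(img2, None)

# Track by pairing each descriptor with its nearest neighbour (L2 distance),
# keeping only mutual best matches via crossCheck.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)

tracks = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:50]]
```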

  19. Consensus algorithm in smart grid and communication networks

    Science.gov (United States)

    Alfagee, Husain Abdulaziz

    On a daily basis, consensus theory attracts more and more research from different areas of interest, applying its techniques to solve technical problems in a way that is faster, more reliable, and even more precise than ever before. A power system network is one of the fields where consensus theory is employed extensively. The use of the consensus algorithm to solve the Economic Dispatch and Load Restoration Problems is a good example. Instead of a conventional central controller, some researchers have explored an algorithm to solve the above-mentioned problems in a distributed manner using the consensus algorithm, which is based on calculation methods, i.e., non-estimation methods, for updating the information consensus matrix. Starting from this point of solving these types of problems in a distributed fashion using the consensus algorithm, we have implemented a new advanced consensus algorithm. It is based on adaptive estimation techniques, such as the Gradient Algorithm and the Recursive Least Square Algorithm, to solve the same problems. This advanced work was tested on different case studies that had formerly been explored, as seen in references 5, 7, and 18. Three and five generators, or agents, with different topologies correspond to the Economic Dispatch Problem, and the IEEE 16-Bus power system corresponds to the Load Restoration Problem. In all the cases we have studied, the results met our expectations with extreme accuracy, and completely matched the results of the previous researchers. There is little question that this research proves the capability and dependability of using the consensus algorithm, based on estimation methods such as the Gradient Algorithm and the Recursive Least Square Algorithm, to solve such power problems.
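
    For orientation, the sketch below runs a plain average-consensus iteration over a small agent network, the basic building block on which the distributed dispatch and restoration schemes discussed above rely. The topology, the Metropolis-style weights, and the initial values are illustrative assumptions.

```python
import numpy as np

# Doubly stochastic (Metropolis) weight matrix for 3 agents on a line topology 1-2-3;
# each agent repeatedly mixes its state with its neighbours' states.
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

x = np.array([100.0, 40.0, 10.0])   # e.g. initial local estimates of 3 generators

for _ in range(200):
    x = W @ x                        # one consensus update per iteration

print(x)   # all entries converge to the average of the initial values (50.0)
```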

  20. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines CCF and CBUI algorithms is proposed, and implemented on Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.

  1. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Penc, Ondrej

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. (paper)

  2. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  3. A Hybrid Algorithm for Optimizing Multi- Modal Functions

    Institute of Scientific and Technical Information of China (English)

    Li Qinghua; Yang Shida; Ruan Youlin

    2006-01-01

    A new genetic algorithm based on musical performance is presented. The novelty of this algorithm is that it mimics the musical process of searching for a perfect state of harmony, which greatly increases its robustness and at the same time gives it a new interpretation. Combining the advantages of the new genetic algorithm, the simplex algorithm and tabu search, a hybrid algorithm is proposed. In order to verify the effectiveness of the hybrid algorithm, it is applied to several typical numerical function optimization problems that are poorly solved by traditional genetic algorithms. The experimental results show that the hybrid algorithm is fast and reliable.

  4. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    International Nuclear Information System (INIS)

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan Xiaochuan

    2010-01-01

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: The authors developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  5. Development of CD3 cell quantitation algorithms for renal allograft biopsy rejection assessment utilizing open source image analysis software.

    Science.gov (United States)

    Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad

    2018-02-01

    Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (Aperio nuclear algorithm for CD3+ cells/mm² and Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm²). Based on visual inspections of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and also with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 or less). The custom algorithms thus present salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.

  6. An investigation of genetic algorithms

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1995-04-01

    Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
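
    A toy real-coded genetic algorithm of the kind surveyed in such reports can be sketched as follows; the test function, population size, and operator settings are illustrative assumptions.

```python
# Toy genetic algorithm: selection, crossover and mutation on a real-coded
# population. Illustrative only; parameters and the test function are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    return -np.sum(x**2, axis=1)        # negated sphere function: maximizing this minimizes sum(x^2)

pop = rng.uniform(-5, 5, size=(40, 3))  # 40 individuals, 3 genes each
for generation in range(100):
    f = fitness(pop)
    # Tournament selection: keep the better of two randomly drawn individuals.
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # Uniform crossover between consecutive parents.
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # Gaussian mutation.
    children += rng.normal(0, 0.1, pop.shape)
    pop = children

print(pop[fitness(pop).argmax()])       # best individual, close to the origin
```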

  7. An advanced algorithm for fetal heart rate estimation from non-invasive low electrode density recordings

    International Nuclear Information System (INIS)

    Dessì, Alessia; Pani, Danilo; Raffo, Luigi

    2014-01-01

    Non-invasive fetal electrocardiography is still an open research issue. The recent publication of an annotated dataset on Physionet providing four-channel non-invasive abdominal ECG traces promoted an international challenge on the topic. Starting from that dataset, an algorithm for the identification of the fetal QRS complexes from a reduced number of electrodes and without any a priori information about the electrode positioning has been developed, placing among the top ten best-performing open-source algorithms presented at the challenge. In this paper, an improved version of that algorithm is presented and evaluated exploiting the same challenge metrics. It is mainly based on the subtraction of the maternal QRS complexes in every lead, obtained by synchronized averaging of morphologically similar complexes, the filtering of the maternal P and T waves and the enhancement of the fetal QRS through independent component analysis (ICA) applied on the processed signals before a final fetal QRS detection stage. The RR time series of both the mother and the fetus are analyzed to enhance pseudoperiodicity with the aim of correcting wrong annotations. The algorithm has been designed and extensively evaluated on the open dataset A (N = 75), and finally evaluated on datasets B (N = 100) and C (N = 272) to obtain mean scores over data not used during the algorithm development. Compared to the results achieved by the previous version of the algorithm, the current version would mark the 5th and 4th positions in the final ranking related to events 1 and 2, reserved for the open-source challenge entries, taking into account both official and unofficial entrants. On dataset A, the algorithm achieves 0.982 median sensitivity and 0.976 median positive predictivity. (paper)
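
    The synchronized-averaging and template-subtraction step for the maternal QRS complexes can be illustrated with the sketch below; the signal, sampling rate, and R-peak locations are synthetic placeholders, not the challenge data or the authors' code.

```python
# Sketch of maternal QRS template subtraction by synchronized averaging.
# Synthetic placeholder data; not the authors' algorithm or the challenge dataset.
import numpy as np

def subtract_maternal_qrs(signal, r_peaks, half_width=40):
    """Average beats around the maternal R peaks and subtract the template."""
    windows = [(r - half_width, r + half_width) for r in r_peaks
               if r - half_width >= 0 and r + half_width <= len(signal)]
    template = np.mean([signal[a:b] for a, b in windows], axis=0)
    residual = signal.copy()
    for a, b in windows:
        residual[a:b] -= template          # what remains contains the fetal ECG
    return residual

rng = np.random.default_rng(0)
fs, n = 250, 2500                           # assumed sampling rate and record length
signal = rng.normal(0, 0.05, n)
r_peaks = np.arange(100, n, 200)            # assumed maternal R-peak positions
for r in r_peaks:                           # add a crude maternal "QRS" spike
    signal[r - 2:r + 3] += np.array([0.2, 0.8, 1.5, 0.8, 0.2])
residual = subtract_maternal_qrs(signal, r_peaks)
print(np.abs(signal[r_peaks]).mean(), np.abs(residual[r_peaks]).mean())
```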

  8. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    Science.gov (United States)

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as that given by the centralized clustering algorithms.
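
    In a centralized setting, the k-means++ seeding mentioned above can be sketched as follows; the data and the number of clusters are placeholders, and the consensus-based distributed variant of the paper is not reproduced.

```python
# Centralized k-means++ seeding: each new centroid is sampled with probability
# proportional to the squared distance to the nearest centroid chosen so far.
# Sketch only; the consensus-based distributed variant is not reproduced here.
import numpy as np

def kmeans_pp_init(X, k, rng=np.random.default_rng(0)):
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
        probs = d2 / d2.sum()
        centroids.append(X[rng.choice(len(X), p=probs)])
    return np.array(centroids)

X = np.random.default_rng(1).normal(size=(200, 2))   # placeholder sensor data
print(kmeans_pp_init(X, k=3))
```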

  9. Assessing the effectiveness of Landsat 8 chlorophyll a retrieval algorithms for regional freshwater monitoring.

    Science.gov (United States)

    Boucher, Jonah; Weathers, Kathleen C; Norouzi, Hamid; Steele, Bethel

    2018-06-01

    Predicting algal blooms has become a priority for scientists, municipalities, businesses, and citizens. Remote sensing offers solutions to the spatial and temporal challenges facing existing lake research and monitoring programs that rely primarily on high-investment, in situ measurements. Techniques to remotely measure chlorophyll a (chl a) as a proxy for algal biomass have been limited to specific large water bodies in particular seasons and narrow chl a ranges. Thus, a first step toward prediction of algal blooms is generating regionally robust algorithms using in situ and remote sensing data. This study explores the relationship between in-lake measured chl a data from lakes in Maine and New Hampshire, USA, and remotely sensed chl a retrieval algorithm outputs. Landsat 8 images were obtained and then processed after required atmospheric and radiometric corrections. Six previously developed algorithms were tested on a regional scale on 11 scenes from 2013 to 2015 covering 192 lakes. The best performing algorithm across data from both states had a correlation coefficient (R²) of 0.16 and P ≤ 0.05 when Landsat 8 images were acquired within 5 d of sampling, and improved to an R² of 0.25 when data from Maine only were used. The strength of the correlation varied with the specificity of the time window in relation to the in situ sampling date, explaining up to 27% of the variation in the data across several scenes. Two previously published algorithms using Landsat 8's Bands 1-4 were best correlated with chl a, and for particular late-summer scenes, they accounted for up to 69% of the variation in in situ measurements. A sensitivity analysis revealed that a longer time difference between in situ measurements and the satellite image increased uncertainty in the models, and an effect of the time of year on several indices was demonstrated. A regional model based on the best performing remote sensing algorithm was developed and was validated using independent in situ measurements and satellite data.

  10. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    Science.gov (United States)

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms.

  11. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  12. Development of a computational algorithm for the linearization of decay and transmutation chains

    International Nuclear Information System (INIS)

    Cruz L, C. A.; Francois L, J. L.

    2017-09-01

    One of the most used methodologies to solve the Bateman equations in the burnup problem is the TTA (Transmutation Trajectory Analysis) method. In this method, a decay network is broken down into linear elements known as trajectories, through a process known as linearization. In this work an alternative algorithm to find and construct these trajectories is presented, which considers three aspects of linearization: the a priori information about the elements that make up the decay and transmutation network, the use of a new notation, and the use of string-handling functions (which are common in most programming languages). One of the main advantages of the algorithm is that it can condense the information of a decay and transmutation network into only two vectors. From these it is possible to determine how many linear chains can be extracted from the network and even their length (in the case they are not cyclical). Unlike the Depth First Search method, which is widely used for the linearization process, the method proposed in the present work does not have a backtracking routine; instead it uses a compilation process, completing chain fragments rather than going back to the beginning of the trajectories. The developed algorithm can be applied in a general way to information search and to the linearization of the computational data structures known as trees. It can also be applied to engineering problems where one seeks to calculate the concentration of some substance as a function of time, starting from linear balance differential equations. (Author)
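
    For contrast, the conventional depth-first linearization that the abstract compares against can be sketched as follows; the decay/transmutation network below is a made-up example, not real nuclide data.

```python
# Depth-first linearization of a decay/transmutation network into linear chains.
# Sketch of the conventional approach the abstract contrasts against; the
# network below is a made-up example, not real decay data.
def linearize(network, start):
    chains = []
    def dfs(node, path):
        daughters = network.get(node, [])
        if not daughters:
            chains.append(path)           # leaf reached: one complete linear chain
            return
        for d in daughters:
            dfs(d, path + [d])
        # (cyclic networks would need a visited check; omitted in this sketch)
    dfs(start, [start])
    return chains

network = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}   # hypothetical nuclides
for chain in linearize(network, "A"):
    print(" -> ".join(chain))
# A -> B -> D
# A -> C -> D
```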

  13. Analysis of longitudinal variations in North Pacific alkalinity to improve predictive algorithms

    Science.gov (United States)

    Fry, Claudia H.; Tyrrell, Toby; Achterberg, Eric P.

    2016-10-01

    The causes of natural variation in alkalinity in the North Pacific surface ocean need to be investigated to understand the carbon cycle and to improve predictive algorithms. We used GLODAPv2 to test hypotheses on the causes of three longitudinal phenomena in Alk*, a tracer of calcium carbonate cycling. These phenomena are (a) an increase from east to west between 45°N and 55°N, (b) an increase from west to east between 25°N and 40°N, and (c) a minor increase from west to east in the equatorial upwelling region. Between 45°N and 55°N, Alk* is higher on the western than on the eastern side, and this is associated with denser isopycnals with higher Alk* lying at shallower depths. Between 25°N and 40°N, upwelling along the North American continental shelf causes higher Alk* in the east. Along the equator, a strong east-west trend was not observed, even though the upwelling on the eastern side of the basin is more intense, because the water brought to the surface is not high in Alk*. We created two algorithms to predict alkalinity, one for the entire Pacific Ocean north of 30°S and one for the eastern margin. The Pacific Ocean algorithm is more accurate than the commonly used algorithm published by Lee et al. (2006), of similar accuracy to the best previously published algorithm by Sasse et al. (2013), and is less biased with longitude than other algorithms in the subpolar North Pacific. Our eastern margin algorithm is more accurate than previously published algorithms.

  14. Development of a Grapevine Pruning Algorithm for Using in Pruning

    Directory of Open Access Journals (Sweden)

    S. M Hosseini

    2017-10-01

    Full Text Available Introduction Great areas of the orchards in the world are dedicated to cultivation of the grapevine. Normally grape vineyards are pruned twice a year. Among the operations of grape production, winter pruning of the bushes is the only operation that still has not been fully mechanized, while it is known as one of the most laborious jobs on the farm. Some of the grape producing countries use various mechanical machines to prune the grapevines, but in most cases these machines do not have a good performance. Therefore an intelligent pruning machine seems to be necessary in this regard, and such a machine can reduce the labor required to prune the vineyards. In this study it was attempted to develop an algorithm that uses image processing techniques to identify which parts of the grapevine should be cut. The stereo vision technique was used to obtain three dimensional images from the bare bushes whose leaves had fallen in autumn. Stereo vision systems determine depth from two images taken at the same time but from slightly different viewpoints using two cameras. Each pair of images of a common scene is related by an epipolar geometry, and corresponding points in the image pairs are constrained to lie on pairs of conjugate epipolar lines. Materials and Methods Photos were taken in gardens of the Research Center for Agriculture and Natural Resources of Fars province, Iran. At first, the distance between the plants and the cameras should be determined. This distance can be obtained by using stereo vision techniques; therefore, this method was used in this paper, with two pictures taken of each plant by the left and right cameras. The algorithm was written in MATLAB. To facilitate the segmentation of the branches from the rows at the back, a blue plate with dimensions of 2×2 m² was used as the background. After loading the images, branches were segmented from the background to produce binary images.
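
    The depth-from-stereo step described above can be approximated with OpenCV's block matcher, as in the sketch below; the image file names and calibration constants are assumptions, and this is not the authors' MATLAB code.

```python
# Stereo depth sketch using OpenCV block matching (not the authors' MATLAB code).
# left.png / right.png are placeholder rectified images; calibration values are assumed.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # fixed-point output

focal_length_px, baseline_m = 800.0, 0.12     # assumed camera calibration
depth = np.full(disparity.shape, np.nan, dtype=np.float32)
valid = disparity > 0
depth[valid] = focal_length_px * baseline_m / disparity[valid]      # depth = f * B / d
```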

  15. Quantitative x-ray photoelectron spectroscopy: Simple algorithm to determine the amount of atoms in the outermost few nanometers

    International Nuclear Information System (INIS)

    Tougaard, Sven

    2003-01-01

    It is well known that due to inelastic electron scattering, the measured x-ray photoelectron spectroscopy peak intensity depends strongly on the in-depth atom distribution. Quantification based only on the peak intensity can therefore give large errors. The problem was basically solved by developing algorithms for the detailed analysis of the energy distribution of emitted electrons. These algorithms have been extensively tested experimentally and found to be able to determine the depth distribution of atoms with nanometer resolution. Practical application of these algorithms has increased after ready-to-use software packages were made available and they are now being used in laboratories worldwide. These software packages are easy to use but they need operator interaction. They are not well suited for automatic data processing and there is an additional need for simplified quantification strategies that can be automated. In this article we report on a very simple algorithm. It is a slightly more accurate version of our previous algorithm. The algorithm gives the amount of atoms within the outermost three inelastic mean free paths and it also gives a rough estimate for the in-depth distribution. An experimental example of its application is also presented

  16. Energy demand forecasting in Iranian metal industry using linear and nonlinear models based on evolutionary algorithms

    International Nuclear Information System (INIS)

    Piltan, Mehdi; Shiri, Hiva; Ghaderi, S.F.

    2012-01-01

    Highlights: ► Investigating different fitness functions for evolutionary algorithms in energy forecasting. ► Energy forecasting of the Iranian metal industry by value added, energy prices, investment and employees. ► Using a real-coded instead of a binary-coded genetic algorithm decreases energy forecasting error. - Abstract: Developing energy-forecasting models is known as one of the most important steps in long-term planning. In order to achieve sustainable energy supply toward economic development and social welfare, precise forecasting models are required. The application of artificial intelligence models for estimating complex economic and social functions has grown considerably in recent research. In this paper, energy consumption in the industrial sector, one of the critical energy-consuming sectors, has been investigated. Two linear and three nonlinear functions have been used in order to forecast and analyze energy in the Iranian metal industry; Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) are applied to obtain the parameters of the models. The Real-Coded Genetic Algorithm (RCGA) has been developed based on real numbers, which is introduced as a new approach in the field of energy forecasting. In the proposed model, electricity consumption has been considered as a function of different variables such as electricity tariff, manufacturing value added, prevailing fuel prices, the number of employees, the investment in equipment and consumption in the previous years. Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Deviation (MAD) and Mean Absolute Percent Error (MAPE) are the four functions which have been used as the fitness function in the evolutionary algorithms. The results show that the logarithmic nonlinear model using the PSO algorithm, with an error of 1.91%, gives the best results. Furthermore, the prediction of electricity consumption in the industrial sector of Turkey and also the Turkish industrial sector
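
    The four fitness functions named in the abstract have standard definitions, sketched below with illustrative numbers.

```python
# Standard definitions of the four fitness functions named in the abstract.
import numpy as np

def mse(y, yhat):
    return np.mean((y - yhat) ** 2)

def rmse(y, yhat):
    return np.sqrt(mse(y, yhat))

def mad(y, yhat):
    return np.mean(np.abs(y - yhat))

def mape(y, yhat):
    return 100.0 * np.mean(np.abs((y - yhat) / y))

y = np.array([100.0, 120.0, 150.0])      # illustrative consumption values
yhat = np.array([98.0, 125.0, 140.0])
print(mse(y, yhat), rmse(y, yhat), mad(y, yhat), mape(y, yhat))
```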

  17. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    Science.gov (United States)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n²) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inverses in O(log₂ n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log₂ n) for each iteration. Simulation results for a seven degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
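
    A serial sketch of a diagonally (Jacobi) preconditioned conjugate gradient solver is given below; the parallel O(log₂ n) per-iteration scheme of the paper is not reproduced, and the small test system is an assumption.

```python
# Serial Jacobi-preconditioned conjugate gradient for a positive-definite system.
# Sketch only; the parallel per-iteration scheme of the paper is not shown.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=None):
    n = len(b)
    M_inv = 1.0 / np.diag(A)            # diagonal (Jacobi) preconditioner
    x = np.zeros(n)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example system
b = np.array([1.0, 2.0])
print(pcg(A, b))                          # approx. [0.0909, 0.6364]
```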

  18. Advancements to the planogram frequency–distance rebinning algorithm

    International Nuclear Information System (INIS)

    Champley, Kyle M; Kinahan, Paul E; Raylman, Raymond R

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction) and PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction)

  19. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  20. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Duane, S.; Kogut, J.B.

    1986-01-01

    The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)
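
    A minimal Hamiltonian (hybrid) Monte Carlo update on a one-dimensional Gaussian target is sketched below as a simple member of this family of algorithms; the Metropolis accept/reject step shown here is what removes the time-step systematic error discussed in the abstract, and the step size and trajectory length are assumptions.

```python
# Minimal hybrid (Hamiltonian) Monte Carlo update for a 1-D Gaussian target.
# Sketch only; lattice gauge theory with dynamical fermions is far beyond this toy.
import numpy as np

rng = np.random.default_rng(0)

def U(q):      return 0.5 * q ** 2          # potential: -log of a unit Gaussian
def grad_U(q): return q

def hmc_step(q, step=0.1, n_leapfrog=20):
    p = rng.normal()                        # refresh the momentum
    q_new, p_new = q, p
    for _ in range(n_leapfrog):             # leapfrog integration: the finite time
        p_new -= 0.5 * step * grad_U(q_new) # step is the source of systematic error
        q_new += step * p_new
        p_new -= 0.5 * step * grad_U(q_new)
    # Metropolis accept/reject removes the remaining discretization bias.
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
    return q_new if rng.random() < np.exp(-dH) else q

samples, q = [], 0.0
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples), np.var(samples))    # near 0 and 1 for the unit Gaussian
```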

  1. Efficient sequential and parallel algorithms for finding edit distance based motifs.

    Science.gov (United States)

    Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar

    2016-08-18

    Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable and there is a pressing need to develop efficient, exact and approximation algorithms to solve this problem. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l,d and n biological strings, find all strings of length l that appear in each input string with at most d errors of types substitution, insertion and deletion. One popular technique to solve the problem is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string and output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm in a multi-core shared memory setting uses arrays for storing and a novel modification of radix-sort for sorting the candidate motifs. The algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and, in an estimated 3 days, solves up to instance (16,3). Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), (13,4), our algorithms are much faster. Our parallel algorithm achieves more than 600% scaling performance while using 16 threads. Our algorithms have pushed up the state-of-the-art of EMS solvers and we believe that the techniques introduced in
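
    As background, a plain dynamic-programming edit distance (substitutions, insertions, deletions) and a naive occurrence check can be sketched as follows; the compact wildcard-based neighborhood exploration of the paper is not reproduced.

```python
# Plain dynamic-programming edit distance (substitution, insertion, deletion),
# used here only to test whether an l-mer lies within distance d of a substring.
# Sketch only; the wildcard-based neighborhood exploration of the paper is not shown.
def edit_distance(a, b):
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = min(dp[j] + 1,                      # deletion
                      dp[j - 1] + 1,                  # insertion
                      prev + (a[i - 1] != b[j - 1]))  # substitution / match
            prev, dp[j] = dp[j], cur
    return dp[n]

def occurs(motif, s, d):
    """True if some substring of s is within edit distance d of the motif."""
    l = len(motif)
    return any(edit_distance(motif, s[i:i + w]) <= d
               for i in range(len(s))
               for w in range(max(1, l - d), l + d + 1))

print(edit_distance("ACGTACGT", "ACGAACGT"))   # 1
print(occurs("ACGT", "TTACGATT", d=1))         # True (e.g. "ACGA" at offset 2)
```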

  2. Analysis and Improvement of Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Xi-Guang Li

    2017-02-01

    Full Text Available The Fireworks Algorithm is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on the analysis of each operator of the Fireworks Algorithm (FWA), this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm aims to further boost performance and achieve global optimization, mainly through the following strategies. Firstly, the population is initialized using opposition-based learning. Secondly, a new explosion amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation for non-optimal individuals and elite opposition-based learning for the optimal individual are used. Finally, a new selection strategy, namely Disruptive Selection, is proposed to reduce the running time of the algorithm compared with FWA. In our simulation, we apply the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions.
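
    The opposition-based learning initialization mentioned as the first strategy can be sketched as follows; the bounds, population size and test function are illustrative assumptions.

```python
# Opposition-based learning initialization: for each random candidate x,
# also evaluate its "opposite" lo + hi - x and keep the better half.
# Illustrative sketch; bounds, sizes and the test function are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return np.sum(x ** 2, axis=1)

lo, hi, n, dim = -10.0, 10.0, 30, 5
pop = rng.uniform(lo, hi, size=(n, dim))
opp = lo + hi - pop                          # opposite population
both = np.vstack([pop, opp])
init = both[np.argsort(sphere(both))[:n]]    # keep the n fittest of the 2n points
print(sphere(init).mean())                   # lower than sphere(pop).mean()
```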

  3. InfoRoute: the CISMeF Context-specific Search Algorithm.

    Science.gov (United States)

    Merabti, Tayeb; Lelong, Romain; Darmoni, Stefan

    2015-01-01

    The aim of this paper was to present the practical InfoRoute algorithm and the applications developed by CISMeF to perform contextual information retrieval across multiple medical websites in different health domains. The algorithm was developed to handle multiple types of queries: natural, Boolean and advanced. The algorithm also generates multiple types of queries: Boolean queries, PubMed queries or advanced queries. Each query can be extended via inter-alignment relationships from UMLS and the HeTOP portal. A web service and two web applications have been developed based on the InfoRoute algorithm to generate query links across multiple websites, i.e.: "PubMed" or "ClinicalTrials.org". The InfoRoute algorithm is a useful tool to perform contextual information retrieval across multiple medical websites in both English and French.

  4. Modified automatic term selection v2: A faster algorithm to calculate inelastic scattering cross-sections

    Energy Technology Data Exchange (ETDEWEB)

    Rusz, Ján, E-mail: jan.rusz@fysik.uu.se

    2017-06-15

    Highlights: • New algorithm for calculating the double differential scattering cross-section. • Shows good convergence properties. • Outperforms the older MATS algorithm, particularly in zone axis calculations. - Abstract: We present a new algorithm for calculating the inelastic scattering cross-section for fast electrons. Compared to the previous Modified Automatic Term Selection (MATS) algorithm (Rusz et al. [18]), it has far better convergence properties in zone axis calculations and it allows the contributions of individual atoms to be identified. One can think of it as a blend of the MATS algorithm and a method described by Weickenmeier and Kohl [10].

  5. An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium

    Science.gov (United States)

    Palmer, Grant

    1987-01-01

    An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The accuracy, stability, and versatility of the algorithm have been promising.

  6. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  7. Evaluation of Static JavaScript Call Graph Algorithms

    NARCIS (Netherlands)

    J.-J. Dijkstra (Jorryt-Jan)

    2014-01-01

    This thesis consists of a replication study in which two algorithms to compute JavaScript call graphs have been implemented and evaluated. Existing IDE support for JavaScript is hampered due to the dynamic nature of the language. Previous studies partially solve call graph computation

  8. A bidirectional brain-machine interface algorithm that approximates arbitrary force-fields.

    Directory of Open Access Journals (Sweden)

    Alessandro Vato

    Full Text Available We examine bidirectional brain-machine interfaces that control external devices in a closed loop by decoding motor cortical activity to command the device and by encoding the state of the device by delivering electrical stimuli to sensory areas. Although it is possible to design this artificial sensory-motor interaction while maintaining two independent channels of communication, here we propose a rule that closes the loop between flows of sensory and motor information in a way that approximates a desired dynamical policy expressed as a field of forces acting upon the controlled external device. We previously developed a first implementation of this approach based on linear decoding of neural activity recorded from the motor cortex into a set of forces (a force field) applied to a point mass, and on encoding of the position of the point mass into patterns of electrical stimuli delivered to somatosensory areas. However, this previous algorithm had the limitation that it only worked in situations when the position-to-force map to be implemented is invertible. Here we overcome this limitation by developing a new non-linear form of the bidirectional interface that can approximate a virtually unlimited family of continuous fields. The new algorithm bases both the encoding of position information and the decoding of motor cortical activity on an explicit map between spike trains and the state space of the device computed with Multi-Dimensional Scaling. We present a detailed computational analysis of the performance of the interface and a validation of its robustness by using synthetic neural responses in a simulated sensory-motor loop.

  9. Capataz: a framework for distributing algorithms via the World Wide Web

    Directory of Open Access Journals (Sweden)

    Gonzalo J. Martínez

    2015-08-01

    Full Text Available In recent years, some scientists have embraced the distributed computing paradigm. As experiments and simulations demand ever more computing power, coordinating the efforts of many different processors is often the only reasonable resort. We developed an open-source distributed computing framework based on web technologies, and named it Capataz. Acting as an HTTP server, web browsers running on many different devices can connect to it to contribute to the execution of distributed algorithms written in Javascript. Capataz takes advantage of architectures with many cores using web workers. This paper presents an improvement in Capataz's usability and explains why it was needed. In previous experiments the total time of distributed algorithms proved to be susceptible to changes in the execution time of the jobs. The system now adapts by bundling jobs together if they are too simple. The computational experiment to test the solution is a brute force estimation of pi. The benchmark results show that by bundling jobs, the overall performance is greatly increased.

  10. Development of a BWR loading pattern design system based on modified genetic algorithms and knowledge

    International Nuclear Information System (INIS)

    Martin-del-Campo, Cecilia; Francois, Juan Luis; Avendano, Linda; Gonzalez, Mario

    2004-01-01

    An optimization system based on Genetic Algorithms (GAs), in combination with expert knowledge coded in heuristics rules, was developed for the design of optimized boiling water reactor (BWR) fuel loading patterns. The system was coded in a computer program named Loading Pattern Optimization System based on Genetic Algorithms, in which the optimization code uses GAs to select candidate solutions, and the core simulator code CM-PRESTO to evaluate them. A multi-objective function was built to maximize the cycle energy length while satisfying power and reactivity constraints used as BWR design parameters. Heuristic rules were applied to satisfy standard fuel management recommendations as the Control Cell Core and Low Leakage loading strategies, and octant symmetry. To test the system performance, an optimized cycle was designed and compared against an actual operating cycle of Laguna Verde Nuclear Power Plant, Unit I

  11. Effects of visualization on algorithm comprehension

    Science.gov (United States)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  12. Secure image encryption algorithm design using a novel chaos based S-Box

    International Nuclear Information System (INIS)

    Çavuşoğlu, Ünal; Kaçar, Sezgin; Pehlivan, Ihsan; Zengin, Ahmet

    2017-01-01

    Highlights: • A new chaotic system is developed for creating the S-Box and the image encryption algorithm. • A chaos based random number generator is designed with the help of the new chaotic system, and NIST tests are run on the generated random numbers to verify randomness. • A new S-Box design algorithm is developed to create the chaos based S-Box to be utilized in the encryption algorithm, and performance tests are made. • The newly developed S-Box based image encryption algorithm is introduced and an image encryption application is carried out. • To show the quality and strength of the encryption process, security analyses are performed and compared with the AES and chaos algorithms. - Abstract: In this study, an encryption algorithm that uses a chaos based S-Box is developed for secure and fast image encryption. First of all, a new chaotic system is developed for creating the S-Box and the image encryption algorithm. A chaos based random number generator is designed with the help of the new chaotic system. Then, NIST tests are run on the generated random numbers to verify randomness. A new S-Box design algorithm is developed to create the chaos based S-Box to be utilized in the encryption algorithm, and performance tests are made. As the next step, the newly developed S-Box based image encryption algorithm is introduced in detail. Finally, an image encryption application is carried out. To show the quality and strength of the encryption process, security analyses are performed. The proposed algorithm is compared with the AES and chaos algorithms. According to the test results, the proposed image encryption algorithm is secure and fast for image encryption applications.
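
    A toy chaos-driven S-Box construction is sketched below; it uses a logistic map as a stand-in for the authors' new chaotic system, so it illustrates the general idea only.

```python
# Toy chaos-driven S-Box: iterate a logistic map and record the order in which
# the 256 byte values first appear. The logistic map stands in for the paper's
# new chaotic system, so this is an illustration of the idea only.
import numpy as np

def chaotic_sbox(x0=0.7, r=3.99):
    sbox, seen, x = [], set(), x0
    while len(sbox) < 256:
        x = r * x * (1.0 - x)                # logistic map iteration
        v = int(x * 256) % 256               # map the chaotic state to a byte
        if v not in seen:
            seen.add(v)
            sbox.append(v)
    assert sorted(sbox) == list(range(256))  # the S-Box is a bijection on bytes
    return np.array(sbox, dtype=np.uint8)

sbox = chaotic_sbox()
print(sbox[:16])
```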

  13. CAT-PUMA: CME Arrival Time Prediction Using Machine learning Algorithms

    Science.gov (United States)

    Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert

    2018-04-01

    CAT-PUMA (CME Arrival Time Prediction Using Machine learning Algorithms) quickly and accurately predicts the arrival time of Coronal Mass Ejections (CMEs). The software was trained via detailed analysis of CME features and solar wind parameters using 182 previously observed geo-effective partial-/full-halo CMEs, and uses Support Vector Machine (SVM) algorithms to make its predictions, which can be obtained within minutes of providing the necessary input parameters of a CME.
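
    As a rough indication of the underlying approach, and not the CAT-PUMA code itself, an SVM regression on CME features could be set up with scikit-learn as follows; the features and data are placeholders.

```python
# Rough outline of an SVM-based arrival-time prediction, in the spirit of
# CAT-PUMA but not its actual code; features and data here are placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical features: CME speed, angular width, solar wind speed, Bz, ...
X = rng.normal(size=(182, 4))
y = 40 + 5 * X[:, 0] + rng.normal(scale=2, size=182)   # fake transit times (hours)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], y[:150])
print(model.predict(X[150:155]))    # predicted arrival times for held-out events
```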

  14. Petri nets SM-cover-based on heuristic coloring algorithm

    Science.gov (United States)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In the paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The presented algorithm reduces the Petri net in order to reduce the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition, modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.

  15. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    Science.gov (United States)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires a stage that identifies the most appropriate number of mixture components, so that the obtained mixture model fits the data through a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) is a combination of the reversible jump (RJ) concept and the Markov Chain Monte Carlo (MCMC) concept, used by some researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The development of the RJMCMC algorithm needs to be done according to the observed case. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the number of mixture components, when that number is not known with certainty, in Bayesian mixture modeling for microarray data in Indonesia. The results of this study show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model for the Indonesian microarray data, where the number of mixture components is not known in advance.

  16. Transmission dose estimation algorithm for in vivo dosimetry

    International Nuclear Information System (INIS)

    Yun, Hyong Geun; Shin, Kyo Chul; Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2002-01-01

    Measurement of transmission dose is useful for in vivo dosimetry for QA purposes. The objective of this study is to develop an algorithm for estimating the tumor dose using the measured transmission dose for open radiation fields. Transmission dose was measured with various field sizes (FS), phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. The source to chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer type ion chamber. Using the measured data and regression analysis, an algorithm was developed for estimating the expected transmission dose reading. The accuracy of the algorithm was tested with a flat solid phantom in various settings. The algorithm consisted of a quadratic function of log(A/P) (where A/P is the area-perimeter ratio) and a third-order function of PCD. The algorithm could estimate dose with very high accuracy for open square fields, with errors within ±0.5%. For elongated radiation fields, the errors were limited to ±1.0%. The developed algorithm can accurately estimate the transmission dose in open radiation fields with various treatment settings
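
    The stated functional form, a quadratic in log(A/P) combined with a third-order term in PCD, could be fitted as in the illustrative sketch below; the data and coefficients are placeholders, not the published calibration.

```python
# Illustrative fit of the stated functional form: a quadratic in log(A/P)
# combined with a cubic polynomial in PCD. Data and coefficients are
# placeholders, not the published calibration.
import numpy as np

def design_matrix(log_ap, pcd):
    return np.column_stack([
        np.ones_like(log_ap), log_ap, log_ap ** 2,   # quadratic in log(A/P)
        pcd, pcd ** 2, pcd ** 3,                     # cubic in PCD
    ])

rng = np.random.default_rng(0)
log_ap = rng.uniform(0.2, 1.5, 100)       # hypothetical log(A/P) values
pcd = rng.uniform(10.0, 50.0, 100)        # hypothetical phantom-chamber distances (cm)
reading = 1.0 + 0.3 * log_ap + 0.01 * pcd + rng.normal(0, 0.01, 100)

coef, *_ = np.linalg.lstsq(design_matrix(log_ap, pcd), reading, rcond=None)
predicted = design_matrix(log_ap, pcd) @ coef
print(np.max(np.abs(predicted - reading) / reading) * 100, "% max error")
```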

  17. Transmission dose estimation algorithm for in vivo dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Hyong Geun; Shin, Kyo Chul [Dankook Univ., Seoul (Korea, Republic of); Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan [Seoul National Univ., Seoul (Korea, Republic of); Lee, Hyoung Koo [Catholic Univ., Seoul (Korea, Republic of)

    2002-07-01

    Measurement of transmission dose is useful for in vivo dosimetry for QA purposes. The objective of this study is to develop an algorithm for estimating the tumor dose using the measured transmission dose for open radiation fields. Transmission dose was measured with various field sizes (FS), phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. The source to chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer type ion chamber. Using the measured data and regression analysis, an algorithm was developed for estimating the expected transmission dose reading. The accuracy of the algorithm was tested with a flat solid phantom in various settings. The algorithm consisted of a quadratic function of log(A/P) (where A/P is the area-perimeter ratio) and a third-order function of PCD. The algorithm could estimate dose with very high accuracy for open square fields, with errors within ±0.5%. For elongated radiation fields, the errors were limited to ±1.0%. The developed algorithm can accurately estimate the transmission dose in open radiation fields with various treatment settings.

  18. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.

  19. An efficient quantum algorithm for spectral estimation

    Science.gov (United States)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.

  20. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...