WorldWideScience

Sample records for previously developed algorithms

  1. Improvements to previous algorithms to predict gene structure and isoform concentrations using Affymetrix Exon arrays

    Directory of Open Access Journals (Sweden)

    Aramburu Ander

    2010-11-01

    Full Text Available Abstract Background Exon arrays provide a way to measure the expression of different isoforms of genes in an organism. Most of the procedures to deal with these arrays are focused on gene expression or on exon expression. Although the only biological analytes that can be properly assigned a concentration are transcripts, there are very few algorithms that focus on them. The reason is that previously developed summarization methods do not work well if applied to transcripts. In addition, gene structure prediction, i.e., the correspondence between probes and novel isoforms, is a field which is still unexplored. Results We have modified and adapted a previous algorithm to take advantage of the special characteristics of the Affymetrix exon arrays. The structure and concentration of transcripts -some of them possibly unknown- in microarray experiments were predicted using this algorithm. Simulations showed that the suggested modifications improved both specificity (SP) and sensitivity (ST) of the predictions. The algorithm was also applied to different real datasets, showing its effectiveness and the concordance with PCR-validated results. Conclusions The proposed algorithm shows a substantial improvement in performance over the previous version. This improvement is mainly due to the exploitation of the redundancy of the Affymetrix exon arrays. An R package of SPACE with the updated algorithms has been developed and is freely available.
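
    The SPACE family of algorithms referred to above factors a probe-by-sample intensity matrix into a probe-by-isoform structure matrix and an isoform-by-sample concentration matrix. The sketch below shows that kind of non-negative matrix factorization on synthetic data; the matrix sizes, the rank choice and the use of scikit-learn are illustrative assumptions, not the R package's actual implementation.

```python
# Minimal sketch: estimate isoform structure and concentrations from probe
# intensities via non-negative matrix factorization (illustrative only).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_probes, n_samples, n_isoforms = 40, 12, 3          # assumed sizes

# Synthetic probe-by-sample expression matrix (stand-in for exon-array data).
structure = (rng.random((n_probes, n_isoforms)) < 0.4).astype(float)  # probe membership
conc = rng.gamma(2.0, 5.0, size=(n_isoforms, n_samples))              # isoform concentrations
noise = np.abs(rng.normal(0.0, 0.1, (n_probes, n_samples)))
Y = structure @ conc + noise

# Factor Y ~ W @ H: W ~ gene structure (probes x isoforms), H ~ concentrations.
model = NMF(n_components=n_isoforms, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(Y)   # estimated probe-to-isoform affinities
H = model.components_        # estimated per-sample isoform concentrations
print(W.shape, H.shape)      # (40, 3) (3, 12)
```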

  2. Developing Scoring Algorithms

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  3. To develop a universal gamut mapping algorithm

    International Nuclear Information System (INIS)

    Morovic, J.

    1998-10-01

    When a colour image from one colour reproduction medium (e.g. nature, a monitor) needs to be reproduced on another (e.g. on a monitor or in print) and these media have different colour ranges (gamuts), it is necessary to have a method for mapping between them. If such a gamut mapping algorithm can be used under a wide range of conditions, it can also be incorporated in an automated colour reproduction system and considered to be in some sense universal. In terms of preliminary work, a colour reproduction system was implemented, for which a new printer characterisation model (including grey-scale correction) was developed. Methods were also developed for calculating gamut boundary descriptors and for calculating gamut boundaries along given lines from them. The gamut mapping solution proposed in this thesis is a gamut compression algorithm developed with the aim of being accurate and universally applicable. It was arrived at by way of an evolutionary gamut mapping development strategy for the purposes of which five test images were reproduced between a CRT and printed media obtained using an inkjet printer. Initially, a number of previously published algorithms were chosen and psychophysically evaluated whereby an important characteristic of this evaluation was that it also considered the performance of algorithms for individual colour regions within the test images used. New algorithms were then developed on their basis, subsequently evaluated and this process was repeated once more. In this series of experiments the new GCUSP algorithm, which consists of a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp on the lightness axis, gave the most accurate and stable performance overall. The results of these experiments were also useful for improving the understanding of some gamut mapping factors - in particular gamut difference. In addition to looking at accuracy, the pleasantness of reproductions obtained
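
    The GCUSP description above (a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp) can be made concrete with a rough sketch in CIE LCh coordinates; the particular compression functions, the toy gamut boundary and the linear scaling used here are simplifying assumptions, not the thesis's exact formulation.

```python
# Illustrative sketch of a GCUSP-style mapping in CIE LCh coordinates.
# The compression functions and the toy gamut boundary are assumptions.

def lightness_compress(L, C, L_min_dst, L_max_dst, C_max=100.0):
    """Chroma-dependent lightness compression: high-chroma colours keep more
    of their original lightness, low-chroma colours are rescaled to the
    destination lightness range."""
    w = 1.0 - min(C / C_max, 1.0)                     # weight shrinks with chroma
    L_rescaled = L_min_dst + L * (L_max_dst - L_min_dst) / 100.0
    return (1.0 - w) * L + w * L_rescaled

def compress_towards_cusp(L, C, cusp_L, boundary_C):
    """Scale the colour linearly towards the cusp point (cusp_L, C=0); the
    chroma limit is taken at the original lightness as a simplification."""
    C_limit = boundary_C(L)
    if C <= C_limit:
        return L, C
    t = C_limit / C
    return cusp_L + t * (L - cusp_L), t * C

# Toy destination gamut for one hue slice: triangular, cusp at L=55, C=60.
cusp_L = 55.0
boundary = lambda L: max(0.0, 60.0 * (1.0 - abs(L - cusp_L) / 45.0))

L_src, C_src = 85.0, 70.0                             # an out-of-gamut source colour
L_tmp = lightness_compress(L_src, C_src, L_min_dst=10.0, L_max_dst=95.0)
print(compress_towards_cusp(L_tmp, C_src, cusp_L, boundary))
```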

  4. Algorithm Development Library for Environmental Satellite Missions

    Science.gov (United States)

    Smith, D. C.; Grant, K. D.; Miller, S. W.; Jamilkowski, M. L.

    2012-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, the Joint Polar Satellite System replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by the National Oceanic and Atmospheric Administration and the ground processing component of both Polar-orbiting Operational Environmental Satellites and the Defense Meteorological Satellite Program (DMSP) replacement, previously known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and an Interface Data Processing Segment (IDPS). Both segments are developed by Raytheon Intelligence and Information Systems (IIS). The C3S currently flies the Suomi National Polar Partnership (Suomi NPP) satellite and transfers mission data from Suomi NPP and between the ground facilities. The IDPS processes Suomi NPP satellite data to provide Environmental Data Records (EDRs) to NOAA and DoD processing centers operated by the United States government. When the JPSS-1 satellite is launched in early 2017, the responsibilities of the C3S and the IDPS will be expanded to support both Suomi NPP and JPSS-1. The EDRs for Suomi NPP are currently undergoing an extensive Calibration and Validation (Cal/Val) campaign. As Cal/Val proceeds, changes to the

  5. Developing Scoring Algorithms (Earlier Methods)

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  6. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed

  7. Critical function monitoring system algorithm development

    International Nuclear Information System (INIS)

    Harmon, D.L.

    1984-01-01

    Accurate critical function status information is a key to operator decision-making during events threatening nuclear power plant safety. The Critical Function Monitoring System provides continuous critical function status monitoring by use of algorithms which mathematically represent the processes by which an operating staff would determine critical function status. This paper discusses in detail the systematic design methodology employed to develop adequate Critical Function Monitoring System algorithms

  8. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  9. Development of a Thermal Equilibrium Prediction Algorithm

    International Nuclear Information System (INIS)

    Aviles-Ramos, Cuauhtemoc

    2002-01-01

    A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)
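
    As a much simpler stand-in for the approach described above (which fits the exact solution of two coupled heat-conduction equations), the sketch below fits a single-exponential approach-to-equilibrium model to the first 20% of a synthetic calorimeter trace and extrapolates the equilibrium voltage; the model form and the SciPy fitting call are illustrative assumptions.

```python
# Simplified stand-in for equilibrium prediction: fit a single-exponential
# approach-to-equilibrium model to the first 20% of the record and extrapolate.
# The actual algorithm uses the exact solution of two coupled heat-conduction
# PDEs; this sketch only illustrates the early-prediction idea.
import numpy as np
from scipy.optimize import curve_fit

def approach(t, v_eq, dv, tau):
    """Sensor voltage relaxing exponentially towards its equilibrium value."""
    return v_eq - dv * np.exp(-t / tau)

# Synthetic calorimeter trace (equilibrium at 5.0 "volts", tau = 3 h).
t = np.linspace(0, 24, 500)                       # hours
v = approach(t, 5.0, 4.0, 3.0) + np.random.default_rng(1).normal(0, 0.01, t.size)

n_early = t.size // 5                             # first 20% of the measurement
popt, _ = curve_fit(approach, t[:n_early], v[:n_early], p0=(v[n_early - 1], 1.0, 1.0))
print(f"predicted equilibrium: {popt[0]:.3f} V, final measured: {v[-1]:.3f} V")
```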

  10. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a block cipher: a strong, simple algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or for applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the generation process requires relatively little time and a reasonably low amount of memory, which enhances the algorithm and makes it suitable for different uses.
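
    Blowfish's standard key schedule is expensive because it fills the 18-entry P-array and four 256-entry S-boxes by repeatedly running the cipher itself (521 encryptions in total). The sketch below fills the same tables directly from a keystream, which is the general idea the paper pursues; SHA-256 in counter mode is used here only as a stand-in for the paper's self-synchronizing stream cipher and is not the algorithm proposed in the paper.

```python
# Sketch: fill Blowfish's 18 P-array entries and 4x256 S-box entries directly
# from a keystream, instead of the standard 521-encryption key schedule.
# SHA-256 in counter mode is only a stand-in for the paper's SSS cipher.
import hashlib
import struct

def keystream_words(key: bytes, n_words: int):
    """Return n_words 32-bit words derived from the key (illustrative PRF)."""
    out, counter = [], 0
    while len(out) < n_words:
        block = hashlib.sha256(key + struct.pack(">I", counter)).digest()
        out.extend(struct.unpack(">8I", block))   # eight 32-bit words per block
        counter += 1
    return out[:n_words]

def generate_tables(key: bytes):
    words = keystream_words(key, 18 + 4 * 256)
    p_array = words[:18]                                  # 18 subkeys
    s_boxes = [words[18 + i * 256: 18 + (i + 1) * 256]    # four 256-entry S-boxes
               for i in range(4)]
    return p_array, s_boxes

P, S = generate_tables(b"secret key")
print(len(P), [len(s) for s in S])   # 18 [256, 256, 256, 256]
```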

  11. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme

  12. Global Precipitation Measurement: GPM Microwave Imager (GMI) Algorithm Development Approach

    Science.gov (United States)

    Stocker, Erich Franz

    2009-01-01

    This slide presentation reviews the approach to the development of the Global Precipitation Measurement algorithm. This presentation includes information about the responsibilities for the development of the algorithm, and the calibration. Also included is information about the orbit, and the sun angle. The test of the algorithm code will be done with synthetic data generated from the Precipitation Processing System (PPS).

  13. Predictive factors for the development of diabetes in women with previous gestational diabetes mellitus

    DEFF Research Database (Denmark)

    Damm, P.; Kühl, C.; Bertelsen, Aksel

    1992-01-01

    OBJECTIVES: The purpose of this study was to determine the incidence of diabetes in women with previous dietary-treated gestational diabetes mellitus and to identify predictive factors for development of diabetes. STUDY DESIGN: Two to 11 years post partum, glucose tolerance was investigated in 241 women with previous dietary-treated gestational diabetes mellitus and 57 women without previous gestational diabetes mellitus (control group). RESULTS: Diabetes developed in 42 (17.4%) women with previous gestational diabetes mellitus (3.7% insulin-dependent diabetes mellitus and 13.7% non-insulin-dependent diabetes mellitus). Diabetes did not develop in any of the controls. Predictive factors for diabetes development were fasting glucose level at diagnosis (high glucose, high risk), preterm delivery, and an oral glucose tolerance test result that showed diabetes 2 months post partum. In a subgroup...

  14. Development of a versatile algorithm for optimization of radiation therapy

    International Nuclear Information System (INIS)

    Gustafsson, Anders.

    1996-12-01

    A flexible iterative gradient algorithm for radiation therapy optimization has been developed. The algorithm is based on dose calculation using the pencil-beam description of external radiation beams in uniform and heterogeneous patients. The properties of the algorithm are described, including its ability to treat variable bounds and linear constraints, its efficiency in gradient calculation, its convergence properties and termination criteria. 116 refs
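
    The abstract mentions an iterative gradient method that handles variable bounds. As a generic illustration only (not the thesis's algorithm), the sketch below runs a projected-gradient iteration on a bound-constrained quadratic dose objective; the dose-influence matrix, the objective and the step rule are assumptions.

```python
# Generic projected-gradient sketch for a bound-constrained dose objective:
# minimize ||D @ w - d_target||^2 subject to 0 <= w <= w_max.
# D, the objective and the step rule are illustrative assumptions, not the
# thesis's pencil-beam formulation.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 200, 30
D = rng.random((n_voxels, n_beamlets))          # stand-in dose-influence matrix
d_target = np.full(n_voxels, 10.0)              # prescribed dose per voxel
w_max = 5.0

w = np.zeros(n_beamlets)
L = 2.0 * np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
for _ in range(500):
    grad = 2.0 * D.T @ (D @ w - d_target)       # gradient of the squared residual
    w = np.clip(w - grad / L, 0.0, w_max)       # gradient step + projection on bounds
print("residual norm:", round(float(np.linalg.norm(D @ w - d_target)), 3))
```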

  15. 78 FR 35263 - Freeport LNG Development, L.P.; Application for Blanket Authorization To Export Previously...

    Science.gov (United States)

    2013-06-12

    ... the LNG at the time of export. The Application was filed under section 3 of the Natural Gas Act (NGA... not prohibited by U.S. law or policy. Current Application The current Application is filed in... Freeport LNG Development, L.P.; Application for Blanket Authorization To Export Previously Imported...

  16. Developing Reading Comprehension through Metacognitive Strategies: A Review of Previous Studies

    Science.gov (United States)

    Channa, Mansoor Ahmed; Nordin, Zaimuariffudin Shukri; Siming, Insaf Ali; Chandio, Ali Asgher; Koondher, Mansoor Ali

    2015-01-01

    This paper has reviewed the previous studies on metacognitive strategies based on planning, monitoring, and evaluating in order to develop reading comprehension. The main purpose of this review in metacognition, and reading domain is to help readers to enhance their capabilities and power reading through these strategies. The researchers reviewed…

  17. Mentoring to develop research self-efficacy, with particular reference to previously disadvantaged individuals

    OpenAIRE

    S. Schulze

    2010-01-01

    The development of inexperienced researchers is crucial. In response to the lack of research self-efficacy of many previously disadvantaged individuals, the article examines how mentoring can enhance the research self-efficacy of mentees. The study is grounded in the self-efficacy theory (SET) – an aspect of the social cognitive theory (SCT). Insights were gained from an in-depth study of SCT, SET and mentoring, and from a completed mentoring project. This led to the formulation of three basi...

  18. Algorithm Development for the Two-Fluid Plasma Model

    National Research Council Canada - National Science Library

    Shumlak, Uri

    2002-01-01

    A preliminary algorithm based on the two-fluid plasma model is developed to investigate the possibility of simulating plasmas with a more physically accurate model than the MHD (magnetohydrodynamic) model...

  19. Development of morphing algorithms for Histfactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, Histfactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses the information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.
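
    For context, the conventional piecewise-linear vertical morphing that HistFactory applies per histogram bin is written out below; the information-geometry interpolation presented in the talk is a different scheme and is not reproduced here.

```latex
% Conventional piecewise-linear vertical morphing of a template histogram,
% per bin b, as a function of a nuisance parameter \alpha (shown for context;
% the information-geometry interpolation of the talk is different):
f_b(\alpha) \;=\; f_b^{0} \;+\;
\begin{cases}
  \alpha \,\bigl(f_b^{+1\sigma} - f_b^{0}\bigr), & \alpha \ge 0,\\[4pt]
  \alpha \,\bigl(f_b^{0} - f_b^{-1\sigma}\bigr), & \alpha < 0,
\end{cases}
```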

  20. Development of a Novel Locomotion Algorithm for Snake Robot

    International Nuclear Information System (INIS)

    Khan, Raisuddin; Billah, Md Masum; Watanabe, Mitsuru; Shafie, A A

    2013-01-01

    A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine is one of the well-known gaits used by snake robots in disaster recovery missions to navigate narrow spaces. Other gaits for snake navigation, such as concertina or rectilinear, may also be suitable for narrow spaces but are highly inefficient if used in open spaces, where the resulting reduction in friction makes movement difficult for the snake. A novel locomotion algorithm has been proposed based on a modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic scales on the underside of the snake body. The snake robot is able to navigate in narrow spaces using the developed locomotion algorithm, which overcomes the limitations of the other gaits in narrow-space navigation

  1. Developer Tools for Evaluating Multi-Objective Algorithms

    Science.gov (United States)

    Giuliano, Mark E.; Johnston, Mark D.

    2011-01-01

    Multi-objective algorithms for scheduling offer many advantages over the more conventional single objective approach. By keeping user objectives separate instead of combined, more information is available to the end user to make trade-offs between competing objectives. Unlike single objective algorithms, which produce a single solution, multi-objective algorithms produce a set of solutions, called a Pareto surface, where no solution is strictly dominated by another solution for all objectives. From the end-user perspective a Pareto-surface provides a tool for reasoning about trade-offs between competing objectives. From the perspective of a software developer multi-objective algorithms provide an additional challenge. How can you tell if one multi-objective algorithm is better than another? This paper presents formal and visual tools for evaluating multi-objective algorithms and shows how the developer process of selecting an algorithm parallels the end-user process of selecting a solution for execution out of the Pareto-Surface.
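
    The dominance relation described above can be made concrete with a small non-dominated-filter sketch, assuming every objective is to be minimized; the objective values (unscheduled observations, total slew time) are invented for illustration.

```python
# Sketch: extract the Pareto (non-dominated) set from a list of candidate
# schedules, assuming every objective is to be minimized.
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy candidates: (unscheduled observations, total slew time)
candidates = [(3, 120.0), (5, 90.0), (3, 150.0), (4, 100.0), (6, 80.0)]
print(pareto_front(candidates))   # -> [(3, 120.0), (5, 90.0), (4, 100.0), (6, 80.0)]
```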

  2. B&W PWR advanced control system algorithm development

    International Nuclear Information System (INIS)

    Winks, R.W.; Wilson, T.L.; Amick, M.

    1992-01-01

    This paper discusses algorithm development of an Advanced Control System for the B&W Pressurized Water Reactor (PWR) nuclear power plant. The paper summarizes the history of the project, describes the operation of the algorithm, and presents transient results from a simulation of the plant and control system. The history discusses the steps in the development process and the roles played by the utility owners, B&W Nuclear Service Company (BWNS), Oak Ridge National Laboratory (ORNL), and the Foxboro Company. The algorithm description is a brief overview of the features of the control system. The transient results show operation of the algorithm in a normal power maneuvering mode and in a moderately large upset following a feedwater pump trip

  3. Mentoring to develop research self-efficacy, with particular reference to previously disadvantaged individuals

    Directory of Open Access Journals (Sweden)

    S. Schulze

    2010-07-01

    Full Text Available The development of inexperienced researchers is crucial. In response to the lack of research self-efficacy of many previously disadvantaged individuals, the article examines how mentoring can enhance the research self-efficacy of mentees. The study is grounded in the self-efficacy theory (SET) – an aspect of the social cognitive theory (SCT). Insights were gained from an in-depth study of SCT, SET and mentoring, and from a completed mentoring project. This led to the formulation of three basic principles. Firstly, institutions need to provide supportive environmental conditions that facilitate research self-efficacy. This implies a supportive and efficient collective system. The possible effects of performance ratings and reward systems at the institution also need to be considered. Secondly, mentoring needs to create opportunities for young researchers to experience successful learning as a result of appropriate action. To this end, mentees need to be involved in actual research projects in small groups. At the same time the mentor needs to facilitate skills development by coaching and encouragement. Thirdly, mentors need to encourage mentees to believe in their ability to successfully complete research projects. This implies encouraging positive emotional states, stimulating self-reflection and self-comparison with others in the group, giving positive evaluative feedback and being an intentional role model.

  4. Algorithms

    Indian Academy of Sciences (India)

    have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  5. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.

  6. A Developed Artificial Bee Colony Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Ye Jin

    2018-04-01

    Full Text Available The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is a kind of uncertainty conversion model between a qualitative concept T̃, expressed in natural language, and its quantitative expression; it integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima by introducing a new selection mechanism, replacing the onlooker bees’ search formula and changing the scout bees’ updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.

  7. Rate of torque and electromyographic development during anticipated eccentric contraction is lower in previously strained hamstrings.

    Science.gov (United States)

    Opar, David A; Williams, Morgan D; Timmins, Ryan G; Dear, Nuala M; Shield, Anthony J

    2013-01-01

    The effect of prior strain injury on myoelectrical activity of the hamstrings during tasks requiring high rates of torque development has received little attention. To determine if recreational athletes with a history of unilateral hamstring strain injury will exhibit lower levels of myoelectrical activity during eccentric contraction, rate of torque development (RTD), and impulse (IMP) at 30, 50, and 100 milliseconds after the onset of myoelectrical activity or torque development in the previously injured limb compared with the uninjured limb. Case control study; Level of evidence, 3. Twenty-six recreational athletes were recruited. Of these, 13 athletes had a history of unilateral hamstring strain injury (all confined to biceps femoris long head), and 13 had no history of hamstring strain injury. Following familiarization, all athletes undertook isokinetic dynamometry testing and surface electromyography (integrated EMG; iEMG) assessment of the biceps femoris long head and medial hamstrings during eccentric contractions at -60 and -180 deg·s(-1). In the injured limb of the injured group, compared with the contralateral uninjured limb, RTD and IMP was lower during -60 deg·s(-1) eccentric contractions at 50 milliseconds (RTD: injured limb, 312.27 ± 191.78 N·m·s(-1) vs uninjured limb, 518.54 ± 172.81 N·m·s(-1), P = .008; IMP: injured limb, 0.73 ± 0.30 N·m·s vs uninjured limb, 0.97 ± 0.23 N·m·s, P = .005) and 100 milliseconds (RTD: injured limb, 280.03 ± 131.42 N·m·s(-1) vs uninjured limb, 460.54 ± 152.94 N·m·s(-1), P = .001; IMP: injured limb, 2.15 ± 0.89 N·m·s vs uninjured limb, 3.07 ± 0.63 N·m·s, P contraction. Biceps femoris long head muscle activation was lower at 100 milliseconds at both contraction speeds (-60 deg·s(-1), normalized iEMG activity [×1000]: injured limb, 26.25 ± 10.11 vs uninjured limb, 33.57 ± 8.29, P = .009; -180 deg·s(-1), normalized iEMG activity [×1000]: injured limb, 31.16 ± 10.01 vs uninjured limb, 39.64

  8. Tactical weapons algorithm development for unitary and fused systems

    Science.gov (United States)

    Talele, Sunjay E.; Watson, John S.; Williams, Bradford D.; Amphay, Sengvieng A.

    1996-06-01

    A much needed capability in today's tactical Air Force is weapons systems capable of precision guidance in all weather conditions against targets in high clutter backgrounds. To achieve this capability, the Armament Directorate of Wright Laboratory, WL/MN, has been exploring various seeker technologies, including multi-sensor fusion, that may yield cost effective systems capable of operating under these conditions. A critical component of these seeker systems is their autonomous acquisition and tracking algorithms. It is these algorithms which will enable the autonomous operation of the weapons systems in the battlefield. In the past, a majority of the tactical weapon algorithms were developed in a manner which resulted in codes that were not releasable to the community, either because they were considered company proprietary or competition sensitive. As a result, the knowledge gained from these efforts was not transitioning through the technical community, thereby inhibiting the evolution of their development. In order to overcome this limitation, WL/MN has embarked upon a program to develop non-proprietary multi-sensor acquisition and tracking algorithms. To facilitate this development, a testbed has been constructed consisting of the Irma signature prediction model, data analysis workstations, and the modular algorithm concept evaluation tool (MACET) algorithm. All three of these components have been enhanced to accommodate both multi-spectral sensor fusion systems and the three-dimensional signal processing techniques characteristic of ladar. MACET is a graphical interface driven system for rapid prototyping and evaluation of both unitary and fused sensor algorithms. This paper describes the MACET system and specifically elaborates on the three-dimensional capabilities recently incorporated into it.

  9. DIDACTIC TOOLS FOR THE STUDENTS’ ALGORITHMIC THINKING DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. P. Pushkaryeva

    2017-01-01

    Full Text Available Introduction. Modern engineers must possess a high potential of cognitive abilities, in particular, algorithmic thinking (AT). In this regard, the training of future experts (university graduates of technical specialities) has to provide knowledge of the principles and ways of designing various algorithms, the ability to analyze them, and the ability to choose the most suitable variants for implementation in engineering activity. For full formation of AT skills it is necessary to consider all channels of psychological perception and cogitative processing of educational information: visual, auditory, and kinesthetic. The aim of the present research is the theoretical basis of the design, development and use of resources for the successful development of AT during the educational process of training in programming. Methodology and research methods. The methodology of the research involves the basic theses of cognitive psychology and the information approach to organizing the educational process. The research used the following methods: analysis; modeling of cognitive processes; designing training tools that take into account the mentality and peculiarities of information perception; and diagnostics of the efficiency of the didactic tools. Results. A three-level model for training future engineers in programming, aimed at the development of AT skills, was developed. The model includes three components: aesthetic, simulative, and conceptual. Stages of mastering a new discipline are allocated. It is shown that for the development of AT skills when training in programming it is necessary to use kinesthetic tools at the stage of mental algorithmic map formation, and algorithmic animation and algorithmic mental maps at the stage of algorithmic model and conceptual image formation. Kinesthetic tools for the development of students’ AT skills when training in algorithmization and programming are designed. The use of kinesthetic training simulators in the educational process provides effective development of the algorithmic style of

  10. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. [Figure 2: Symbols used in flowchart language to represent Assignment, Read and Print.]

  11. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) that is currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists that include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from an ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  12. Effect of previous exhaustive exercise on metabolism and fatigue development during intense exercise in humans

    DEFF Research Database (Denmark)

    Iaia, F. M.; Perez-Gomez, J.; Nordsborg, Nikolai

    2010-01-01

    The present study examined how metabolic response and work capacity are affected by previous exhaustive exercise. Seven subjects performed an exhaustive cycle exercise (approximately 130%-max; EX2) after warm-up (CON) and 2 min after an exhaustive bout at a very high (VH; approximately 30 s), high … during a repeated high-intensity exercise lasting 1/2-2 min.

  13. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields which require finding the most appropriate value, optimization has become a vital approach for reaching effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to get an idea of the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus 11th Edition, and the obtained results have been compared with classical optimization solutions.

  14. Development and validation of algorithms to identify acute diverticulitis.

    Science.gov (United States)

    Kawatkar, Aniket; Chu, Li-Hao; Iyer, Rajan; Yen, Linnette; Chen, Wansu; Erder, M Haim; Hodgkins, Paul; Longstreth, George

    2015-01-01

    The objectives of this study were to develop and validate algorithms to accurately identify patients with diverticulitis using electronic medical records (EMRs). Using Kaiser Permanente Southern California's EMRs of adults (≥18 years) with International Classification of Diseases, Clinical Modifications, Ninth Revision diagnosis codes of diverticulitis (562.11, 562.13) between 1 January 2008 and 31 August 2009, we generated random samples for pilot (N = 692) and validation (N = 1502) respectively. Both samples were stratified by inpatient (IP), emergency department (ED), and outpatient (OP) care settings. We developed and validated several algorithms using EMR data on diverticulitis diagnosis code, antibiotics, computed tomography, diverticulosis history, pain medication and/or pain diagnosis, and excluding patients with infections and/or conditions that could mimic diverticulitis. Evidence of diverticulitis was confirmed through manual chart review. Agreement between EMR algorithm and manual chart confirmation was evaluated using sensitivity and positive predictive value (PPV). Both samples were similar in socio-demographics and clinical symptoms. An algorithm based on diverticulitis diagnosis code with antibiotic prescription dispensed within 7 days of diagnosis date, performed well overall. In the validation sample, sensitivity and PPV were (84.6, 98.2%), (95.8, 98.1%), and (91.8, 82.6%) for OP, ED, and IP, respectively. Using antibiotic prescriptions to supplement diagnostic codes improved the accuracy of case identification for diverticulitis, but results varied by care setting. Copyright © 2014 John Wiley & Sons, Ltd.
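
    The best-performing rule described above (a diverticulitis diagnosis code plus an antibiotic dispensed within 7 days of the diagnosis date) can be sketched with pandas as follows; the column names, example rows and the simple antibiotic flag are assumptions for illustration, not the study's actual data model.

```python
# Sketch of the case-identification rule described above: a diverticulitis
# diagnosis code plus an antibiotic dispensed within 7 days of diagnosis.
# Column names and the example data are assumptions for illustration.
import pandas as pd

dx = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "dx_date": pd.to_datetime(["2008-03-01", "2008-05-10", "2009-01-20"]),
    "icd9": ["562.11", "562.13", "562.11"],
})
rx = pd.DataFrame({
    "patient_id": [1, 2],
    "fill_date": pd.to_datetime(["2008-03-03", "2008-06-30"]),
    "drug_class": ["antibiotic", "antibiotic"],
})

merged = dx.merge(rx[rx["drug_class"] == "antibiotic"], on="patient_id", how="left")
within_7d = (merged["fill_date"] - merged["dx_date"]).dt.days.between(0, 7)
cases = merged.loc[within_7d, "patient_id"].unique()
print(cases)   # -> [1]; patient 2's fill is outside the window, patient 3 has none
```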

  15. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  16. Developing and Implementing the Data Mining Algorithms in RAVEN

    International Nuclear Information System (INIS)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-01-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  17. Remote System for Development, Implementation and Testing of Control Algorithms

    Directory of Open Access Journals (Sweden)

    Milan Matijevic

    2007-02-01

    Full Text Available Education in the field of automatic control requires adequate practice on real systems for a better and fuller understanding of control theory. Experimenting on real models developed exclusively for the purpose of education and gaining the necessary experience is the most adequate approach, and traditionally it requires physical presence in the laboratories where the equipment is installed. Remote access to laboratories for control systems is a necessary precondition and support for the implementation of e-learning in the area of control engineering. The main feature of the developed system is support for the development, implementation and testing of user-defined control algorithms with a remote controller laboratory. Users can define a control algorithm in a conventional programming language and test it using this remote system.

  18. Synchronous development of breast cancer and chest wall fibrosarcoma after previous mantle radiation for Hodgkin's disease

    International Nuclear Information System (INIS)

    Patlas, Michael; McCready, David; Kulkarni, Supriya; Dill-Macky, Marcus J.

    2005-01-01

    Survivors of Hodgkin's disease are at increased risk of developing a second malignant neoplasm, including breast carcinoma and sarcoma. We report the first case of synchronous development of chest wall fibrosarcoma and breast carcinoma after mantle radiotherapy for Hodgkin's disease. Mammographic, sonographic and MR features are demonstrated. (orig.)

  19. Development of the trigger algorithm for the MONOLITH experiment

    International Nuclear Information System (INIS)

    Gutsche, O.

    2001-05-01

    The MONOLITH project is proposed to prove atmospheric neutrino oscillations and to improve the corresponding measurements of Super-Kamiokande. The MONOLITH detector consists of a massive (34 kt) magnetized iron tracking calorimeter and is optimized for muon neutrino detection. This diploma thesis presents the development of the trigger algorithm for the MONOLITH experiment and related test measurements. Chapter two gives an introduction to the mechanism of neutrino oscillations. The two flavor approximation and the three flavor mechanism are described and influences of matter on neutrino oscillations are discussed. The principles of neutrino oscillation experiments are discussed and the results of Super-Kamiokande, a neutrino oscillation experiment, are presented. Super-Kamiokande gave the strongest indications for atmospheric neutrino oscillations so far. The third chapter introduces the MONOLITH project in the context of atmospheric neutrino oscillations. The MONOLITH detector is described and the main active component, the glass spark chamber, is presented. Chapter four presents the practical part of this thesis. A test setup of a glass spark chamber is built up including a cosmics trigger and a data acquisition system. Cosmic ray muons are used for the investigation of the chamber. During a long term test, the stability of the efficiency and the noise rate of the chamber are investigated. A status report of the results is given. The results are taken as input for the trigger development. In chapter five, the development of the trigger algorithm is presented. In the beginning, the structural design of the trigger algorithm is described. The efficiency and the rate of the trigger algorithm are investigated using two event sources, a Monte Carlo neutrino event sample and a generated noise sample. For the analysis, the data sources are processed by several processing stages which are visualized by corresponding event displays. In the course of the data processing

  20. Sustainable development, tourism and territory. Previous elements towards a systemic approach

    Directory of Open Access Journals (Sweden)

    Pierre TORRENTE

    2009-01-01

    Full Text Available Today, tourism is one of the major challenges for many countries and territories. The balance of payments, an ever-increasing number of visitors and the significant development of the tourism offer clearly illustrate the booming trend in this sector. This macro-economic approach is often used by the organizations in charge of tourism, WTO for instance. Quantitative assessments which consider the satisfaction of customers’ needs as an end in itself have prevailed both in tourism development schemes and in prospective approaches since the sixties.

  1. Development of target-tracking algorithms using neural network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whaoan; Yoon, Sook; Baek, Seong Hyun; Lee, Myung Jae [Chonbuk National University, Chonjoo (Korea)

    1998-04-01

    The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly, to protect workers from high radiation environments. Such applications require complete stability of the robot system, so precisely tracking the robot is essential for the whole system. This research aims to accomplish that goal by developing appropriate algorithms for remote-control robot systems. A neural network tracking system is designed and tested to trace a robot endpoint. This model is aimed at utilizing the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks, for position detection and prediction. Tracking algorithms are developed and tested for the two models. Results of the experiments show that both models are promising as real-time target-tracking systems for remote-control robot systems. (author). 10 refs., 47 figs.

  2. Computational Fluid Dynamics. [numerical methods and algorithm development

    Science.gov (United States)

    1992-01-01

    This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It is an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling will also be presented. Examples of results obtained with the most recent algorithm development will also be presented.

  3. Development of hybrid artificial intelligent based handover decision algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Aibinu

    2017-04-01

    Full Text Available The possibility of seamless handover remains a mirage despite the plethora of existing handover algorithms. The underlying factor responsible for this has been traced to the handover decision module in the handover process. Hence, in this paper, a novel hybrid artificial-intelligence-based handover decision algorithm has been developed. The developed model is a hybrid of an Artificial Neural Network (ANN) based prediction model and fuzzy logic. On accessing the network, the Received Signal Strength (RSS) was acquired over a period of time to form a time series. The data was then fed to the newly proposed k-step ahead ANN-based RSS prediction system for estimation of the prediction model coefficients. The synaptic weights and adaptive coefficients of the trained ANN were then used to compute the k-step ahead ANN-based RSS prediction model coefficients. The predicted RSS value was later codified as fuzzy sets and, in conjunction with other measured network parameters, fed into the fuzzy logic controller in order to finalize the handover decision process. The performance of the newly developed k-step ahead ANN-based RSS prediction algorithm was evaluated using simulated and real data acquired from available mobile communication networks. Results obtained in both cases show that the proposed algorithm is capable of predicting the RSS value ahead to within about ±0.0002 dB. The cascaded effect of the complete handover decision module was also evaluated. Results show that the newly proposed hybrid approach was able to reduce the ping-pong effect associated with other handover techniques.
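
    A rough stand-in for the pipeline described above is sketched below: a linear autoregressive model takes the place of the paper's ANN-based k-step-ahead RSS predictor, and a single threshold takes the place of the fuzzy decision stage; the AR order, the threshold and the synthetic RSS trace are assumptions.

```python
# Stand-in sketch for the handover pipeline described above: a linear
# autoregressive model (instead of the paper's ANN) predicts RSS k steps
# ahead, and a simple threshold rule stands in for the fuzzy decision stage.
import numpy as np

def fit_ar(rss: np.ndarray, order: int = 4) -> np.ndarray:
    """Least-squares AR(order) coefficients from an RSS time series."""
    X = np.column_stack([rss[i:len(rss) - order + i] for i in range(order)])
    y = rss[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_k_ahead(rss: np.ndarray, coef: np.ndarray, k: int) -> float:
    history = list(rss)
    for _ in range(k):                                   # roll the model forward k steps
        history.append(float(np.dot(coef, history[-len(coef):])))
    return history[-1]

rng = np.random.default_rng(2)
rss = -60.0 - 0.2 * np.arange(200) + rng.normal(0, 0.5, 200)   # fading serving cell (dBm)
coef = fit_ar(rss)
rss_pred = predict_k_ahead(rss, coef, k=5)
if rss_pred < -95.0:                                     # assumed handover threshold (dBm)
    print("trigger handover, predicted RSS:", round(rss_pred, 1))
else:
    print("stay on serving cell, predicted RSS:", round(rss_pred, 1))
```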

  4. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm M times, and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  5. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic-circuit rules that are used to ... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms ... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...

  6. Developing Information Power Grid Based Algorithms and Software

    Science.gov (United States)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  7. Developing an Orographic Adjustment for the SCaMPR Algorithm

    Science.gov (United States)

    Yucel, I.; Akcelik, M.; Kuligowski, R. J.

    2016-12-01

    In support of the National Oceanic and Atmospheric Administration (NOAA) National Weather Service's (NWS) flash flood warning and heavy precipitation forecast efforts, the NOAA National Environmental Satellite Data and Information Service (NESDIS) Center for Satellite Applications and Research (STAR) has been providing satellite-based precipitation estimates operationally since 1978. The GOES-R Algorithm Working Group (AWG) is responsible for developing and demonstrating algorithms for retrieving various geophysical parameters from GOES data, including rainfall. The rainfall algorithm selected by the GOES-R AWG is the Self-Calibrating Multivariate Precipitation Retrieval (SCaMPR). However, the SCaMPR does not currently make any adjustments for the effects of complex topography on rainfall. Elevation-dependent bias structures suggest that there is an increased sensitivity to deep convection, which generates heavy precipitation at the expense of missing lighter precipitation events. A regionally dependent empirical elevation-based bias correction technique may help improve the quality of satellite-derived precipitation products. This study investigates the potential for improving the SCaMPR algorithm by incorporating an orographic correction based on calibration of the SCaMPR against rain gauge transects in northwestern Mexico to identify correctable biases related to elevation, slope, and wind direction. The findings suggest that continued improvement to the developed orographic correction scheme is warranted in order to advance quantitative precipitation estimation in complex terrain regions for use in weather forecasting and hydrologic applications. The relationships that are isolated during this analysis will be used to create a more accurate terrain adjustment for SCaMPR.

  8. Recent Progress in Development of SWOT River Discharge Algorithms

    Science.gov (United States)

    Pavelsky, Tamlin M.; Andreadis, Konstantinos; Biancamaria, Sylvian; Durand, Michael; Moller, Dewlyn; Rodriguez, Enersto; Smith, Laurence C.

    2013-09-01

    The Surface Water and Ocean Topography (SWOT) Mission is a satellite mission under joint development by NASA and CNES. The mission will use interferometric synthetic aperture radar technology to continuously map, for the first time, water surface elevations and water surface extents in rivers, lakes, and oceans at high spatial resolutions. Among the primary goals of SWOT is the accurate retrieval of river discharge directly from SWOT measurements. Although it is central to the SWOT mission, discharge retrieval represents a substantial challenge due to uncertainties in SWOT measurements and because traditional discharge algorithms are not optimized for SWOT-like measurements. However, recent work suggests that SWOT may also have unique strengths that can be exploited to yield accurate estimates of discharge. A NASA-sponsored workshop convened June 18-20, 2012 at the University of North Carolina focused on progress and challenges in developing SWOT-specific discharge algorithms. Workshop participants agreed that the only viable approach to discharge estimation will be based on a slope-area scaling method such as Manning's equation, but modified slightly to reflect the fact that SWOT will estimate reach-averaged rather than cross-sectional discharge. While SWOT will provide direct measurements of some key parameters such as width and slope, others such as baseflow depth and channel roughness must be estimated. Fortunately, recent progress has suggested several algorithms that may allow the simultaneous estimation of these quantities from SWOT observations by using multitemporal observations over several adjacent reaches. However, these algorithms will require validation, which will require the collection of new field measurements, airborne imagery from AirSWOT (a SWOT analogue), and compilation of global datasets of channel roughness, river width, and other relevant variables.
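
    The slope-area scaling the workshop converged on is usually written as a reach-averaged Manning relation; one common form, assuming a wide channel so that the hydraulic radius is approximately the cross-sectional area divided by the width, is:

```latex
% Reach-averaged Manning relation; for a wide channel the hydraulic radius
% R is approximately A / W, giving the second form:
Q \;=\; \frac{1}{n}\,A\,R^{2/3} S^{1/2}
  \;\approx\; \frac{1}{n}\,\bigl(A_{0} + \delta A\bigr)^{5/3}\, W^{-2/3}\, S^{1/2}
```

    Here the width W and slope S are observed by SWOT and δA is the change in cross-sectional area derived from SWOT height and width measurements, while the baseflow area A₀ and the roughness n are the quantities that, as noted above, must be estimated.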

  9. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques, which is currently used in many different industries. In several CT systems, detection has been by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line-array detector. The recent development of X-ray flat panel detectors has made fast CT imaging feasible and practical. Therefore this paper explains the arrangement of a new detection system, which uses the existing high-resolution (127 μm pixel size) flat panel detector in MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat panel detector based CT imaging system for NDE. The prototype consisted of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. Hence this project is divided into two major tasks: firstly to develop the image reconstruction algorithm and secondly to integrate the X-ray imaging components into one CT system. The image reconstruction algorithm using the filtered back-projection method is developed and compared to other techniques. MATLAB is the software tool used for the simulations and computations in this project. (Author)
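
    The filtered back-projection step mentioned above can be sketched compactly for a parallel-beam geometry: filter each projection with a ramp filter in the Fourier domain, then smear the filtered projections back across the image grid. The sampling, the rotate-and-sum forward projector used for the self-test and the simplified geometry are assumptions; a flat-panel cone-beam system would need a more elaborate (e.g. FDK-style) reconstruction.

```python
# Minimal parallel-beam filtered back-projection sketch: ramp-filter each
# projection in the Fourier domain, then back-project across the image grid.
# Geometry and sampling are simplified compared with a real flat-panel system.
import numpy as np
from scipy.ndimage import rotate

def fbp(sinogram, angles_deg):
    n_angles, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                       # Ram-Lak filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    coords = np.arange(n_det) - n_det / 2.0
    xx, yy = np.meshgrid(coords, coords)
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = xx * np.cos(theta) + yy * np.sin(theta) + n_det / 2.0   # detector coordinate
        recon += np.interp(t, np.arange(n_det), proj, left=0.0, right=0.0)
    return recon * np.pi / n_angles

# Self-test: forward-project a square phantom with a rotate-and-sum projector,
# then reconstruct it.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sino = np.array([rotate(phantom, -a, reshape=False, order=1).sum(axis=0) for a in angles])
recon = fbp(sino, angles)
print(recon.shape, float(recon.max()) > 0.0)
```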

  10. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm describing the operation of that object. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure matches the causal and temporal relationships between the events of the process being modeled, together with all the information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are the variables bound by those operators. The language of algorithmic networks is highly expressive: the class of algorithms it can represent is essentially that of arbitrary algorithms. Existing modeling automation systems based on algorithmic networks mainly use operators working with real numbers. Although this reduces their expressiveness, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. Many systems exist for computing network graphs; however, monitoring based on the analysis of gaps and deadlines in such graphs, and analysis that predicts the execution of a schedule, are lacking. The library described here is designed to build such predictive models: from the specified source data a set of projections is obtained, from which one can be chosen and adopted as the new plan.
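
    As a small illustration of the structure described above (operators at the vertices, variables on the arcs), the sketch below encodes a toy algorithmic network for a schedule-gap calculation; the class design and example operators are invented for illustration only.

```python
# Hedged illustration of an algorithmic network as a loaded directed graph:
# vertices are operators, arcs carry the variables bound by those operators.
from dataclasses import dataclass, field

@dataclass
class AlgorithmicNetwork:
    operators: dict = field(default_factory=dict)   # name -> callable
    arcs: list = field(default_factory=list)        # (src_operator, variable, dst_operator)

    def add_operator(self, name, func):
        self.operators[name] = func

    def add_arc(self, src, variable, dst):
        self.arcs.append((src, variable, dst))

net = AlgorithmicNetwork()
net.add_operator("planned_duration", lambda task: task["plan_days"])
net.add_operator("actual_duration", lambda task: task["actual_days"])
net.add_operator("schedule_gap", lambda plan, actual: actual - plan)
net.add_arc("planned_duration", "plan", "schedule_gap")
net.add_arc("actual_duration", "actual", "schedule_gap")

task = {"plan_days": 10, "actual_days": 13}
plan = net.operators["planned_duration"](task)
actual = net.operators["actual_duration"](task)
print("arcs:", net.arcs)
print("schedule gap (days):", net.operators["schedule_gap"](plan, actual))
```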

  11. Development and Testing of the Gust Front Algorithm.

    Science.gov (United States)

    1987-11-01

    Development and Testing of the Gust Front Algorithm, by Arthur Witt and Steven D. Smith (NOAA). The front matter lists tables of algorithm thresholds and of tower data for the Norman and Cimarron radars. The evaluation compares two Doppler radars (the NSSL radars located at Norman (NRO) and at Cimarron (CIM), which is about 40 km NW of Norman) looking at the same gust front (April 13). The comparison was

  12. Development of Image Reconstruction Algorithms in electrical Capacitance Tomography

    International Nuclear Information System (INIS)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-01-01

    Electrical Capacitance Tomography (ECT) has not yet reached the level of development needed for use at an industrial scale. This is due, first, to the difficulty of measuring very small capacitances (in the range of femtofarads) and, second, to the problem of reconstructing the images on-line. The reconstruction problem is aggravated by the small number of electrodes (a maximum of 16), which causes the usual reconstruction algorithms to produce many errors. This work describes a new, purely geometrical method that could be used for this purpose. (Author) 4 refs

  13. Development of an algorithm for quantifying extremity biological tissue

    International Nuclear Information System (INIS)

    Pavan, Ana L.M.; Miranda, Jose R.A.; Pina, Diana R. de

    2013-01-01

    Computed radiography (CR) has become the most widely used system for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis, obtained via CR, are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used for the optimization of these images are based on international protocols. It is therefore necessary to compose radiographic techniques for the CR system that provide a secure medical diagnosis, with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was built. Using the Matlab® software, a computational algorithm was developed that is able to quantify the average thickness of the soft tissue and bone present in the anatomical region under study, as well as the corresponding thicknesses in simulator materials (aluminium and lucite). This was made possible through the application of masks and a Gaussian histogram-removal technique. As a result, an average soft-tissue thickness of 18.97 mm and an average bone thickness of 6.15 mm were obtained, with equivalents in the simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agree with the average thickness of the biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom.
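
    The sketch below shows one plausible way to quantify average soft-tissue and bone thickness from a CT slice by thresholding Hounsfield units column by column; the thresholds, pixel size and synthetic slice are assumptions, and the study's own mask-and-Gaussian histogram procedure is not reproduced.

```python
# Hedged sketch: HU-threshold segmentation and per-column voxel counting to
# estimate mean soft-tissue and bone thickness in a CT slice.
import numpy as np

def mean_thickness_mm(ct_slice_hu, pixel_mm, soft_range=(-100, 200), bone_min=200):
    soft = (ct_slice_hu >= soft_range[0]) & (ct_slice_hu < soft_range[1])
    bone = ct_slice_hu >= bone_min
    cols = np.any(soft | bone, axis=0)                 # columns containing anatomy
    soft_mm = soft[:, cols].sum(axis=0).mean() * pixel_mm
    bone_mm = bone[:, cols].sum(axis=0).mean() * pixel_mm
    return soft_mm, bone_mm

slice_hu = np.full((200, 200), -1000.0)               # air background (placeholder slice)
slice_hu[60:140, 80:120] = 40.0                       # soft-tissue block
slice_hu[90:110, 95:105] = 700.0                      # bone block
soft_mm, bone_mm = mean_thickness_mm(slice_hu, pixel_mm=0.5)
print(f"mean soft tissue: {soft_mm:.1f} mm, mean bone: {bone_mm:.1f} mm")
```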

  14. Algorithms

    Indian Academy of Sciences (India)

    In the previous article of this series, we looked at simple data types and their representation in computer memory. The notion of a simple data type can be extended to denote a set of elements corresponding to one data item at a higher level. The process of structuring or grouping of the basic data elements is often referred ...

  15. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    Science.gov (United States)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which has cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.
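
    Since the abstract does not describe the cloud algorithm itself, the sketch below shows one generic approach to procedurally generated clouds, octave-summed value noise; every parameter in it is an illustrative assumption rather than the IGOAL implementation.

```python
# Hedged sketch of a procedural cloud texture built from fractal value noise.
import numpy as np

def value_noise(size, grid, rng):
    """Bilinearly interpolated random lattice values -> one octave of value noise."""
    lattice = rng.random((grid + 1, grid + 1))
    xs = np.linspace(0, grid, size, endpoint=False)
    x0 = xs.astype(int)
    t = xs - x0
    return (lattice[x0][:, x0] * np.outer(1 - t, 1 - t)
            + lattice[x0 + 1][:, x0] * np.outer(t, 1 - t)
            + lattice[x0][:, x0 + 1] * np.outer(1 - t, t)
            + lattice[x0 + 1][:, x0 + 1] * np.outer(t, t))

def cloud_texture(size=256, octaves=5, seed=0):
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    amp, grid = 1.0, 4
    for _ in range(octaves):                 # sum octaves of increasing frequency
        img += amp * value_noise(size, grid, rng)
        amp *= 0.5
        grid *= 2
    img = (img - img.min()) / (img.max() - img.min())
    return np.clip((img - 0.4) / 0.6, 0.0, 1.0)   # sharpen into cloud-like patches

print("mean cloud coverage:", round(float(cloud_texture().mean()), 3))
```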

  16. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
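
    To make the one-bit broadcast model concrete, the simulation sketch below runs a simple randomized elimination scheme in a complete network where nodes only distinguish silence from at least one beep; it is an illustrative scheme in the same model, not necessarily the authors' algorithm.

```python
import random

def one_bit_leader_election(n, max_rounds=1000, seed=0):
    """Each synchronous round, every remaining candidate broadcasts a one-bit
    'beep' with probability 1/2; a candidate that stays silent but hears at
    least one beep withdraws.  (Illustrative scheme in the one-bit model.)"""
    rng = random.Random(seed)
    active = set(range(n))          # anonymous nodes; ids are used only by the simulator
    for round_no in range(1, max_rounds + 1):
        if len(active) == 1:
            return active.pop(), round_no - 1
        beepers = {v for v in active if rng.random() < 0.5}
        if beepers and beepers != active:
            active = beepers        # silent listeners who heard a beep drop out
    raise RuntimeError("no unique leader within the round budget")

leader, rounds = one_bit_leader_election(64)
print(f"leader elected after {rounds} rounds")
```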

  17. QAP collaborates in development of the sick child algorithm.

    Science.gov (United States)

    1994-01-01

    Algorithms which specify procedures for proper diagnosis and treatment of common diseases have been available to primary health care services in less developed countries for the past decade. Whereas each algorithm has usually been limited to a single ailment, children often present with the need for more comprehensive assessment and treatment. Treating just one illness in these children leads to incomplete treatment or missed opportunities for preventive services. To address this problem, the World Health Organization has recently developed a Sick Child Algorithm (SCA) for children aged 2 months-5 years. In addition to specifying case management procedures for acute respiratory illness, diarrhea/dehydration, fever, otitis, and malnutrition, the SCA prompts a check of the child's immunization status. The specificity and sensitivity of this SCA were field-tested in Kenya and the Gambia. In Kenya, the Malaria Branch of the US Centers for Disease Control and Prevention tested the SCA under typical conditions in Siaya District. The Quality Assurance Project of the Center for Human Services carried out a parallel facility-based systems analysis at the request of the Malaria Branch. The assessment, which took place in September-October 1993, took the form of observations of provider/patient interactions, provider interviews, and verification of supplies and equipment in 19 rural health facilities to determine how current practices compare to actions prescribed by the SCA. This will reveal the type and amount of technical support needed to achieve conformity to the SCA's clinical practice recommendations. The data will allow officials to devise the proper training programs and will predict quality improvements likely to be achieved through adoption of the SCA in terms of effective case treatment and fewer missed immunization opportunities. Preliminary analysis indicates that primary health care delivery in Siaya deviates in several significant respects from performance

  18. Development of sensor-based nitrogen recommendation algorithms for cereal crops

    Science.gov (United States)

    Asebedo, Antonio Ray

    through 2014 to evaluate the previously developed KSU sensor-based N recommendation algorithm in corn N fertigation systems. Results indicate that the current KSU corn algorithm was effective at achieving high yields, but has the tendency to overestimate N requirements. To optimize sensor-based N recommendations for N fertigation systems, algorithms must be specifically designed for these systems to take advantage of their full capabilities, thus allowing implementation of high NUE N management systems.

  19. Developing a corpus to verify the performance of a tone labelling algorithm

    CSIR Research Space (South Africa)

    Raborife, M

    2011-11-01

    Full Text Available The authors report on a study that involved the development of a corpus used to verify the performance of two tone labelling algorithms, with one algorithm being an improvement on the other. These algorithms were developed for speech synthesis...

  20. Development of a parallel genetic algorithm using MPI and its application in a nuclear reactor core design optimization

    International Nuclear Information System (INIS)

    Waintraub, Marcel; Pereira, Claudio M.N.A.; Baptista, Rafael P.

    2005-01-01

    This work presents the development of a distributed parallel genetic algorithm applied to a nuclear reactor core design optimization. In the implementation of the parallelism, a Message Passing Interface (MPI) library, the standard for parallel computation on distributed-memory platforms, was used. Another important characteristic of MPI is its portability across various architectures. The main objectives of this paper are: validation of the results obtained by the application of this algorithm to a nuclear reactor core optimization problem, through comparisons with previous results presented by Pereira et al.; and a performance test of the Brazilian Nuclear Engineering Institute (IEN) cluster on reactor physics optimization problems. The experiments demonstrated that the developed parallel genetic algorithm using the MPI library produced significant gains in the results obtained and a marked reduction in processing time. Such results support the use of parallel genetic algorithms for the solution of nuclear reactor core optimization problems. (author)
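
    The sketch below outlines one common way to parallelize a genetic algorithm with MPI, an island model with ring migration implemented via mpi4py; the topology, operators and placeholder fitness function are assumptions, since the abstract does not detail the paper's parallel decomposition.

```python
# Hedged sketch: island-model parallel GA with ring migration using mpi4py.
# Each MPI rank evolves its own sub-population and periodically exchanges its
# best individual with the next rank.  Run with e.g. `mpiexec -n 4 python ga.py`.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def fitness(x):                                   # placeholder objective standing in
    return -sum((xi - 0.5) ** 2 for xi in x)      # for the core-design evaluation

def evolve(pop, n_gen=20):
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))                     # one-point crossover
            child = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))  # Gaussian mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return pop

random.seed(rank)
population = [[random.random() for _ in range(10)] for _ in range(30)]
for epoch in range(5):
    population = evolve(population)
    best = max(population, key=fitness)
    # ring migration: send my best individual to the next island, receive from the previous one
    incoming = comm.sendrecv(best, dest=(rank + 1) % size, source=(rank - 1) % size)
    population[population.index(min(population, key=fitness))] = incoming
print(f"rank {rank}: best fitness {fitness(max(population, key=fitness)):.4f}")
```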

  1. Development and testing of incident detection algorithms. Vol. 2, research methodology and detailed results.

    Science.gov (United States)

    1976-04-01

    The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on time series and pattern recognition techniques. Attention was given to the effects o...

  2. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here was conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for the calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern and restricts the useful application of these methods. Additional model reduction methods have been developed which take these constraints into account. The Matrix Reduction method allows the differential equation to be approximated to the reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  3. Algorithm development for corticosteroid management in systemic juvenile idiopathic arthritis trial using consensus methodology

    Directory of Open Access Journals (Sweden)

    Ilowite Norman T

    2012-08-01

    Full Text Available Abstract Background The management of background corticosteroid therapy in rheumatology clinical trials poses a major challenge. We describe the consensus methodology used to design an algorithm to standardize changes in corticosteroid dosing during the Randomized Placebo Phase Study of Rilonacept in Systemic Juvenile Idiopathic Arthritis Trial (RAPPORT). Methods The 20 RAPPORT site principal investigators (PIs) and 4 topic specialists constituted an expert panel that participated in the consensus process. The panel used a modified Delphi Method consisting of an on-line questionnaire, followed by a one-day face-to-face consensus conference. Consensus was defined as ≥ 75% agreement. For items deemed essential but for which consensus on critical values was not achieved, a simple majority vote drove the final decision. Results The panel identified criteria for initiating or increasing corticosteroids. These included the presence or development of anemia, myocarditis, pericarditis, pleuritis, peritonitis, and either complete or incomplete macrophage activation syndrome (MAS). The panel also identified criteria for tapering corticosteroids, which included absence of fever for ≥ 3 days in the previous week, absence of poor physical functioning, and seven laboratory criteria. A tapering schedule was also defined. Conclusion The expert panel established consensus regarding corticosteroid management and an algorithm for steroid dosing that was well accepted and used by RAPPORT investigators. Developed specifically for the RAPPORT trial, further study of the algorithm is needed before it can be recommended for more general clinical use.

  4. Development of a Grapevine Pruning Algorithm for Use in Pruning

    Directory of Open Access Journals (Sweden)

    S. M Hosseini

    2017-10-01

    Full Text Available Introduction Large areas of the world's orchards are dedicated to cultivation of the grapevine. Grape vineyards are normally pruned twice a year. Among the operations of grape production, winter pruning of the bushes is the only operation that has still not been fully mechanized, and it is known as one of the most laborious jobs on the farm. Some grape-producing countries use various mechanical machines to prune the grapevines, but in most cases these machines do not perform well. An intelligent pruning machine therefore seems to be necessary, and such a machine could reduce the labor required to prune the vineyards. In this study, an attempt was made to develop an algorithm that uses image processing techniques to identify which parts of the grapevine should be cut. The stereo vision technique was used to obtain three-dimensional images of the bare bushes after their leaves had fallen in autumn. Stereo vision systems determine depth from two images taken at the same time but from slightly different viewpoints using two cameras. Each pair of images of a common scene is related by an epipolar geometry, and corresponding points in the image pairs are constrained to lie on pairs of conjugate epipolar lines. Materials and Methods Photos were taken in the gardens of the Research Center for Agriculture and Natural Resources of Fars province, Iran. First, the distance between the plants and the cameras should be determined. This distance can be obtained using stereo vision techniques, so this method was used here, with two pictures taken of each plant with the left and right cameras. The algorithm was written in MATLAB. To facilitate the segmentation of the branches from the rows behind them, a blue plate with dimensions of 2×2 m2 was used as the background. After loading the images, the branches were segmented from the background to produce the binary
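
    The stereo step described above can be sketched with OpenCV's block matcher as below; the file names, focal length and baseline are placeholders, and the study's actual MATLAB implementation is not reproduced.

```python
# Hedged sketch: recovering depth from a rectified stereo pair with OpenCV.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

focal_px = 700.0      # focal length in pixels (placeholder)
baseline_m = 0.12     # distance between the two cameras in metres (placeholder)

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
print("median depth of valid pixels (m):", float(np.median(depth_m[valid])))
```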

  5. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Full Text Available Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution algorithm (DE) and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
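
    A minimal sketch of the DE variant most commonly used (DE/rand/1/bin) is given below on a stand-in cost function; a real nurse-scheduling DE would encode shift assignments and penalize constraint violations, which is not attempted here.

```python
# Hedged sketch of differential evolution (DE/rand/1/bin) on a generic cost function.
import random

def differential_evolution(cost, dim, bounds=(0.0, 1.0), pop_size=30,
                           F=0.6, CR=0.9, generations=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [a[k] + F * (b[k] - c[k])                              # mutation
                     if (rng.random() < CR or k == j_rand) else pop[i][k]  # binomial crossover
                     for k in range(dim)]
            trial = [min(hi, max(lo, x)) for x in trial]                   # keep within bounds
            if cost(trial) <= cost(pop[i]):                                # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

best = differential_evolution(lambda x: sum(v * v for v in x), dim=8)
print("best cost:", sum(v * v for v in best))
```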

  6. IR Algorithm Development for Fire and Forget Projectiles.

    Science.gov (United States)

    1982-06-18

    The algorithms are run on the in-house computer against stored IR images of actual foreign tanks to determine their capabilities and limitations. Section 2 (Algorithms Used for Armored Target Detection) defines an algorithm as a set of logical rules or mathematical instructions; for three adjacent pixels of the array, sampled three times while the array rotates, an M2 value is computed, and an M2 (variance) map of the scene is thus

  7. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    Energy Technology Data Exchange (ETDEWEB)

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunication, financial and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission for providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying optimum assets to enforce or protect, testing the addition of or change to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.

  8. Developing an Algorithm to Consider Multiple Demand Response Objectives

    Directory of Open Access Journals (Sweden)

    D. Behrens

    2018-02-01

    Full Text Available Due to technological improvement and a changing environment, energy grids face various challenges, for example integrating new appliances such as electric vehicles and photovoltaics. Managing such grids has become increasingly important for research and practice, since, for example, grid reliability and cost benefits are at stake. Demand response (DR) is one way to contribute to this crucial task, in particular by shifting and managing energy loads. Realizing DR can thereby address multiple objectives (such as cost savings, peak load reduction and flattening the load profile) to obtain various goals. However, current research lacks algorithms that address multiple DR objectives sufficiently. This paper aims to design a multi-objective DR optimization algorithm and to propose a solution strategy. We therefore first investigate the research field and existing solutions, and then design an algorithm suitable for taking multiple objectives into account. The algorithm has a predictable runtime and guarantees termination.
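
    The sketch below illustrates one simple way to combine several DR objectives, a weighted-sum score that places each shiftable load in the hour that least increases combined cost and peak load; the prices, loads and weights are invented, and this is not the paper's algorithm.

```python
# Hedged sketch of a weighted-sum treatment of multiple demand-response objectives.
base_load = [3.0, 2.5, 2.0, 2.2, 3.5, 5.0, 6.5, 6.0]      # kW per hour slot (placeholder)
price = [0.10, 0.08, 0.07, 0.07, 0.12, 0.20, 0.25, 0.22]  # $/kWh (placeholder)
shiftable = [1.5, 1.0, 0.8]                                # kW, one hour each (placeholder)
w_cost, w_peak = 1.0, 0.5                                  # objective weights (placeholder)

schedule = list(base_load)
for load in sorted(shiftable, reverse=True):
    def score(hour):
        trial = schedule[:]
        trial[hour] += load
        cost = sum(p * l for p, l in zip(price, trial))
        return w_cost * cost + w_peak * max(trial)         # combined cost + peak objective
    best_hour = min(range(len(schedule)), key=score)
    schedule[best_hour] += load

print("scheduled load profile:", [round(x, 2) for x in schedule])
```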

  9. Development and analysis of a three phase cloudlet allocation algorithm

    Directory of Open Access Journals (Sweden)

    Sudip Roy

    2017-10-01

    Full Text Available Cloud computing is one of the most popular and pragmatic topics of research nowadays. The allocation of cloudlet(s) to suitable VM(s) is one of the most challenging areas of research in the domain of cloud computing. This paper presents a new cloudlet allocation algorithm which improves the performance of a cloud service provider (CSP) in comparison with other existing cloudlet allocation algorithms. The proposed Range-wise Busy-checking 2-way Balanced (RB2B) cloudlet allocation algorithm optimizes a few basic parameters associated with the performance analysis. An extensive simulation is carried out in Cloudsim to evaluate the proposed algorithm and to attest its efficacy in comparison with the other existing allocation policies.

  10. Development of an algorithm for controlling a multilevel three-phase converter

    Science.gov (United States)

    Taissariyeva, Kyrmyzy; Ilipbaeva, Lyazzat

    2017-08-01

    This work is devoted to the development of an algorithm for controlling the transistors in a three-phase multilevel conversion system. The developed algorithm organizes correct operation and describes the state of the transistors at each moment in time when constructing a computer model of a three-phase multilevel converter. The developed transistor-switching algorithm ensures in-phase operation of the three-phase converter and yields a sinusoidal voltage curve at the converter output.

  11. Development of a New Fractal Algorithm to Predict Quality Traits of MRI Loins

    DEFF Research Database (Denmark)

    Caballero, Daniel; Caro, Andrés; Amigo, José Manuel

    2017-01-01

    Traditionally, the quality traits of meat products have been estimated by means of physico-chemical methods. Computer vision algorithms on MRI have also been presented as an alternative to these destructive methods, since MRI is non-destructive, non-ionizing and innocuous. The use of fractals to analyze MRI could be another possibility for this purpose. In this paper, a new fractal algorithm is developed to obtain features from MRI based on fractal characteristics. This algorithm is called OPFTA (One Point Fractal Texture Algorithm). Three fractal algorithms were tested in this study: CFA (Classical fractal algorithm), FTA (Fractal texture algorithm) and OPFTA. The results obtained by means of these three fractal algorithms were correlated to the results obtained by means of physico-chemical methods. OPFTA and FTA achieved correlation coefficients higher than 0.75 and CFA reached low...
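
    As an example of the family of fractal features referred to above, the sketch below estimates a box-counting fractal dimension of a binarised image; it illustrates a classical fractal measure, not the OPFTA algorithm itself, and the demo image is synthetic.

```python
# Hedged sketch: classical box-counting estimate of the fractal dimension of a 2-D binary image.
import numpy as np

def box_counting_dimension(binary_img):
    """Estimate fractal dimension by counting occupied boxes at several scales."""
    img = np.asarray(binary_img, dtype=bool)
    n = 2 ** int(np.floor(np.log2(min(img.shape))))
    img = img[:n, :n]                      # crop to a power-of-two square
    sizes, counts = [], []
    size = n
    while size >= 1:
        blocks = img.reshape(n // size, size, n // size, size)
        occupied = blocks.any(axis=(1, 3)).sum()
        sizes.append(size)
        counts.append(max(int(occupied), 1))
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
demo = rng.random((256, 256)) > 0.7        # placeholder for a thresholded MRI slice
print("estimated fractal dimension:", round(box_counting_dimension(demo), 3))
```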

  12. Development of an inter-layer solute transport algorithm for SOLTR computer program. Part 1. The algorithm

    International Nuclear Information System (INIS)

    Miller, I.; Roman, K.

    1979-12-01

    In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifer to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport

  13. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields.

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin; Strawn, Laura K

    2016-02-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  14. Advances in diagnosing vaginitis: development of a new algorithm.

    Science.gov (United States)

    Nyirjesy, Paul; Sobel, Jack D

    2005-11-01

    The current approach to diagnosing vulvovaginal symptoms is both flawed and inadequate. Mistakes occur at the level of the patient herself, her provider, and the sensitivity of office-based tests. Often, the differential diagnosis is so broad that providers may overlook some of the possibilities. A diagnostic algorithm which separates women into either a normal or elevated vaginal pH can successfully classify most women with vaginitis. Based on the amine test, vaginal leukocytes, and vaginal parabasal cells, those with an elevated pH can be placed into further diagnostic categories. Such an algorithm helps to prioritize different diagnoses and suggest appropriate ancillary tests.

  15. Developing a Learning Algorithm-Generated Empirical Relaxer

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Wayne [Univ. of Colorado, Boulder, CO (United States). Dept. of Applied Math; Kallman, Josh [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Toreja, Allen [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gallagher, Brian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jiang, Ming [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Laney, Dan [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-30

    One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER) which uses a regressive random forest algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
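
    In the spirit of LAGER, the sketch below trains a scikit-learn random-forest regressor to map simulation-state features to a relaxation fraction; the feature names and training data are invented placeholders rather than the LLNL dataset.

```python
# Hedged sketch: a random-forest regressor suggesting a mesh-relaxation amount
# from simulation-state features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# toy features: [max mesh distortion, zone aspect ratio, local velocity gradient]
X = rng.random((500, 3))
# toy target: relaxation fraction in [0, 1], loosely tied to distortion
y = np.clip(0.8 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(0, 0.05, 500), 0.0, 1.0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

current_state = np.array([[0.9, 0.4, 0.2]])    # hypothetical Eulerian-step state
print("suggested relaxation fraction:", float(model.predict(current_state)[0]))
```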

  16. [Incidence and clinical risk factors for the development of diabetes mellitus in women with previous gestational diabetes].

    Science.gov (United States)

    Domínguez-Vigo, P; Álvarez-Silvares, E; Alves-Pérez M T; Domínguez-Sánchez, J; González-González, A

    2016-04-01

    Gestational diabetes is considered a variant of diabetes mellitus, as they share a common pathophysiological basis: insulin resistance in target tissues and insufficient secretion of insulin by the pancreatic beta cells. Pregnancy is a unique physiological situation that provides an opportunity to identify future risk of diabetes mellitus. The aims were to determine the long-term incidence of diabetes mellitus in women who had previously been diagnosed with gestational diabetes and to identify clinical risk factors for developing it. A nested case-control cohort study was performed. 671 patients diagnosed with gestational diabetes between 1996 and 2009 were selected. The incidence of diabetes mellitus was estimated and 2 subgroups were formed. Group A, or cases: women who developed diabetes mellitus after a diagnosis of gestational diabetes. Group B, or controls: a random sample of 71 women with a history of gestational diabetes who remained normoglycemic during the follow-up period. Both groups were studied for up to 18 years postpartum. Kaplan-Meier survival analysis was used to study the influence of the different gestational variables on the later development of diabetes mellitus over time, and Cox models were applied for the categorical variables. Significant variables were studied by multivariate Cox analysis. In all analyses the hazard ratio was calculated with 95% confidence intervals. The incidence of diabetes mellitus was 10.3% in patients with a history of gestational diabetes. The following were identified as risk factors in the index pregnancy for the later development of diabetes mellitus: maternal age greater than 35 or younger than 27 years, BMI greater than 30 kg/m2, hypertensive disorders of pregnancy, insulin therapy, poor metabolic control, and more than one pregnancy complicated by gestational diabetes. Clinical factors have been identified in pregnancies complicated by gestational diabetes that determine a higher probability of progression to diabetes mellitus in the medium and long term.

  17. Planning policy, sustainability and housebuilder practices: The move into (and out of?) the redevelopment of previously developed land.

    Science.gov (United States)

    Karadimitriou, Nikos

    2013-05-01

    This paper explores the transformations of the housebuilding industry under the policy requirement to build on previously developed land (PDL). This requirement was a key lever in promoting the sustainable urban development agenda of UK governments from the early 1990s to 2010 and has survived albeit somewhat relaxed and permutated in the latest National Planning Policy Framework (NPPF). The paper therefore looks at the way in which the policy push towards densification and mixed use affected housebuilders' business strategy and practices and their ability to cope with the 2007 downturn of the housing market and its aftermath. It also points out the eventual feedback of some of these practices into planning policy. Following the gradual shift of British urban policy focus towards sustainability which started in the early 1990s, new configurations of actors, new skills, strategies and approaches to managing risk emerged in property development and housebuilding. There were at least two ways in which housebuilders could have responded to the requirements of developing long term mixed use high density projects on PDL. One way was to develop new products and to employ practices and combinations of practices involving phasing, a flexible approach to planning applications and innovative production methods. Alternatively, they could approach PDL development as a temporary turn of policy or view mixed use high density schemes as a niche market to be explored without drastically overhauling the business model of the entire firm. These transformations of the UK housebuilding sector were unfolding during a long period of buoyancy in the housing market which came to an end in 2007. Very little is known both about how housebuilder strategies and production practices evolved during the boom years as well as about how these firms coped with the effects of the 2007 market downturn. The paper draws on published data (company annual reports, government statistics) and primary

  19. Heuristic Algorithms for Solving Bounded Diameter Minimum Spanning Tree Problem and Its Application to Genetic Algorithm Development

    OpenAIRE

    Nghia, Nguyen Duc; Binh, Huynh Thi Thanh

    2008-01-01

    We introduce a heuristic algorithm for solving the BDMST problem, called CBRC. The experiments show that CBRC gives better results than the other known heuristic algorithms for solving the BDMST problem on Euclidean instances. The best solution found by a genetic algorithm which uses the best heuristic algorithm, or only one heuristic algorithm, for initializing the population is not better than the best solution found by the genetic algorithm which uses mixed heuristic algorithms (randomized heurist...

  20. Development of traffic light control algorithm in smart municipal network

    OpenAIRE

    Kuzminykh, Ievgeniia

    2016-01-01

    This paper presents a smart system that bypasses the normal functioning algorithm of traffic lights, triggering a green light when the lights are red or resetting the timer of the traffic lights when they are about to turn red. Different pieces of hardware, such as microcontroller units, transceivers, resistors, diodes, LEDs, a digital compass and an accelerometer, are coupled together and programmed to create a unified, complex intelligent system.

  1. Development of fuzzy logic algorithm for water purification plant

    OpenAIRE

    SUDESH SINGH RANA; SUDESH SINGH RANA

    2015-01-01

    This paper proposes the design of a fuzzy logic controller (FLC) algorithm for an industrial application, namely a water purification plant. In the water purification plant, raw water or ground water is promptly purified by injecting chemicals at rates related to the water quality. The chemical feed rates are judged and determined by a skilled operator. Yagishita et al. [1] structured a system based on fuzzy logic so that the feed rate of the coagulant can be judged automatically without any skilled operator. We perfor...

  2. Development of modelling algorithm of technological systems by statistical tests

    Science.gov (United States)

    Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.

    2018-03-01

    The paper tackles the problem of the economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm for a technological system was implemented using statistical tests; by taking the reliability index into account, it allows the level of machinery technical excellence to be estimated and the efficiency of design reliability to be defined against its performance. The economic feasibility of its application should be determined on the basis of the service quality of the technological system, with further forecasting of the volumes and range of spare-parts supply.

  3. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
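
    As background to the cueing algorithms discussed above, the sketch below applies a first-order high-pass washout filter to a sustained acceleration step, the classical building block that such algorithms refine; the time constant and input signal are illustrative assumptions.

```python
# Hedged sketch: first-order high-pass "washout" of a longitudinal acceleration cue.
import numpy as np

def washout_highpass(accel, dt, tau=2.0):
    """Discrete first-order high-pass filter: passes onsets, washes out sustained cues."""
    alpha = tau / (tau + dt)
    out = np.zeros_like(accel)
    for k in range(1, len(accel)):
        out[k] = alpha * (out[k - 1] + accel[k] - accel[k - 1])
    return out

t = np.arange(0.0, 10.0, 0.01)
accel = np.where(t > 1.0, 2.0, 0.0)          # a sustained 2 m/s^2 step input (placeholder)
cue = washout_highpass(accel, dt=0.01)
print("cue at onset: %.2f m/s^2, cue after 9 s: %.3f m/s^2" % (cue[101], cue[-1]))
```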

  4. Design requirements and development of an airborne descent path definition algorithm for time navigation

    Science.gov (United States)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.

  5. Design patterns for the development of electronic health record-driven phenotype extraction algorithms.

    Science.gov (United States)

    Rasmussen, Luke V; Thompson, Will K; Pacheco, Jennifer A; Kho, Abel N; Carrell, David S; Pathak, Jyotishman; Peissig, Peggy L; Tromp, Gerard; Denny, Joshua C; Starren, Justin B

    2014-10-01

    Design patterns, in the context of software development and ontologies, provide generalized approaches and guidance to solving commonly occurring problems, or addressing common situations typically informed by intuition, heuristics and experience. While the biomedical literature contains broad coverage of specific phenotype algorithm implementations, no work to date has attempted to generalize common approaches into design patterns, which may then be distributed to the informatics community to efficiently develop more accurate phenotype algorithms. Using phenotyping algorithms stored in the Phenotype KnowledgeBase (PheKB), we conducted an independent iterative review to identify recurrent elements within the algorithm definitions. We extracted and generalized recurrent elements in these algorithms into candidate patterns. The authors then assessed the candidate patterns for validity by group consensus, and annotated them with attributes. A total of 24 electronic Medical Records and Genomics (eMERGE) phenotypes available in PheKB as of 1/25/2013 were downloaded and reviewed. From these, a total of 21 phenotyping patterns were identified, which are available as an online data supplement. Repeatable patterns within phenotyping algorithms exist, and when codified and cataloged may help to educate both experienced and novice algorithm developers. The dissemination and application of these patterns has the potential to decrease the time to develop algorithms, while improving portability and accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Clustering algorithm evaluation and the development of a replacement for procedure 1. [for crop inventories

    Science.gov (United States)

    Lennington, R. K.; Johnson, J. K.

    1979-01-01

    An efficient procedure which clusters data using a completely unsupervised clustering algorithm and then uses labeled pixels to label the resulting clusters or perform a stratified estimate using the clusters as strata is developed. Three clustering algorithms, CLASSY, AMOEBA, and ISOCLS, are compared for efficiency. Three stratified estimation schemes and three labeling schemes are also considered and compared.
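
    The two-stage idea, unsupervised clustering followed by labelling clusters from labelled pixels, can be sketched as below; KMeans stands in for CLASSY, AMOEBA or ISOCLS, and the synthetic data and majority-vote labelling rule are assumptions.

```python
# Hedged sketch: cluster pixels without labels, then label each cluster by the
# majority vote of the labelled pixels that fall in it.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(0, 1, (300, 4)), rng.normal(4, 1, (300, 4))])
labels = np.array([0] * 300 + [1] * 300)          # ground truth for a labelled subset
labelled_idx = rng.choice(len(pixels), 60, replace=False)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

cluster_label = {}
for c in np.unique(clusters):
    members = [i for i in labelled_idx if clusters[i] == c]
    votes = labels[members]
    cluster_label[c] = int(np.bincount(votes).argmax()) if len(votes) else -1

predicted = np.array([cluster_label[c] for c in clusters])
print("agreement with truth: %.2f" % (predicted == labels).mean())
```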

  7. Development Modules for Specification of Requirements for a System of Verification of Parallel Algorithms

    Directory of Open Access Journals (Sweden)

    Vasiliy Yu. Meltsov

    2012-05-01

    Full Text Available This paper presents the results of the development of one of the modules of a system for the verification of parallel algorithms, which is used to verify the inference engine. This module is designed to build the specification of the requirements whose feasibility for the algorithm must be proved (tested).

  8. Signal-Processing Algorithm Development for the ACLAIM Sensor

    Science.gov (United States)

    vonLaven, Scott

    1995-01-01

    Methods for further minimizing the risk by making use of previous lidar observations were investigated. Empirical orthogonal functions (EOFs) are likely to play an important role in these methods, and a procedure for extracting EOFs from data has been implemented. The new processing methods involving EOFs could range from extrapolation, as discussed, to more complicated statistical procedures for maintaining low unstart risk.
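
    The EOF extraction step can be sketched with a singular value decomposition of the anomaly matrix, as below; the synthetic lidar-like profiles are placeholders, and this is only the standard EOF recipe rather than the ACLAIM processing chain.

```python
# Hedged sketch: extracting empirical orthogonal functions (EOFs) via the SVD
# of an anomaly matrix built from observation profiles.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_range_gates = 200, 64
profiles = (np.sin(np.linspace(0, 3, n_range_gates)) * rng.normal(1, 0.2, (n_obs, 1))
            + 0.1 * rng.normal(size=(n_obs, n_range_gates)))   # synthetic lidar-like profiles

anomalies = profiles - profiles.mean(axis=0)           # remove the mean profile
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
eofs = Vt                                              # rows are the EOF patterns
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 EOFs:", np.round(explained[:3], 3))
```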

  9. Development and performance analysis of model-based fault detection and diagnosis algorithm

    International Nuclear Information System (INIS)

    Kim, Jung Taek; Park, Jae Chang; Lee, Jung Woon; Kim, Kyung Youn; Lee, In Soo; Kim, Bong Seok; Kang, Sook In

    2002-05-01

    It is important to note that an effective means of assuring the reliability and security of a nuclear power plant is to detect and diagnose faults (failures) as soon and as accurately as possible. The objective of the project is to develop a model-based fault detection and diagnosis (FDD) algorithm for the pressurized water reactor and to evaluate the performance of the developed algorithm. The scope of the work can be classified into two categories. The first is a state-space model-based FDD algorithm based on the interacting multiple model (IMM) algorithm. The second is an input-output model-based FDD algorithm based on the ART neural network. Extensive computer simulations are carried out to evaluate the performance in terms of speed and accuracy

  10. Development of radio frequency interference detection algorithms for passive microwave remote sensing

    Science.gov (United States)

    Misra, Sidharth

    Radio Frequency Interference (RFI) signals are man-made sources that are increasingly plaguing passive microwave remote sensing measurements. RFI is of insidious nature, with some signals low power enough to go undetected but large enough to impact science measurements and their results. With the launch of the European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite in November 2009 and the upcoming launches of the new NASA sea-surface salinity measuring Aquarius mission in June 2011 and soil-moisture measuring Soil Moisture Active Passive (SMAP) mission around 2015, active steps are being taken to detect and mitigate RFI at L-band. An RFI detection algorithm was designed for the Aquarius mission. The algorithm performance was analyzed using kurtosis based RFI ground-truth. The algorithm has been developed with several adjustable location dependant parameters to control the detection statistics (false-alarm rate and probability of detection). The kurtosis statistical detection algorithm has been compared with the Aquarius pulse detection method. The comparative study determines the feasibility of the kurtosis detector for the SMAP radiometer, as a primary RFI detection algorithm in terms of detectability and data bandwidth. The kurtosis algorithm has superior detection capabilities for low duty-cycle radar like pulses, which are more prevalent according to analysis of field campaign data. Most RFI algorithms developed have generally been optimized for performance with individual pulsed-sinusoidal RFI sources. A new RFI detection model is developed that takes into account multiple RFI sources within an antenna footprint. The performance of the kurtosis detection algorithm under such central-limit conditions is evaluated. The SMOS mission has a unique hardware system, and conventional RFI detection techniques cannot be applied. Instead, an RFI detection algorithm for SMOS is developed and applied in the angular domain. This algorithm compares
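
    A minimal sketch of the kurtosis idea discussed above is given below: windows of radiometer samples whose kurtosis deviates from the Gaussian value of 3 are flagged; the window length, threshold and synthetic data are illustrative assumptions.

```python
# Hedged sketch: kurtosis-based RFI flagging.  Gaussian thermal-emission samples
# have kurtosis ~3; pulsed RFI pushes the statistic away from 3.
import numpy as np

def kurtosis_flags(samples, window=256, threshold=0.3):
    n_win = len(samples) // window
    flags = np.zeros(n_win, dtype=bool)
    for i in range(n_win):
        w = samples[i * window:(i + 1) * window]
        m2 = np.mean((w - w.mean()) ** 2)
        m4 = np.mean((w - w.mean()) ** 4)
        kurt = m4 / m2**2
        flags[i] = abs(kurt - 3.0) > threshold
    return flags

rng = np.random.default_rng(1)
signal = rng.normal(size=4096)                 # thermal-noise-like radiometer samples
signal[1000:1016] += 6.0                       # a short radar-like RFI pulse
print("flagged windows:", np.where(kurtosis_flags(signal))[0])
```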

  11. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    Science.gov (United States)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and the microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (130 am/pm local time), both of which reduce an accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting, new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19; MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near global coverage every 4 hours, thus offering much greater temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz and additionally three at the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology that is based on all available AMSU observations during 2000-11 and is stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made with similar data sets derived from other
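
    The sketch below shows the general shape of a brightness-temperature threshold test of the kind a prototype hail-detection algorithm might apply; the channels and threshold values are illustrative assumptions, not the thresholds derived in the study.

```python
# Hedged sketch of a simple brightness-temperature threshold test: strong ice
# scattering depresses the high-frequency brightness temperatures, so very cold
# values flag possible hail.
def possible_hail(tb_89ghz_k, tb_183ghz_k, thresh_89=180.0, thresh_183=210.0):
    """Return True when both channels are depressed below the (illustrative) thresholds."""
    return tb_89ghz_k < thresh_89 and tb_183ghz_k < thresh_183

print(possible_hail(165.0, 195.0))   # deeply depressed scene -> True
print(possible_hail(260.0, 255.0))   # typical non-convective scene -> False
```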

  12. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  13. Developing algorithm for the critical care physician scheduling

    Science.gov (United States)

    Lee, Hyojun; Pah, Adam; Amaral, Luis; Northwestern Memorial Hospital Collaboration

    Understanding social networks has enabled us to quantitatively study social phenomena such as behaviors in the adoption and propagation of information. However, most work has focused on networks of large heterogeneous communities, and little attention has been paid to how work-relevant information spreads within networks of small, homogeneous groups of highly trained individuals, such as physicians. Among such professionals, behavior patterns and the transmission of job-relevant information depend not only on the social network between the employees but also on the schedules and the teams that work together. In order to systematically investigate the dependence of the spread of ideas and the adoption of innovations on a work-environment network, we sought to construct a model of the interaction network of critical care physicians at Northwestern Memorial Hospital (NMH) based on their work schedules. We inferred patterns and hidden rules, such as turnover rates, from past work schedules. Using the characteristics of the physicians' work schedules and their turnover rates, we were able to create multi-year synthetic work schedules for a generic intensive care unit. The algorithm for creating shift schedules can be applied to other schedule-dependent networks.

  14. Development of Automatic Cluster Algorithm for Microcalcification in Digital Mammography

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Seok Yoon [Dept. of Medical Engineering, Korea University, Seoul (Korea, Republic of); Kim, Chang Soo [Dept. of Radiological Science, College of Health Sciences, Catholic University of Pusan, Pusan (Korea, Republic of)

    2009-03-15

    Digital mammography is an efficient imaging technique for the detection and diagnosis of breast pathological disorders. Six mammographic criteria, namely the number of clusters, the number, size, extent and morphologic shape of the microcalcifications, and the presence of a mass, were reviewed, and their correlation with the pathologic diagnosis was evaluated. It is very important to find breast cancer early, when treatment can reduce deaths from breast cancer and breast incision. In screening for breast cancer, mammography is typically used to view the internal organization. Clustered microcalcifications on mammography represent an important feature of breast masses, especially that of intraductal carcinoma. Because microcalcification is highly correlated with breast cancer, a cluster of microcalcifications can be very helpful for the clinician in predicting breast cancer. For this study, three steps of quantitative evaluation are proposed: DoG filtering, adaptive thresholding, and expectation maximization. Through the proposed algorithm, the number of calcifications and the length of each cluster in the distribution of microcalcifications can be measured and used to automatically diagnose breast cancer as indicators for the primary diagnosis.
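
    The DoG step named above can be sketched as below: subtracting a broadly smoothed image from a lightly smoothed one enhances small bright spots such as microcalcifications; the sigma values, threshold and synthetic image are assumptions.

```python
# Hedged sketch of a difference-of-Gaussians (DoG) enhancement of small bright spots.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
mammogram = rng.normal(100.0, 5.0, (256, 256))       # placeholder background
mammogram[120:123, 80:83] += 40.0                    # a small bright "calcification"

dog = gaussian_filter(mammogram, sigma=1.0) - gaussian_filter(mammogram, sigma=4.0)
candidates = dog > dog.mean() + 4.0 * dog.std()      # simple adaptive-style threshold
print("candidate pixels:", np.argwhere(candidates)[:5])
```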

  15. Algorithmic Research and Software Development for an Industrial Strength Sparse Matrix Library for Parallel Computers

    National Research Council Canada - National Science Library

    Grimes, Roger

    1999-01-01

    This final report describes the status of work performed during the months of Sept 1995 through Jan 1999 on the Algorithmic Research And Software Development For An Industrial Strength Sparse Matrix...

  16. Spectrophotometric determination of uranium with arsenazo previous liquid-liquid extraction and colour development in organic medium

    International Nuclear Information System (INIS)

    Palomares Delgado, F.; Vera Palomino, J.; Petrement Eguiluz, J. C.

    1964-01-01

    The determination of uranium with arsenazo is hindered by a great number of cations which form stable complexes with the reagent and may give rise to serious interferences. By studying the optimum conditions for uranium extraction by means of tributylphosphate solutions dissolved in methylisobutylketone, with prior masking of the interfering cations, an organic extract was obtained containing all the uranium together with small amounts of iron. The possible interference derived from the latter element is avoided by reduction with hydroxylammonium chloride followed by formation of the Fe(II)-orthophenanthroline complex in alcoholic medium. (Author) 17 refs

  17. Development of a new genetic algorithm to solve the feedstock scheduling problem in an anaerobic digester

    Science.gov (United States)

    Cram, Ana Catalina

    As worldwide environmental awareness grows, alternative sources of energy have become important to mitigate climate change. Biogas in particular reduces greenhouse gas emissions that contribute to global warming and has the potential of providing 25% of the annual demand for natural gas in the U.S. In 2011, 55,000 metric tons of methane emissions were reduced and 301 metric tons of carbon dioxide emissions were avoided through the use of biogas alone. Biogas is produced by anaerobic digestion through the fermentation of organic material. It is mainly composed of methane, with a concentration ranging from 50 to 80%; carbon dioxide accounts for 20 to 50%, along with small amounts of hydrogen, carbon monoxide and nitrogen. Biogas production systems are anaerobic digestion facilities, and the optimal operation of an anaerobic digester requires the scheduling of all batches from multiple feedstocks during a specific time horizon. The availability times, biomass quantities, biogas production rates and storage decay rates must all be taken into account for maximal biogas production to be achieved during the planning horizon. Little work has been done to optimize the scheduling of different types of feedstock in anaerobic digestion facilities to maximize the total biogas produced by these systems. Therefore, in the present thesis, a new genetic algorithm is developed to obtain the optimal sequence in which different feedstocks are processed and the optimal time to allocate to each feedstock in the digester, with the objective of maximizing biogas production while considering different types of feedstocks, arrival times and decay rates. Moreover, all batches need to be processed in the digester within a specified time, with the restriction that only one batch can be processed at a time. The developed algorithm is applied to 3 different examples and a comparison with results obtained in previous studies is presented.
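
    A permutation-based genetic algorithm of the general kind described (optimal processing sequence, one batch at a time) might look like the minimal sketch below. It covers only the sequencing part; the feedstock list, the toy biogas/decay objective, and the population and rate settings are placeholders, not the thesis model.

```python
# Minimal sketch of a permutation genetic algorithm for ordering feedstock
# batches in a single digester. The biogas/decay model is a toy placeholder.
import random

feedstocks = [  # (name, biogas_rate, decay_rate, processing_days) -- assumed values
    ("manure", 10.0, 0.01, 3), ("corn_silage", 14.0, 0.03, 4),
    ("food_waste", 18.0, 0.08, 2), ("grass", 8.0, 0.02, 3),
]

def biogas(order):
    """Toy objective: feedstocks processed later lose yield to storage decay."""
    t, total = 0, 0.0
    for i in order:
        _, rate, decay, days = feedstocks[i]
        total += rate * days * (1.0 - decay) ** t   # decayed yield of this batch
        t += days                                   # only one batch at a time
    return total

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + [g for g in b if g not in a[:cut]]

def mutate(order, p=0.2):
    if random.random() < p:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(len(feedstocks)), len(feedstocks)) for _ in range(30)]
for _ in range(100):
    pop.sort(key=biogas, reverse=True)
    parents = pop[:10]                               # elitist selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(20)]

best = max(pop, key=biogas)
print("best sequence:", [feedstocks[i][0] for i in best], "biogas:", round(biogas(best), 2))
```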

  18. Synchronous development of breast cancer and chest wall fibrosarcoma after previous mantle radiation for Hodgkin's disease

    Energy Technology Data Exchange (ETDEWEB)

    Patlas, Michael [Hamilton General Hospital, Department of Radiology, Hamilton, ON (Canada); McCready, David [University Health Network and Mount Sinai Hospital, Department of Surgery, Toronto, ON (Canada); Kulkarni, Supriya; Dill-Macky, Marcus J. [University Health Network and Mount Sinai Hospital, Department of Medical Imaging, Toronto, ON (Canada)

    2005-09-01

    Survivors of Hodgkin's disease are at increased risk of developing a second malignant neoplasm, including breast carcinoma and sarcoma. We report the first case of synchronous development of chest wall fibrosarcoma and breast carcinoma after mantle radiotherapy for Hodgkin's disease. Mammographic, sonographic and MR features are demonstrated. (orig.)

  19. Development and validation of an algorithm for laser application in wound treatment

    Directory of Open Access Journals (Sweden)

    Diequison Rite da Cunha

    2017-12-01

    Full Text Available ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review was performed in the Health Sciences databases of the past ten years. The algorithm evaluation was performed by 24 participants: nurses, physiotherapists, and physicians. For data analysis, the Cronbach's alpha coefficient and the chi-square test for independence were used. The level of significance of the statistical test was established at 5% (p<0.05). Results: The professionals' responses regarding the ease of reading the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard to the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants' opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire presented reliability using the Cronbach's alpha coefficient test (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds.

  20. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from the US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm could be utilized to map SST in both deep offshore and particularly shallow nearshore waters at the high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed reflectance values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variations in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters where important coastal resources are located and existing algorithms are either not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful to coastal resource management.
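
    The overall setup — a small neural network mapping the two thermal-band values to SST — can be sketched as below. This is only a minimal illustration: the synthetic band values, the toy "true" SST relation, and the network architecture are assumptions; the actual algorithm was trained on matched MODIS and in situ observations.

```python
# Sketch of an ANN regression from two MODIS thermal-band values (bands 31, 32)
# to SST. Training data here are synthetic stand-ins, not MODIS matchups.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
band31 = rng.uniform(280, 305, n)             # band-31 value (brightness-temperature-like, K)
band32 = band31 - rng.uniform(0.0, 2.5, n)    # band 32, slightly lower
sst = 1.02 * band31 + 2.0 * (band31 - band32) - 278 + rng.normal(0, 0.3, n)  # toy truth

X = np.column_stack([band31, band32])
X_train, X_test, y_train, y_test = train_test_split(X, sst, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```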

  1. Development and evaluation of spectral transformation algorithms for analysis and characterization of forest vegetation

    Science.gov (United States)

    Zhao, Guang

    1998-11-01

    This research reviewed and evaluated some of the most important statistically based spectral transformation algorithms. Two spectral transformation algorithms, canonical discriminant analysis (CDA) and multiple logistic regression (MLR) transformations, were developed and evaluated in two independent studies. The objectives were to investigate whether the methods are capable of solving the two fundamental questions raised in the beginning: separating spectral overlap and quantifying spatial variability under forest conditions. It was generalized from previous research that spectral transformations are usually performed to complete one or more tasks, with the ultimate goal of optimizing data structure for improving visual interpretation, analysis, and classification performance. PCA is the most widely used spectral transformation technique. The Kauth-Thomas Tasseled Cap transformed components are important vegetation indices, and they are developed using sensor and scene physical characteristics and the Gram-Schmidt orthogonalization process. A theoretical comparison was conducted to identify major differences among the Tasseled Cap, PCA, and CDA transformations in their objectives, prior knowledge requirements, limitations, processes, and variance-covariance usage. CDA was a better "separation" algorithm than PCA in improving overall classification accuracy. CDA was used as a transformation technique not only to increase class separation, but also to reduce data dimension and noise. The last two canonical components usually contain largely noise variances, which hold less than 1 percent of the variance found in the source variables. A sub-dimension (the first four components) is preferable for final classification to the whole derived canonical component data set, as the noise variances associated with the last two components were removed. Comparison of the CDA and PCA eigenstructure matrices revealed that there is no distinct pattern in terms of source variable contribution and load signs
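
    Canonical discriminant analysis is closely related to linear discriminant analysis, so a rough sketch of the "keep the first few canonical components" idea can be shown with scikit-learn's LinearDiscriminantAnalysis. The synthetic multiband pixels, the number of bands, and the number of classes below are assumptions, not the study's data.

```python
# Sketch of CDA-style class-separating dimension reduction, approximated here
# with LinearDiscriminantAnalysis on synthetic "multiband" pixel data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=7, n_informative=5,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

cda = LinearDiscriminantAnalysis(n_components=4)   # keep the first 4 canonical components
Z = cda.fit_transform(X, y)                        # projection that maximizes class separation

print("variance ratio of canonical components:", np.round(cda.explained_variance_ratio_, 3))
print("transformed shape:", Z.shape)               # (2000, 4): reduced dimension
```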

  2. Inflatable Antenna for CubeSat: Extension of the Previously Developed S-Band Design to the X-Band

    Science.gov (United States)

    Babuscia, Alessandra; Choi, Thomas; Cheung, Kar-Ming; Thangavelautham, Jekan; Ravichandran, Mithun; Chandra, Aman

    2015-01-01

    The inflatable antenna for CubeSat is a 1 meter antenna reflector designed with one side reflective Mylar, another side clear Mylar with a patch antenna at the focus. The development of this technology responds to the increasing need for more capable communication systems to allow CubeSats to operate autonomously in interplanetary missions. An initial version of the antenna for the S-Band was developed and tested in both anechoic chamber and vacuum chamber. Recent developments in transceivers and amplifiers for CubeSat at X-band motivated the extension from the S-Band to the X-Band. This paper describes the process of extending the design of the antenna to the X-Band focusing on patch antenna redesign, new manufacturing challenges and initial results of experimental tests.

  3. On the development of protein pKa calculation algorithms

    Science.gov (United States)

    Carstensen, Tommy; Farrell, Damien; Huang, Yong; Baker, Nathan A.; Nielsen, Jens Erik

    2011-01-01

    Protein pKa calculation methods are developed partly to provide fast non-experimental estimates of the ionization constants of protein side chains. However, the most significant reason for developing such methods is that a good pKa calculation method is presumed to provide an accurate physical model of protein electrostatics, which can be applied in methods for drug design, protein design and other structure-based energy calculation methods. We explore the validity of this presumption by simulating the development of a pKa calculation method using artificial experimental data derived from a human-defined physical reality. We examine the ability of an RMSD-guided development protocol to retrieve the correct (artificial) physical reality and find that a rugged optimization landscape and a huge parameter space prevent the identification of the correct physical reality. We examine the importance of the training set in developing pKa calculation methods and investigate the effect of experimental noise on our ability to identify the correct physical reality, and find that both effects have a significant and detrimental impact on the physical reality of the optimal model identified. Our findings are of relevance to all structure-based methods for protein energy calculations and simulation, and have large implications for all types of current pKa calculation methods. Our analysis furthermore suggests that careful and extensive validation on many types of experimental data can go some way in making current models more realistic. PMID:21744393

  4. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single or sequences of sonar images. The tool is based on MATLAB providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested...... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed, AUV path, etc. In the paper the main components of the SSIE are described and examples of different...... processing steps are given...

  5. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States)

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Input to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real world problems from solid elasticity, plate bending, and shells.
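
    The core construction — a tentative prolongator built from aggregates and a zero-energy mode, then smoothed — can be illustrated on a tiny 1-D Laplacian as below. The fixed aggregates of three nodes, the constant near-null-space vector, and the Jacobi damping factor are simplifying assumptions for the sketch, not the paper's automatic coarsening.

```python
# Minimal sketch of prolongation by smoothed aggregation on a 1-D Laplacian.
import numpy as np
import scipy.sparse as sp

n = 9
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # SPD model problem

# 1) Aggregate nodes: here simply the groups {0,1,2}, {3,4,5}, {6,7,8}.
n_agg = n // 3
agg = np.repeat(np.arange(n_agg), 3)

# 2) Tentative prolongator: injects the zero-energy (constant) mode per aggregate.
P_tent = sp.csr_matrix((np.ones(n), (np.arange(n), agg)), shape=(n, n_agg))

# 3) Smooth it with one damped-Jacobi step: P = (I - omega * D^-1 A) * P_tent.
omega = 2.0 / 3.0
Dinv = sp.diags(1.0 / A.diagonal())
P = (sp.identity(n) - omega * Dinv @ A) @ P_tent

# 4) Galerkin coarse-level operator.
A_coarse = P.T @ A @ P
print("coarse operator shape:", A_coarse.shape)
print(np.round(A_coarse.toarray(), 3))
```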

  6. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.
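
    For readers unfamiliar with the Fussell-Vesely importance measure used in the validation step, a minimal rare-event-approximation sketch is given below; the cut sets and basic-event probabilities are invented for illustration and have no connection to the study's plant data.

```python
# Sketch of the Fussell-Vesely importance measure:
#   FV(c) ≈ (sum of minimal-cut-set probabilities containing component c)
#           / (top-event probability, rare-event approximation).
from math import prod

basic_event_prob = {"pump_A": 1e-3, "pump_B": 1e-3, "valve": 5e-4, "power": 1e-4}
minimal_cut_sets = [{"pump_A", "pump_B"}, {"valve"}, {"power"}]   # invented example

def cut_set_prob(cut_set):
    return prod(basic_event_prob[e] for e in cut_set)

top = sum(cut_set_prob(cs) for cs in minimal_cut_sets)   # rare-event approximation

for comp in basic_event_prob:
    fv = sum(cut_set_prob(cs) for cs in minimal_cut_sets if comp in cs) / top
    print(f"FV({comp}) = {fv:.3f}")
```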

  7. The reliability of the Associate Platinum digital foot scanner in measuring previously developed footprint characteristics: a technical note.

    Science.gov (United States)

    Papuga, M Owen; Burke, Jeanmarie R

    2011-02-01

    An ink pad and paper, pressure-sensitive platforms, and photography have previously been used to collect footprint data used in clinical assessment. Digital scanners have been widely used more recently to collect such data. The purpose of this study was to evaluate the intra- and interrater reliability of a flatbed digital image scanning technology to capture footprint data. This study used a repeated-measures design on 32 (16 male 16 female) healthy subjects. The following measured indices of footprint were recorded from 2-dimensional images of the plantar surface of the foot recorded with an Associate Platinum (Foot Levelers Inc, Roanoke, VA) digital foot scanner: Staheli index, Chippaux-Smirak index, arch angle, and arch index. Intraclass correlation coefficient (ICC) values were calculated to evaluate intrarater, interday, and interclinician reliability. The ICC values for intrarater reliability were greater than or equal to .817, indicating an excellent level of reproducibility in assessing the collected images. Analyses of variance revealed that there were no significant differences between raters for each index (P > .05). The ICC values also indicated excellent reliability (.881-.971) between days and clinicians in all but one of the indices of footprint, arch angle (.689), with good reliability between clinicians. The full-factorial analysis of variance model did not reveal any interaction effects (P > .05), which indicated that indices of footprint were not changing across days and clinicians. Scanning technology used in this study demonstrated good intra- and interrater reliability measurements of footprint indices, as demonstrated by high ICC values. Copyright © 2011 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.

  8. RStorm: Developing and Testing Streaming Algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform1 to deal with data streams.

  10. Developed adaptive neuro-fuzzy algorithm to control air conditioning ...

    African Journals Online (AJOL)

    The paper developed artificial intelligence technique adaptive neuro-fuzzy controller for air conditioning systems at different pressures. The first order Sugeno fuzzy .... condenser heat rejection rate, refrigerant mass flow rate, compressor power, electric power input to the compressor motor and the coefficient of performance.

  11. Developed adaptive neuro-fuzzy algorithm to control air conditioning ...

    African Journals Online (AJOL)

    The paper developed artificial intelligence technique adaptive neuro-fuzzy controller for air conditioning systems at different pressures. The first order Sugeno fuzzy inference system was implemented and utilized for modeling and controller design. In addition, the estimation of the heat transfer rate and water mass flow rate ...

  12. Development of an algorithm for energy efficient automated train driving

    OpenAIRE

    Ozhigin, Artem; Prunev, Pavel; Sverdlin, Victor; Vikulina, Yulia

    2016-01-01

    The automated train driving function is in great demand in high-speed and commuter trains operated by Russian railways. Siemens Corporate Technology is involved in the development of such a real-time function within a "robotised" train control system. The main intention of the system is not only to relieve the human driver from routine control over traction and brakes (allowing him to pay more attention to assurance of safety) but also to increase train efficiency by reduci...

  13. Hemoglobin-Based Oxygen Carrier (HBOC) Development in Trauma: Previous Regulatory Challenges, Lessons Learned, and a Path Forward.

    Science.gov (United States)

    Keipert, Peter E

    2017-01-01

    Historically, hemoglobin-based oxygen carriers (HBOCs) were being developed as "blood substitutes," despite their transient circulatory half-life (~ 24 h) vs. transfused red blood cells (RBCs). More recently, HBOC commercial development focused on "oxygen therapeutic" indications to provide a temporary oxygenation bridge until medical or surgical interventions (including RBC transfusion, if required) can be initiated. This included the early trauma trials with HemAssist ® (BAXTER), Hemopure ® (BIOPURE) and PolyHeme ® (NORTHFIELD) for resuscitating hypotensive shock. These trials all failed due to safety concerns (e.g., cardiac events, mortality) and certain protocol design limitations. In 2008 the Food and Drug Administration (FDA) put all HBOC trials in the US on clinical hold due to the unfavorable benefit:risk profile demonstrated by various HBOCs in different clinical studies in a meta-analysis published by Natanson et al. (2008). During standard resuscitation in trauma, organ dysfunction and failure can occur due to ischemia in critical tissues, which can be detected by the degree of lactic acidosis. SANGART'S Phase 2 trauma program with MP4OX therefore added lactate >5 mmol/L as an inclusion criterion to enroll patients who had lost sufficient blood to cause a tissue oxygen debt. This was key to the successful conduct of their Phase 2 program (ex-US, from 2009 to 2012) to evaluate MP4OX as an adjunct to standard fluid resuscitation and transfusion of RBCs. In 2013, SANGART shared their Phase 2b results with the FDA, and succeeded in getting the FDA to agree that a planned Phase 2c higher dose comparison study of MP4OX in trauma could include clinical sites in the US. Unfortunately, SANGART failed to secure new funding and was forced to terminate development and operations in Dec 2013, even though a regulatory path forward with FDA approval to proceed in trauma had been achieved.

  14. TIGER: Development of Thermal Gradient Compensation Algorithms and Techniques

    Science.gov (United States)

    Hereford, James; Parker, Peter A.; Rhew, Ray D.

    2004-01-01

    In a wind tunnel facility, the direct measurement of forces and moments induced on the model are performed by a force measurement balance. The measurement balance is a precision-machined device that has strain gages at strategic locations to measure the strain (i.e., deformations) due to applied forces and moments. The strain gages convert the strain (and hence the applied force) to an electrical voltage that is measured by external instruments. To address the problem of thermal gradients on the force measurement balance NASA-LaRC has initiated a research program called TIGER - Thermally-Induced Gradients Effects Research. The ultimate goals of the TIGER program are to: (a) understand the physics of the thermally-induced strain and its subsequent impact on load measurements and (b) develop a robust thermal gradient compensation technique. This paper will discuss the impact of thermal gradients on force measurement balances, specific aspects of the TIGER program (the design of a special-purpose balance, data acquisition and data analysis challenges), and give an overall summary.

  15. Development and Evaluation of the National Cancer Institute's Dietary Screener Questionnaire Scoring Algorithms.

    Science.gov (United States)

    Thompson, Frances E; Midthune, Douglas; Kahle, Lisa; Dodd, Kevin W

    2017-06-01

    Background: Methods for improving the utility of short dietary assessment instruments are needed. Objective: We sought to describe the development of the NHANES Dietary Screener Questionnaire (DSQ) and its scoring algorithms and performance. Methods: The 19-item DSQ assesses intakes of fruits and vegetables, whole grains, added sugars, dairy, fiber, and calcium. Two nonconsecutive 24-h dietary recalls and the DSQ were administered in NHANES 2009-2010 to respondents aged 2-69 y (n = 7588). The DSQ frequency responses, coupled with sex- and age-specific portion size information, were regressed on intake from 24-h recalls by using the National Cancer Institute usual intake method to obtain scoring algorithms to estimate mean and prevalences of reaching 2 a priori threshold levels. The resulting scoring algorithms were applied to the DSQ and compared with intakes estimated with the 24-h recall data only. The stability of the derived scoring algorithms was evaluated in repeated sampling. Finally, scoring algorithms were applied to screener data, and these estimates were compared with those from multiple 24-h recalls in 3 external studies. Results: The DSQ and its scoring algorithms produced estimates of mean intake and prevalence that agreed closely with those from multiple 24-h recalls. The scoring algorithms were stable in repeated sampling. Differences in the means were small. Conclusions: The development of these scoring algorithms is an advance in the use of screeners. However, because these algorithms may not be generalizable to all studies, a pilot study in the proposed study population is advisable. Although more precise instruments such as 24-h dietary recalls are recommended in most research, the NHANES DSQ provides a less burdensome alternative when time and resources are constrained and interest is in a limited set of dietary factors. © 2017 American Society for Nutrition.

  16. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    Science.gov (United States)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  17. Development of an Algorithm to Classify Colonoscopy Indication from Coded Health Care Data.

    Science.gov (United States)

    Adams, Kenneth F; Johnson, Eric A; Chubak, Jessica; Kamineni, Aruna; Doubeni, Chyke A; Buist, Diana S M; Williams, Andrew E; Weinmann, Sheila; Doria-Rose, V Paul; Rutter, Carolyn M

    2015-01-01

    Electronic health data are potentially valuable resources for evaluating colonoscopy screening utilization and effectiveness. The ability to distinguish screening colonoscopies from exams performed for other purposes is critical for research that examines factors related to screening uptake and adherence, and the impact of screening on patient outcomes, but distinguishing between these indications in secondary health data proves challenging. The objective of this study is to develop a new and more accurate algorithm for identification of screening colonoscopies using electronic health data. Data from a case-control study of colorectal cancer with adjudicated colonoscopy indication was used to develop logistic regression-based algorithms. The proposed algorithms predict the probability that a colonoscopy was indicated for screening, with variables selected for inclusion in the models using the Least Absolute Shrinkage and Selection Operator (LASSO). The algorithms had excellent classification accuracy in internal validation. The primary, restricted model had AUC= 0.94, sensitivity=0.91, and specificity=0.82. The secondary, extended model had AUC=0.96, sensitivity=0.88, and specificity=0.90. The LASSO approach enabled estimation of parsimonious algorithms that identified screening colonoscopies with high accuracy in our study population. External validation is needed to replicate these results and to explore the performance of these algorithms in other settings.
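
    The modeling approach described — logistic regression with LASSO (L1) variable selection, evaluated by AUC, sensitivity and specificity — can be sketched as follows. The synthetic features, the penalty strength, and the class balance are assumptions; the study's coded health-care variables are not reproduced here.

```python
# Sketch of LASSO-penalized logistic regression for classifying colonoscopy
# indication (screening vs. other) from coded variables, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=3000, n_features=40, n_informative=8,
                           weights=[0.4, 0.6], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)  # L1 = LASSO shrinkage
model.fit(X_tr, y_tr)

probs = model.predict_proba(X_te)[:, 1]           # predicted probability of "screening"
print("AUC:", round(roc_auc_score(y_te, probs), 3))
print("variables retained by LASSO:", int(np.sum(model.coef_ != 0)))
```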

  18. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assesment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  19. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  20. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    Science.gov (United States)

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Structured interview for mild traumatic brain injury after military blast: inter-rater agreement and development of diagnostic algorithm.

    Science.gov (United States)

    Walker, William C; Cifu, David X; Hudak, Anne M; Goldberg, Gary; Kunz, Richard D; Sima, Adam P

    2015-04-01

    The existing gold standard for diagnosing a suspected previous mild traumatic brain injury (mTBI) is clinical interview. But it is prone to bias, especially for parsing the physical versus psychological effects of traumatic combat events, and its inter-rater reliability is unknown. Several standardized TBI interview instruments have been developed for research use but have similar limitations. Therefore, we developed the Virginia Commonwealth University (VCU) retrospective concussion diagnostic interview, blast version (VCU rCDI-B), and undertook this cross-sectional study aiming to 1) measure agreement among clinicians' mTBI diagnosis ratings, 2) using clinician consensus develop a fully structured diagnostic algorithm, and 3) assess accuracy of this algorithm in a separate sample. Two samples (n = 66; n = 37) of individuals within 2 years of experiencing blast effects during military deployment underwent semistructured interview regarding their worst blast experience. Five highly trained TBI physicians independently reviewed and interpreted the interview content and gave blinded ratings of whether or not the experience was probably an mTBI. Paired inter-rater reliability was extremely variable, with kappa ranging from 0.194 to 0.825. In sample 1, the physician consensus prevalence of probable mTBI was 84%. Using these diagnosis ratings, an algorithm was developed and refined from the fully structured portion of the VCU rCDI-B. The final algorithm considered certain symptom patterns more specific for mTBI than others. For example, an isolated symptom of "saw stars" was deemed sufficient to indicate mTBI, whereas an isolated symptom of "dazed" was not. The accuracy of this algorithm, when applied against the actual physician consensus in sample 2, was almost perfect (correctly classified = 97%; Cohen's kappa = 0.91). In conclusion, we found that highly trained clinicians often disagree on historical blast-related mTBI determinations. A fully structured interview
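
    The inter-rater statistics quoted above are Cohen's kappa values; a minimal sketch of that calculation for a pair of raters is shown below, with invented example ratings rather than study data.

```python
# Sketch of Cohen's kappa for agreement between two raters' mTBI determinations.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[c] * counts2[c] for c in set(rater1) | set(rater2)) / n**2
    return (observed - expected) / (1 - expected)   # chance-corrected agreement

r1 = ["mTBI", "mTBI", "no", "mTBI", "no", "mTBI", "no", "no"]      # invented ratings
r2 = ["mTBI", "no",   "no", "mTBI", "no", "mTBI", "no", "mTBI"]
print(round(cohens_kappa(r1, r2), 3))
```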

  2. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Until recently, the algorithms for the numerical-analytical boundary elements method had been implemented as programs written in the MATLAB environment language. Each program had a local character, i.e. it was used to solve a particular problem: calculation of a beam, frame, arch, etc. Constructing the matrices in these programs was carried out "manually" and was therefore time-consuming. The research was aimed at a reasoned choice of programming language for new CAD development that allows implementing the algorithm of the numerical-analytical boundary elements method and creating visualization tools for the initial objects and calculation results. The research conducted shows that, among the wide variety of programming languages, the most efficient one for developing a CAD system employing the numerical-analytical boundary elements method algorithm is the Java language. This language provides tools not only for the development of the computational part of the CAD system, but also for building the graphical interface for constructing geometrical models and interpreting the calculated results.

  3. Development and comparative assessment of Raman spectroscopic classification algorithms for lesion discrimination in stereotactic breast biopsies with microcalcifications

    Science.gov (United States)

    Dingari, Narahara Chari; Barman, Ishan; Saha, Anushree; McGee, Sasha; Galindo, Luis H.; Liu, Wendy; Plecha, Donna; Klein, Nina; Dasari, Ramachandra Rao; Fitzmaurice, Maryann

    2014-01-01

    Microcalcifications are an early mammographic sign of breast cancer and a target for stereotactic breast needle biopsy. Here, we develop and compare different approaches for developing Raman classification algorithms to diagnose invasive and in situ breast cancer, fibrocystic change and fibroadenoma that can be associated with microcalcifications. In this study, Raman spectra were acquired from tissue cores obtained from fresh breast biopsies and analyzed using a constituent-based breast model. Diagnostic algorithms based on the breast model fit coefficients were devised using logistic regression, C4.5 decision tree classification, k-nearest neighbor (k-NN) and support vector machine (SVM) analysis, and subjected to leave-one-out cross validation. The best performing algorithm was based on SVM analysis (with radial basis function), which yielded a positive predictive value of 100% and negative predictive value of 96% for cancer diagnosis. Importantly, these results demonstrate that Raman spectroscopy provides adequate diagnostic information for lesion discrimination even in the presence of microcalcifications, which to the best of our knowledge has not been previously reported. Raman spectroscopy and multivariate classification provide accurate discrimination among lesions in stereotactic breast biopsies, irrespective of microcalcification status. PMID:22815240
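
    The best-performing approach described above (SVM with a radial basis function kernel, leave-one-out cross validation, reported as PPV/NPV) can be sketched as follows; the synthetic stand-ins for the breast-model fit coefficients and the SVM hyperparameters are assumptions.

```python
# Sketch of SVM (RBF kernel) classification with leave-one-out cross validation,
# reporting positive and negative predictive values, on synthetic features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=9, n_informative=5,
                           random_state=0)        # y: 1 = cancer, 0 = benign (toy labels)

clf = SVC(kernel="rbf", gamma="scale", C=1.0)
pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())   # leave-one-out predictions

tp = np.sum((pred == 1) & (y == 1)); fp = np.sum((pred == 1) & (y == 0))
tn = np.sum((pred == 0) & (y == 0)); fn = np.sum((pred == 0) & (y == 1))
print("PPV:", round(tp / (tp + fp), 3), "NPV:", round(tn / (tn + fn), 3))
```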

  4. DEVELOPMENT OF A HYBRID FUZZY GENETIC ALGORITHM MODEL FOR SOLVING TRANSPORTATION SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    H.C.W. Lau

    2015-12-01

    Full Text Available There has been increasing public demand for passenger rail service in recent times, leading to a strong focus on the need for effective and efficient use of resources and on managing the increasing passenger requirements, service reliability and variability by railway management. Whilst shortening the passengers' waiting and travelling time is important for commuter satisfaction, lowering operational costs is equally important for railway management. Hence, effective and cost-optimised train scheduling based on dynamic passenger demand is one of the main issues for passenger railway management. Although the passenger railway scheduling problem has received attention in operations research in recent years, there is limited literature investigating the adoption of practical approaches that capitalize on the merits of mathematical modeling and search algorithms for effective cost optimization. This paper develops a hybrid fuzzy logic based genetic algorithm model to solve the multi-objective passenger railway scheduling problem, aiming to optimize total operational costs at a satisfactory level of customer service. This hybrid approach integrates a genetic algorithm with fuzzy logic, using a fuzzy controller to determine the crossover rate and mutation rate of the genetic algorithm during the optimization process. The numerical study demonstrates the improvement achieved by the proposed hybrid approach, and the fuzzy genetic algorithm has demonstrated its effectiveness by generating better results than a standard genetic algorithm and other traditional heuristic approaches, such as simulated annealing.

  5. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    Science.gov (United States)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling and analysis. The algorithms that have been developed thus far are adequate and have been proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  6. DOOCS environment for FPGA-based cavity control system and control algorithms development

    Energy Technology Data Exchange (ETDEWEB)

    Pucyk, P.; Koprek, W.; Kaleta, P.; Szewinski, J.; Pozniak, K.T.; Czarski, T.; Romaniuk, R.S. [Technical Univ. Warsaw (PL). Inst. of Electronic Systems (ISE)

    2005-07-01

    The paper describes the concept and realization of the DOOCS control software for the FPGA-based TESLA cavity controller and simulator (SIMCON). It is based on universal software components created for laboratory purposes and used in a MATLAB based control environment. These modules have been recently adapted to the DOOCS environment to ensure a unified software-to-hardware communication model. The presented solution can also be used as a general platform for control algorithm development. The proposed interfaces between the MATLAB and DOOCS modules allow the developed algorithm to be checked in the operational environment before implementation in the FPGA. As examples, two systems are presented. (orig.)

  7. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storm has serious disastrous impacts on environment, human health, and assets. The developments and applications of dust storm models have contributed significantly to better understand and predict the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain on different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  8. Fundamental analysis and algorithms for development of a mobile fast-scan lateral migration radiography system

    Science.gov (United States)

    Su, Zhong

    Lateral migration radiography (LMR) is a unique x-ray Compton backscatter imaging (CBI) technique to image surface and subsurface, or internal structure of an object. An x-ray pencil beam scans the interrogated area and the backscattered photons are registered by detectors which have varying degrees of collimation. In early LMR applications, either the LMR systems or the imaged objects are moved on a rectangular grid, and at each node, the systems register backscattered photon energy deposition as pixel intensity in acquired images. The mechanical movement of the system or objects from pixel to pixel causes prolonged image scan time with a high percentage of system dead time. To avoid this drawback, a particular x-ray beam formation technique is proposed and analyzed. A corresponding mobile, fast-scan LMR system is designed, fabricated and tested. The results show a two orders-of-magnitude reduction in image scan time compared with those of previous systems. The x-ray beam formation technique, based on a rotating collimator in the LMR system, implements surface line scan by sampling an x-ray fan beam. This rotating collimator yields unique imaging effects compared to those for an x-ray beam with fixed collimation and perpendicular incidence: (1) the speed of the x-ray beam spot on the scanned surface is not uniform; (2) constant movement of the x-ray beam spot changes the resolution in the image raster direction; (3) x-ray beam spot size changes with location on the scanned surface; (4) the object image shows a squeezed effect in the raster scan direction; (5) under a uniform background, the Compton scatter angular distribution causes the x-ray backscatter field to be stronger, when the x-ray beam has greater incidence angle; and (6) the x-ray illumination spot trace on the scanned surface is skewed. The physics generating these effects is analyzed with Monte Carlo computer simulations and/or measurements. Image acquisition and image processing algorithms are

  9. Development of a Multi-Objective Evolutionary Algorithm for Strain-Enhanced Quantum Cascade Lasers

    Directory of Open Access Journals (Sweden)

    David Mueller

    2016-07-01

    Full Text Available An automated design approach using an evolutionary algorithm for the development of quantum cascade lasers (QCLs) is presented. Our algorithmic approach merges computational intelligence techniques with the physics of device structures, representing a design methodology that reduces experimental effort and costs. The algorithm was developed to produce QCLs with a three-well, diagonal-transition active region and a five-well injector region. Specifically, we applied this technique to AlxGa1-xAs/InyGa1-yAs strained active region designs. The algorithmic approach is a non-dominated sorting method using four aggregate objectives: target wavelength, population inversion via longitudinal-optical (LO) phonon extraction, injector level coupling, and an optical gain metric. Analysis indicates that the most plausible device candidates are a result of the optical gain metric and a total aggregate of all objectives. However, design limitations exist in many of the resulting candidates, indicating a need for additional objective criteria and parameter limits to improve the application of this and other evolutionary algorithm methods.

  10. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    Science.gov (United States)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is a part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products that is being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5 - 1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash and biomass burning plumes. KNMI is assigned with the development of the ALH algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission that is to be launched in June or July 2016 (tentative date). The retrieval algorithm designed so far for the aerosol height product is based on the absorption characteristics of the oxygen A-band (759-770 nm). New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm compared to 0.4 nm for TROPOMI) and hourly observation from the geostationary orbit. The algorithm uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer with fixed width to represent the aerosol vertical distribution. The state vector includes, amongst other elements, the height of this layer and its aerosol optical
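
    The optimal-estimation spectral fit mentioned above is typically iterated in Gauss-Newton form; a minimal sketch of one such iteration with a toy linear forward model is given below. The Jacobian, the covariances, and the two-element state vector are illustrative assumptions, not the Sentinel-4/UVN configuration; the real algorithm fits modelled reflectances across the oxygen A-band.

```python
# Sketch of the Gauss-Newton optimal-estimation update:
#   x_{i+1} = x_a + G [y - F(x_i) + K (x_i - x_a)],  G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1
# with a toy linear forward model standing in for the A-band radiative transfer.
import numpy as np

rng = np.random.default_rng(0)
n_wav, n_state = 50, 2                      # state: [layer height, aerosol optical thickness]
K = rng.normal(size=(n_wav, n_state))       # Jacobian of the toy forward model
x_true = np.array([3.0, 0.8])
forward = lambda x: K @ x                   # toy (linear) forward model
y = forward(x_true) + rng.normal(0, 0.01, n_wav)

x_a = np.array([2.0, 0.5])                  # a priori state
S_a = np.diag([4.0, 1.0])                   # a priori covariance
S_e = 0.01**2 * np.eye(n_wav)               # measurement-noise covariance

x = x_a.copy()
for _ in range(5):
    G = np.linalg.solve(K.T @ np.linalg.solve(S_e, K) + np.linalg.inv(S_a),
                        K.T @ np.linalg.solve(S_e, np.eye(n_wav)))
    x = x_a + G @ (y - forward(x) + K @ (x - x_a))
print("retrieved state:", np.round(x, 3))
```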

  11. Detecting Intermittent Steering Activity ; Development of a Phase-detection Algorithm

    NARCIS (Netherlands)

    Silva Peixoto de Aboim Chaves, H.M. da; Pauwelussen, J.J.A.; Mulder, M.; Paassen, M.M. van; Happee, R.; Mulder, M.

    2012-01-01

    Drivers usually maintain an error-neglecting control strategy (passive phase) in keeping their vehicle on the road, only to change to an error-correcting approach (active phase) when the vehicle state becomes inadequate. We developed an algorithm that is capable of detecting whether the driver is

  12. Development of a thresholding algorithm for calcium classification at multiple CT energies

    Science.gov (United States)

    Ng, LY.; Alssabbagh, M.; Tajuddin, A. A.; Shuaib, I. L.; Zainon, R.

    2017-05-01

    The objective of this study was to develop a thresholding method for calcium classification at different concentrations using single-energy computed tomography. Five different concentrations of calcium chloride were filled in PMMA tubes and placed inside a water-filled PMMA phantom (diameter 10 cm). The phantom was scanned at 70, 80, 100, 120 and 140 kV using a SECT. CARE DOSE 4D was used and the slice thickness was set to 1 mm for all energies. The ImageJ software, developed at the National Institutes of Health (NIH), was used to measure the CT numbers for each calcium concentration from the CT images. The results were compared with those of the developed algorithm for verification. The percentage differences between the CT numbers measured with the developed algorithm and with ImageJ show similar results. The multi-thresholding algorithm was found to be able to distinguish different concentrations of calcium chloride. However, it was unable to detect low concentrations of calcium chloride and iron (III) nitrate with CT numbers between 25 HU and 65 HU. The developed thresholding method used in this study may help to differentiate between calcium plaques and other types of plaques in blood vessels, as it has proven able to detect high concentrations of calcium chloride. However, the algorithm needs to be improved to overcome its limitation in detecting calcium chloride solutions that have CT numbers similar to those of iron (III) nitrate solution.
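
    The multi-thresholding idea — binning measured CT numbers (HU) into concentration classes — can be sketched as below. The HU boundaries, class labels, and ROI means are invented placeholders, not the thresholds derived in the study.

```python
# Sketch of multi-thresholding of CT numbers (HU) into calcium-concentration
# classes. Thresholds and labels are assumed values for illustration only.
import numpy as np

hu_thresholds = [65, 120, 200, 300]          # assumed class boundaries in HU
labels = ["indeterminate (<= 65 HU)", "low Ca", "medium Ca", "high Ca", "very high Ca"]

def classify_calcium(ct_numbers_hu):
    idx = np.digitize(ct_numbers_hu, hu_thresholds)   # bin index 0..len(thresholds)
    return [labels[i] for i in idx]

roi_means = np.array([40.0, 90.0, 150.0, 250.0, 410.0])  # example mean HU per tube
for hu, cls in zip(roi_means, classify_calcium(roi_means)):
    print(f"{hu:6.1f} HU -> {cls}")
```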

  13. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to higher amount of traffic related activity on a global scale. ...

  14. Software development minimum guidance system. Algorithm and specifications of realizing special hardware processor data prefilter program

    International Nuclear Information System (INIS)

    Baginyan, S.A.; Govorun, N.N.; Tkhang, T.L.; Shigaev, V.N.

    1982-01-01

    The software development of a minimum guidance system for measuring bubble chamber pictures on the basis of a scanner (HPD) and a special hardware processor (SHP) is described. The algorithm of a selective filter is proposed. The local software structure and functional specifications of its major parts are described. Some examples of processing pictures from the HBC-1 (JINR) are also presented

  15. Chronic wrist pain: diagnosis and management. Development and use of a new algorithm

    NARCIS (Netherlands)

    van Vugt, R. M.; Bijlsma, J. W.; van Vugt, A. C.

    1999-01-01

    Chronic wrist pain can be difficult to manage and the differential diagnosis is extensive. To provide guidelines for assessment of the painful wrist an algorithm was developed to encourage a structured approach to the diagnosis and management of these patients. A review of the literature on causes

  16. Evaluation of nine HIV rapid test kits to develop a national HIV testing algorithm in Nigeria

    Directory of Open Access Journals (Sweden)

    Orji Bassey

    2015-05-01

    Full Text Available Background: Non-cold chain-dependent HIV rapid testing has been adopted in many resource-constrained nations as a strategy for reaching out to populations. HIV rapid test kits (RTKs) have the advantage of ease of use, low operational cost and short turnaround times. Before 2005, different RTKs had been used in Nigeria without formal evaluation. Between 2005 and 2007, a study was conducted to formally evaluate a number of RTKs and construct HIV testing algorithms. Objectives: The objectives of this study were to assess and select HIV RTKs and develop national testing algorithms. Method: Nine RTKs were evaluated using 528 well-characterised plasma samples. These comprised 198 HIV-positive specimens (37.5%) and 330 HIV-negative specimens (62.5%), collected nationally. Sensitivity and specificity were calculated with 95% confidence intervals for all nine RTKs singly and for serial and parallel combinations of six RTKs; and relative costs were estimated. Results: Six of the nine RTKs met the selection criteria, including minimum sensitivity and specificity (both ≥ 99.0%) requirements. There were no significant differences in sensitivities or specificities of RTKs in the serial and parallel algorithms, but the cost of RTKs in parallel algorithms was twice that in serial algorithms. Consequently, three serial algorithms, comprising four test kits (BundiTM, DetermineTM, Stat-Pak® and Uni-GoldTM) with 100.0% sensitivity and 99.1% – 100.0% specificity, were recommended and adopted as national interim testing algorithms in 2007. Conclusion: This evaluation provides the first evidence for reliable combinations of RTKs for HIV testing in Nigeria. However, these RTKs need further evaluation in the field (Phase II) to re-validate their performance.
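
    The per-kit sensitivity/specificity calculation with 95% confidence intervals can be sketched as below. The counts are invented, and the Wilson interval is used here as one common choice of method; the study does not specify which interval formula was applied.

```python
# Sketch of sensitivity/specificity with Wilson 95% confidence intervals for a
# rapid test kit evaluated against a characterised panel (invented counts).
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

tp, fn = 197, 1      # results on 198 HIV-positive samples (example counts)
tn, fp = 328, 2      # results on 330 HIV-negative samples (example counts)

sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity = {sens:.3%}, 95% CI = {tuple(round(x, 4) for x in wilson_ci(tp, tp + fn))}")
print(f"specificity = {spec:.3%}, 95% CI = {tuple(round(x, 4) for x in wilson_ci(tn, tn + fp))}")
```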

  17. The Cardiac Safety Research Consortium electrocardiogram warehouse: thorough QT database specifications and principles of use for algorithm development and testing.

    Science.gov (United States)

    Kligfield, Paul; Green, Cynthia L; Mortara, Justin; Sager, Philip; Stockbridge, Norman; Li, Michael; Zhang, Joanne; George, Samuel; Rodriguez, Ignacio; Bloomfield, Daniel; Krucoff, Mitchell W

    2010-12-01

    This document examines the formation, structure, and principles guiding the use of electrocardiogram (ECG) data sets obtained during thorough QT studies that have been derived from the ECG Warehouse of the Cardiac Safety Research Consortium (CSRC). These principles are designed to preserve the fairness and public interest of access to these data, commensurate with the mission of the CSRC. The data sets comprise anonymized XML formatted digitized ECGs and descriptive variables from placebo and positive control arms of individual studies previously submitted on a proprietary basis to the US Food and Drug Administration by pharmaceutical sponsors. Sponsors permit the release of these studies into the public domain through the CSRC on behalf of the Food and Drug Administration's Critical Path Initiative and public health interest. For algorithm research protocols submitted to and approved by CSRC, unblinded "training" ECG data sets are provided for algorithm development and for initial evaluation, whereas separate blinded "testing" data sets are used for formal algorithm evaluation in cooperation with the CSRC according to methods detailed in this document. Copyright © 2010 Mosby, Inc. All rights reserved.

  18. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  19. Research on Suspension with Novel Dampers Based on Developed FOA-LQG Control Algorithm

    Directory of Open Access Journals (Sweden)

    Xiao Ping

    2017-01-01

    To enhance the working-performance robustness of suspensions, a vehicle suspension with a permanent-magnet magnetic-valve magnetorheological damper (PMMVMD) was studied. Firstly, the mechanical structure of the traditional magnetorheological damper (MD) used in vehicle suspensions was redesigned by introducing a permanent magnet and a magnetic valve. Based on theories of electromagnetics and the Bingham model, a prediction model of the damping force was built. On this basis, a two-degree-of-freedom vehicle suspension model was established. In addition, a fruit fly optimization algorithm (FOA)-linear quadratic Gaussian (LQG) control algorithm suitable for PMMVMD suspensions was designed by further developing the normal FOA. Finally, comparative simulation experiments and bench tests were conducted by taking white noise and a sine wave as the road surface input, and the results indicated that the working performance of the PMMVMD suspension based on the FOA-LQG control algorithm was good.
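    The paper uses a fruit fly optimization algorithm (FOA) to tune an LQG controller; as a hedged illustration of the FOA search pattern only, the sketch below minimises a stand-in quadratic cost. The population size, step size and cost function are assumptions for illustration and are not taken from the paper.

```python
# Compact, generic fruit fly optimization (FOA) sketch for minimising a cost function.
# The quadratic test cost stands in for an LQG performance index; it is not the paper's model.
import numpy as np

def foa_minimize(cost, dim, pop=30, iters=200, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    swarm = rng.normal(size=dim)                 # current swarm location
    best_x, best_f = swarm.copy(), cost(swarm)
    for _ in range(iters):
        # "smell" phase: each fly searches randomly around the swarm location
        flies = swarm + step * rng.normal(size=(pop, dim))
        costs = np.apply_along_axis(cost, 1, flies)
        i = np.argmin(costs)
        if costs[i] < best_f:                    # "vision" phase: swarm moves to the best fly
            best_f, best_x = costs[i], flies[i].copy()
            swarm = flies[i].copy()
    return best_x, best_f

# usage: minimise a simple quadratic with its optimum at (3, 3)
x, f = foa_minimize(lambda q: float(np.sum((q - 3.0) ** 2)), dim=2)
print(x, f)
```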

  20. A sonification algorithm for developing the off-roads models for driving simulators

    Science.gov (United States)

    Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia

    2018-01-01

    In this paper, a sonification algorithm for developing the off-road models for driving simulators, is proposed. The aim of this algorithm is to overcome difficulties of heuristics identification which are best suited to a particular off-road profile built by measurements. The sonification algorithm is based on the stochastic polynomial chaos analysis suitable in solving equations with random input data. The fluctuations are generated by incomplete measurements leading to inhomogeneities of the cross-sectional curves of off-roads before and after deformation, the unstable contact between the tire and the road and the unreal distribution of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems and results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual reality environment with increased degrees of realism for driving simulators and higher user flexibility.

  1. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.
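    As a hedged, toolbox-independent illustration of the kind of reconstruction algorithm such building blocks let one prototype, the NumPy sketch below implements a basic SIRT iteration with an explicit toy system matrix. It does not use the ASTRA API; real toolboxes supply matrix-free (GPU) forward and back projectors instead of a dense matrix.

```python
# Minimal SIRT sketch: x <- x + C A^T R (p - A x), with R and C the inverse row/column sums of A.
# The 3x4 system matrix and data are synthetic; this is an illustration, not the ASTRA code.
import numpy as np

def sirt(A, p, n_iter=100):
    row_sums, col_sums = A.sum(axis=1), A.sum(axis=0)
    R = np.where(row_sums > 0, 1.0 / row_sums, 0.0)   # per-ray weights
    C = np.where(col_sums > 0, 1.0 / col_sums, 0.0)   # per-pixel weights
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += C * (A.T @ (R * (p - A @ x)))
    return x

# toy example: 4-pixel "volume" seen by 3 ray sums (underdetermined, so the result
# is a SIRT-consistent solution rather than the exact ground truth)
A = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]], dtype=float)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
print(sirt(A, A @ x_true))
```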

  2. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    International Nuclear Information System (INIS)

    Aarle, Wim van; Palenstijn, Willem Jan; De Beenhouwer, Jan; Altantzis, Thomas; Bals, Sara; Batenburg, K. Joost; Sijbers, Jan

    2015-01-01

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series

  3. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that host filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA (“ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning”, Gandola et al., 2016 [1]). This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim, we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to remove background signals from the input images; spline curves and the least-squares method were then used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
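    To make the preprocessing step concrete, here is a minimal sketch, assuming SciPy is available, of a Sobel gradient-magnitude filter used to suppress smooth background before filament detection. The synthetic image and threshold are illustrative and do not reproduce the published ACQUA pipeline.

```python
# Sobel-based background suppression: compute the gradient magnitude and keep strong edges.
import numpy as np
from scipy import ndimage

def sobel_magnitude(image):
    gx = ndimage.sobel(image, axis=0, mode="reflect")
    gy = ndimage.sobel(image, axis=1, mode="reflect")
    return np.hypot(gx, gy)

# synthetic frame: smooth background gradient plus a bright filament-like line
img = np.fromfunction(lambda i, j: 0.1 * j, (128, 128))
img[60:63, 10:110] += 1.0

edges = sobel_magnitude(img)
mask = edges > edges.mean() + 3 * edges.std()      # crude foreground mask (placeholder threshold)
print(mask.sum(), "candidate filament pixels")
```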

  4. Development and validation of a risk-prediction algorithm for the recurrence of panic disorder.

    Science.gov (United States)

    Liu, Yan; Sareen, Jitender; Bolton, James; Wang, JianLi

    2015-05-01

    To develop and validate a risk prediction algorithm for the recurrence of panic disorder. Three-year longitudinal data were taken from the National Epidemiologic Survey on Alcohol and Related Conditions (2001/2002-2004/2005). One thousand six hundred and eighty one participants with a lifetime panic disorder and who had not had panic attacks for at least 2 months at baseline were included. The development cohort included 949 participants; 732 from different census regions were in the validation cohort. Recurrence of panic disorder over the follow-up period was assessed using the Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria. Logistic regression was used for deriving the algorithm. Discrimination and calibration were assessed in the development and the validation cohorts. The developed algorithm consisted of 11 predictors: age, sex, panic disorder in the past 12 months, nicotine dependence, rapid heartbeat/tachycardia, taking medication for panic attacks, feelings of choking and persistent worry about having another panic attack, two personality traits, and childhood trauma. The algorithm had good discriminative power (C statistic = 0.7863, 95% CI: 0.7487, 0.8240). The C statistic was 0.7283 (95% CI: 0.6889, 0.7764) in the external validation data set. The developed risk algorithm for predicting the recurrence of panic disorder has good discrimination and excellent calibration. Data related to the predictors can be easily attainable in routine clinical practice. It can be used by clinicians to calculate the probability of recurrence of panic disorder in the next 3 years for individual patients, communicate with patients regarding personal risks, and thus improve personalized treatment approaches. © 2015 Wiley Periodicals, Inc.
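    As a hedged illustration of the general workflow described above (fitting a logistic risk model on a development cohort and reporting the C statistic on a separate validation cohort), the sketch below uses scikit-learn on simulated data. The cohort sizes echo the record, but the predictors, coefficients and outcomes are synthetic, not the survey variables.

```python
# Illustrative only: derive a logistic risk algorithm and report its C statistic (ROC AUC).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X_dev, X_val = rng.normal(size=(949, 11)), rng.normal(size=(732, 11))   # 11 hypothetical predictors
beta = rng.normal(size=11)
y_dev = rng.binomial(1, 1 / (1 + np.exp(-X_dev @ beta)))
y_val = rng.binomial(1, 1 / (1 + np.exp(-X_val @ beta)))

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
print("C statistic (development):", roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]))
print("C statistic (validation): ", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```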

  5. The development of gamma energy identify algorithm for compact radiation sensors using stepwise refinement technique

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Jun [Div. of Radiation Regulation, Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Ye Won; Kim, Hyun Duk; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Yi, Yun [Dept. of of Electronics and Information Engineering, Korea University, Seoul (Korea, Republic of)

    2017-06-15

    A gamma energy identification algorithm using spectral decomposition combined with a smoothing method was suggested to confirm the existence of artificial radioisotopes. The algorithm combines the original pattern-recognition method with a smoothing method to enhance gamma-energy identification for radiation sensors that have low energy resolution. The gamma energy identification algorithm for the compact radiation sensor is a three-step refinement process. Firstly, the magnitude set is calculated by the original spectral decomposition. Secondly, the magnitude of the modeling error in the magnitude set is reduced by the smoothing method. Thirdly, the expected gamma energy is finally decided based on the enhanced magnitude set resulting from the spectral decomposition with the smoothing method. The algorithm was optimized for the designed radiation sensor composed of a CsI(Tl) scintillator and a silicon PIN diode. The two performance parameters used to evaluate the algorithm are the accuracy of the expected gamma energy and the number of repeated calculations. The original gamma energy was accurately identified for single-energy gamma radiation by adopting this modeling-error reduction method. The average error also decreased by half for multi-energy gamma radiation in comparison with the original spectral decomposition. In addition, the number of repeated calculations also decreased by half, even in low-fluence conditions below 10⁴ (per 0.09 cm² of the scintillator surface). Through the development of this algorithm, we have confirmed the possibility of developing a product that can identify nearby artificial radionuclides using inexpensive radiation sensors that are easy for the public to use. Therefore, it can contribute to reducing public anxiety about exposure by determining the presence of artificial radionuclides in the vicinity.
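    The sketch below illustrates, under stated assumptions, the general idea of spectral decomposition combined with smoothing: a noisy low-resolution spectrum is smoothed (here with a Savitzky-Golay filter) and decomposed onto a small library of reference responses by non-negative least squares. The Gaussian response library and noise level are assumptions; this is not the detector model or decomposition used in the paper.

```python
# Smooth a noisy spectrum, then estimate the "magnitude set" over reference responses with NNLS.
import numpy as np
from scipy.optimize import nnls
from scipy.signal import savgol_filter

channels = np.arange(512)
def peak(center, width=25.0):
    """Synthetic single-energy detector response (Gaussian placeholder)."""
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

library = np.column_stack([peak(150), peak(300), peak(420)])      # 3 reference energies
true_mix = np.array([0.0, 1.0, 0.4])
spectrum = library @ true_mix + np.random.default_rng(0).normal(0, 0.05, channels.size)

smoothed = savgol_filter(spectrum, window_length=31, polyorder=3)  # smoothing step
magnitudes, _ = nnls(library, smoothed)                            # decomposition step
print("estimated magnitude set:", magnitudes)   # the largest entries flag the present energies
```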

  6. SPHERES as Formation Flight Algorithm Development and Validation Testbed: Current Progress and Beyond

    Science.gov (United States)

    Kong, Edmund M.; Saenz-Otero, Alvar; Nolet, Simon; Berkovitz, Dustin S.; Miller, David W.; Sell, Steve W.

    2004-01-01

    The MIT-SSL SPHERES testbed provides a facility for the development of algorithms necessary for the success of Distributed Satellite Systems (DSS). The initial development contemplated formation flight and docking control algorithms; SPHERES now supports the study of metrology, control, autonomy, artificial intelligence, and communications algorithms and their effects on DSS projects. To support this wide range of topics, the SPHERES design contemplated the need to support multiple researchers, as reflected in both the hardware and software designs. The SPHERES operational plan further facilitates the development of algorithms by multiple researchers, while the operational locations incrementally increase the ability of the tests to operate in a representative environment. In this paper, an overview of the SPHERES testbed is first presented. The SPHERES testbed serves as a model of the design philosophies that allow for the various research efforts being carried out on such a facility. The implementation of these philosophies is further highlighted in the three different programs that are currently scheduled for testing onboard the International Space Station (ISS) and the three that are proposed for a re-flight mission: Mass Property Identification, Autonomous Rendezvous and Docking, and TPF Multiple Spacecraft Formation Flight in the first flight, and Precision Optical Pointing, Tethered Formation Flight and Mars Orbit Sample Retrieval for the re-flight mission.

  7. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with ITBs (internal thermal barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  8. Development of an IMU-based foot-ground contact detection (FGCD) algorithm.

    Science.gov (United States)

    Kim, Myeongkyu; Lee, Donghun

    2017-03-01

    It is well known that, to locate humans in GPS-denied environments, a lower-limb kinematic solution based on an Inertial Measurement Unit (IMU), force plate, and pressure insoles is essential. The force plate and pressure insole are used to detect foot-ground contacts. However, the use of multiple sensors is not desirable in most cases. This paper documents the development of an IMU-based FGCD (foot-ground contact detection) algorithm considering the variations of both walking terrain and speed. All IMU outputs showing significant changes at the moments of the foot-ground contact phases are fully identified through experiments on five walking terrains. For the experiment on each walking terrain, variations of walking speed are also examined to confirm the correlations between walking speed and the main parameters in the FGCD algorithm. As a result, an FGCD algorithm that successfully detects the four contact phases is developed, and its performance is validated. Practitioner Summary: In this research, it was demonstrated that the four contact phases of heel strike (or toe strike), full contact, heel off and toe off can be detected independently of walking speed and terrain, based on detection criteria composed of the ranges and rates of change of the main parameters measured by the Inertial Measurement Unit sensors.
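    A hypothetical, highly simplified sketch of threshold-style contact detection from IMU data follows. The thresholds, the "foot flat at about 1 g" heuristic and the coarse three-label output are placeholders; they do not reproduce the published FGCD detection criteria for the four phases, which combine ranges and rates of change of several IMU outputs.

```python
# Placeholder contact detector: label samples from the foot-mounted accelerometer magnitude
# using a range threshold (quiet, ~1 g => foot flat) and a rate-of-change threshold (impact).
import numpy as np

def contact_events(acc_mag, fs=100.0, impact_thr=1.8, flat_tol=0.1, rate_thr=20.0):
    """acc_mag: acceleration magnitude in g. Returns a coarse per-sample label."""
    rate = np.abs(np.gradient(acc_mag)) * fs                      # rate of change, g/s
    labels = np.full(acc_mag.shape, "swing", dtype=object)
    labels[np.abs(acc_mag - 1.0) < flat_tol] = "full_contact"     # foot flat: quiet signal near 1 g
    labels[(acc_mag > impact_thr) & (rate > rate_thr)] = "strike" # sharp impact at heel/toe strike
    return labels

# synthetic trace: gravity baseline plus a short heel-strike-like spike
t = np.linspace(0, 1, 100)
acc = 1.0 + 1.2 * np.exp(-((t - 0.3) / 0.02) ** 2)
print(set(contact_events(acc)))
```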

  9. jClustering, an open framework for the development of 4D clustering algorithms.

    Directory of Open Access Journals (Sweden)

    José María Mateos-Pérez

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary.

  10. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with ITBs (internal thermal barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  11. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  12. Development of an Evolutionary Algorithm for the ab Initio Discovery of Two-Dimensional Materials

    Science.gov (United States)

    Revard, Benjamin Charles

    Crystal structure prediction is an important first step on the path toward computational materials design. Increasingly robust methods have become available in recent years for computing many materials properties, but because properties are largely a function of crystal structure, the structure must be known before these methods can be brought to bear. In addition, structure prediction is particularly useful for identifying low-energy structures of subperiodic materials, such as two-dimensional (2D) materials, which may adopt unexpected structures that differ from those of the corresponding bulk phases. Evolutionary algorithms, which are heuristics for global optimization inspired by biological evolution, have proven to be a fruitful approach for tackling the problem of crystal structure prediction. This thesis describes the development of an improved evolutionary algorithm for structure prediction and several applications of the algorithm to predict the structures of novel low-energy 2D materials. The first part of this thesis contains an overview of evolutionary algorithms for crystal structure prediction and presents our implementation, including details of extending the algorithm to search for clusters, wires, and 2D materials, improvements to efficiency when running in parallel, improved composition space sampling, and the ability to search for partial phase diagrams. We then present several applications of the evolutionary algorithm to 2D systems, including InP, the C-Si and Sn-S phase diagrams, and several group-IV dioxides. This thesis makes use of the Cornell graduate school's "papers" option. Chapters 1 and 3 correspond to the first-author publications of Refs. [131] and [132], respectively, and chapter 2 will soon be submitted as a first-author publication. The material in chapter 4 is taken from Ref. [144], in which I share joint first-authorship. In this case I have included only my own contributions.
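    As a generic sketch of the evolutionary loop described above (evaluate, select, vary), the code below runs a toy real-valued search. It is not the thesis' structure-prediction code, which evaluates candidates with ab initio energies and uses structure-specific variation operators; the fitness function, population size and mutation scale here are illustrative assumptions.

```python
# Bare-bones evolutionary algorithm: keep the fittest half, recombine pairs, mutate, repeat.
import numpy as np

def evolve(fitness, dim, pop_size=20, generations=50, mut_sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))                   # random initial "structures"
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # selection: fittest half survives
        # variation: average two random parents (crossover) and add Gaussian mutation
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        pop = parents[idx].mean(axis=1) + mut_sigma * rng.normal(size=(pop_size, dim))
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)], scores.min()

# usage: a simple quadratic stands in for the "energy" to be minimised
best, energy = evolve(lambda x: float(np.sum(x ** 2)), dim=4)
print(best, energy)
```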

  13. [Development of an algorithm to predict the incidence of major depression among primary care consultants].

    Science.gov (United States)

    Saldivia, Sandra; Vicente, Benjamin; Marston, Louise; Melipillán, Roberto; Nazareth, Irwin; Bellón-Saameño, Juan; Xavier, Miguel; Maaroos, Heidi Ingrid; Svab, Igor; Geerlings, M-I; King, Michael

    2014-03-01

    The reduction of major depression incidence is a public health challenge. The aim of this study was to develop an algorithm to estimate the risk of occurrence of major depression in patients attending primary health centers (PHC). This was a prospective cohort study of a random sample of 2,832 patients attending PHC centers in Concepción, Chile, with evaluations at baseline, six and twelve months. Thirty-nine known risk factors for depression were measured to build a model, using logistic regression. The algorithm was developed in 2,133 patients not depressed at baseline and compared with risk algorithms developed in a sample of 5,216 European primary care attenders. The main outcome was the incidence of major depression in the follow-up period. The cumulative incidence of depression during the 12-month follow-up in Chile was 12%. Eight variables were identified. Four corresponded to the patient (gender, age, history of depression and educational level) and four to the patient's current situation (physical and mental health, satisfaction with their situation at home and satisfaction with the relationship with their partner). The C-index, used to assess the discriminating power of the final model, was 0.746 (95% confidence interval (CI) 0.707-0.785), slightly lower than that of the equation obtained in European (0.790, 95% CI 0.767-0.813) and Spanish attenders (0.82, 95% CI 0.79-0.84). Four of the factors identified in the risk algorithm are not modifiable. The other two factors are directly associated with the primary support network (family and partner). This risk algorithm for the incidence of major depression provides a tool that can guide efforts towards the design, implementation and evaluation of the effectiveness of interventions to prevent major depression.

  14. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Meeting the adequacy requirements for such an algorithm makes it possible to evaluate the appropriateness of investments in fixed assets and to study the final financial results of an industrial enterprise depending on management decisions in the depreciation policy. It is necessary to note that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures for the lines of structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for the further development of a flowchart for subsequent software implementation. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.

  15. Development of an algorithm for analysing the electronic measurement of medication adherence in routine HIV care.

    Science.gov (United States)

    Rotzinger, Aurélie; Cavassini, Matthias; Bugnon, Olivier; Schneider, Marie Paule

    2016-10-01

    Background Medication adherence is crucial for successful treatment. Various methods exist for measuring adherence, including electronic drug monitoring, pharmacy refills, pill count, and interviews. These methods are not equivalent, and no method can be considered the gold standard. A combination of methods is therefore recommended. Objective To develop an algorithm for the management of routinely collected adherence data and to compare persistence and implementation curves using post-algorithm data (reconciled data) versus raw electronic drug monitoring data. Setting A community pharmacy located within a university medical outpatient clinic in Lausanne, Switzerland. Methods The algorithm was developed to take advantage of the strengths of each available adherence measurement method, with electronic drug monitoring as a cornerstone to capture the dynamics of patient behaviour, pill count as a complementary objective method to detect any discrepancy between the number of openings measured by electronic monitoring and the number of pills ingested per opening, and annotated interviews to interpret the discrepancy. The algorithm was tested using data from patients taking lopinavir/r who had participated in an adherence-enhancing programme for more than 3 months. Main outcome measure Adherence was calculated as the percentage of persistent patients (persistence) and the proportion of days with correct dosing over time (implementation) from inclusion to the end of the median follow-up period. Results A 10-step algorithm was established. Among 2041 analysed inter-visit periods, 496 (24%) were classified as inaccurate, among which 372 (75%) could be reconciled. The average implementation values were 85% (raw data) and 91% (reconciled data). Combining electronic drug monitoring, pill count and patient interviews is possible within the setting of a medication adherence clinic. Electronic drug monitoring underestimates medication adherence, affecting subsequent

  16. MEMS-based sensing and algorithm development for fall detection and gait analysis

    Science.gov (United States)

    Gupta, Piyush; Ramirez, Gabriel; Lie, Donald Y. C.; Dallas, Tim; Banister, Ron E.; Dentino, Andrew

    2010-02-01

    Falls by the elderly are highly detrimental to health, frequently resulting in injury, high medical costs, and even death. Using a MEMS-based sensing system, algorithms are being developed for detecting falls and monitoring the gait of elderly and disabled persons. In this study, wireless sensors utilizing Zigbee protocols were incorporated into planar shoe insoles and a waist-mounted device. The insole contains four sensors to measure the pressure applied by the foot. A MEMS-based tri-axial accelerometer is embedded in the insert and a second one is utilized by the waist-mounted device. The primary fall-detection algorithm is derived from the waist accelerometer. The differential acceleration is calculated from samples received in 1.5 s time intervals. This differential acceleration provides the quantification via an energy index. From this index one may characterize different gaits and identify fall events. Once a pre-determined index threshold is exceeded, the algorithm will classify an event as a fall or a stumble. The secondary algorithm is derived from frequency analysis techniques. The analysis consists of wavelet transforms conducted on the waist accelerometer data. The insole pressure data is then used to highlight discrepancies in the transforms, providing more accurate data for classifying gait and/or detecting falls. The range of the transform amplitude in the fourth iteration of a Daubechies-6 transform was found sufficient to detect and classify fall events.
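    To make the energy-index idea concrete, here is a hedged sketch that sums the absolute differential acceleration of the waist-accelerometer magnitude over 1.5 s windows and compares each window with a threshold. The sampling rate and threshold value are assumptions, not the study's calibrated parameters.

```python
# Windowed "energy index" from differential acceleration, with a placeholder fall threshold.
import numpy as np

def energy_index(acc, fs=50.0, window_s=1.5):
    """acc: (N, 3) accelerations. Returns one index per non-overlapping 1.5 s window."""
    win = int(window_s * fs)
    mag = np.linalg.norm(acc, axis=1)
    diff = np.abs(np.diff(mag, prepend=mag[0]))        # differential acceleration
    n = len(diff) // win
    return diff[: n * win].reshape(n, win).sum(axis=1)

def classify(acc, fs=50.0, fall_threshold=15.0):       # placeholder threshold
    return ["fall/stumble" if e > fall_threshold else "normal gait"
            for e in energy_index(acc, fs)]

# usage: a quiet synthetic signal classifies every window as normal gait
rng = np.random.default_rng(1)
quiet = 1.0 + 0.05 * rng.normal(size=(300, 3))
print(classify(quiet))
```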

  17. Development of a Novel Probabilistic Algorithm for Localization of Rotors during Atrial Fibrillation

    Science.gov (United States)

    Ganesan, Prasanth; Salmin, Anthony; Cherry, Elizabeth M.; Ghoraani, Behnaz

    2018-01-01

    Atrial fibrillation (AF) is an irregular heart rhythm that can lead to stroke and other heart-related complications. Catheter ablation has been commonly used to destroy triggering sources of AF in the atria and consequently terminate the arrhythmia. However, efficient and accurate localization of the AF sustaining sources known as rotors is a major challenge in catheter ablation. In this paper, we developed a novel probabilistic algorithm that can adaptively guide a Lasso diagnostic catheter to locate the center of a rotor. Our algorithm uses a Bayesian updating approach to search for and locate rotors based on the characteristics of electrogram signals collected at every catheter placement. The algorithm was evaluated using a 10 × 10 cm 2D atrial tissue simulation of the Nygren human atrial cell model and was able to successfully guide the catheter to the rotor center in 3.37±1.05 (mean±std) steps (including placement at the center) when starting from any location on the tissue. Our novel automated algorithm can potentially play a significant role in patient-specific ablation of AF sources and increase the success of AF elimination procedures. PMID:28268378
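    As a schematic illustration of the Bayesian updating loop described above, the sketch below maintains a posterior over candidate rotor locations on a grid and moves the "catheter" to the posterior mode after each placement. The distance-based feature and Gaussian likelihood are placeholders for the paper's electrogram-derived model, and all numerical values are assumptions.

```python
# Grid-based Bayesian search: each placement yields a feature whose likelihood is highest
# for grid points consistent with the observation; the posterior concentrates near the rotor.
import numpy as np

grid = np.stack(np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50)), -1).reshape(-1, 2)
posterior = np.full(len(grid), 1.0 / len(grid))        # uniform prior over the 10x10 cm tissue
true_rotor = np.array([7.2, 3.5])                      # hypothetical ground truth
placement = np.array([5.0, 5.0])                       # first catheter placement

for step in range(5):
    # placeholder feature: placements closer to the rotor yield a "stronger" signal
    observed = np.exp(-np.linalg.norm(placement - true_rotor) / 2.0)
    predicted = np.exp(-np.linalg.norm(grid - placement, axis=1) / 2.0)
    likelihood = np.exp(-(predicted - observed) ** 2 / 0.02)
    posterior *= likelihood
    posterior /= posterior.sum()
    placement = grid[np.argmax(posterior)]             # next placement at the posterior mode
print("final placement:", placement)                   # concentrates near the rotor region
```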

  18. Modelling Kara Sea phytoplankton primary production: Development and skill assessment of regional algorithms

    Science.gov (United States)

    Demidov, Andrey B.; Kopelevich, Oleg V.; Mosharov, Sergey A.; Sheberstov, Sergey V.; Vazyulya, Svetlana V.

    2017-07-01

    Empirical region-specific (RSM), depth-integrated (DIM) and depth-resolved (DRM) primary production models are developed based on data from the Kara Sea during the autumn (September-October 1993, 2007, 2011). The models are validated using field and satellite (MODIS-Aqua) observations. Our findings suggest that RSM algorithms perform better than non-region-specific algorithms (NRSM) in terms of regression analysis, root-mean-square difference (RMSD) and model efficiency. In general, the RSM and NRSM underestimate or overestimate the in situ water-column integrated primary production (IPP) by factors of 2 and 2.8, respectively. Additionally, our results suggest that the model skill of the RSM increases when the chlorophyll-specific carbon fixation rate, the efficiency of photosynthesis and photosynthetically available radiation (PAR) are used as input variables. The parameterization of chlorophyll (chl a) vertical profiles is performed in Kara Sea waters with different trophic statuses. Model validation with field data suggests that the DIM and DRM algorithms perform equally well (RMSD of 0.29 and 0.31, respectively). No changes in the performance of the DIM and DRM algorithms are observed (RMSD of 0.30 and 0.31, respectively) when satellite-derived chl a, PAR and the diffuse attenuation coefficient (Kd) are applied as input variables.

  19. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  20. Development and characterization of an anthropomorphic breast software phantom based upon region-growing algorithm

    Science.gov (United States)

    Bakic, Predrag R.; Zhang, Cuiping; Maidment, Andrew D. A.

    2011-01-01

    Purpose: We present a novel algorithm for computer simulation of breast anatomy for generation of anthropomorphic software breast phantoms. A realistic breast simulation is necessary for preclinical validation of volumetric imaging modalities. Methods: The anthropomorphic software breast phantom simulates the skin, regions of adipose and fibroglandular tissue, and the matrix of Cooper’s ligaments and adipose compartments. The adipose compartments are simulated using a seeded region-growing algorithm; compartments are grown from a set of seed points with specific orientation and growing speed. The resulting adipose compartments vary in shape and size similar to real breasts; the adipose region has a compact coverage by adipose compartments of various sizes, while the fibroglandular region has fewer, more widely separated adipose compartments. Simulation parameters can be selected to cover the breadth of variations in breast anatomy observed clinically. Results: When simulating breasts of the same glandularity with different numbers of adipose compartments, the average compartment volume was proportional to the phantom size and inversely proportional to the number of simulated compartments. The use of the software phantom in clinical image simulation is illustrated by synthetic digital breast tomosynthesis images of the phantom. The proposed phantom design was capable of simulating breasts of different size, glandularity, and adipose compartment distribution. The region-growing approach allowed us to simulate adipose compartments with various size and shape. Qualitatively, simulated x-ray projections of the phantoms, generated using the proposed algorithm, have a more realistic appearance compared to previous versions of the phantom. Conclusions: A new algorithm for computer simulation of breast anatomy has been proposed that improved the realism of the anthropomorphic software breast phantom. PMID:21815391
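    A minimal 2D seeded region-growing sketch follows, to illustrate the mechanism of compartments growing from seed points at different speeds until they collide. The real phantom is 3D and additionally controls compartment orientation, so this is an illustrative analogue under simplifying assumptions, not the published algorithm.

```python
# Seeded region growing on a 2D grid: each labeled compartment expands its front into
# unlabeled neighbours, with faster compartments expanding more often.
import numpy as np

def grow_regions(shape, seeds, speeds, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.zeros(shape, dtype=int)
    fronts = {k + 1: [tuple(s)] for k, s in enumerate(seeds)}
    for k, s in enumerate(seeds):
        labels[tuple(s)] = k + 1
    for _ in range(n_steps):
        for k, speed in enumerate(speeds, start=1):
            if rng.random() > speed:                 # growth probability models growing speed
                continue
            new_front = []
            for (i, j) in fronts[k]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < shape[0] and 0 <= nj < shape[1] and labels[ni, nj] == 0:
                        labels[ni, nj] = k
                        new_front.append((ni, nj))
            fronts[k] = new_front or fronts[k]
    return labels

labels = grow_regions((64, 64), seeds=[(10, 10), (40, 45), (25, 55)], speeds=[0.9, 0.5, 0.7])
print(np.bincount(labels.ravel()))                   # pixels per compartment (index 0 = unfilled)
```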

  1. Forecasting of the development of professional medical equipment engineering based on neuro-fuzzy algorithms

    Science.gov (United States)

    Vaganova, E. V.; Syryamkin, M. V.

    2015-11-01

    The purpose of the research is the development of evolutionary algorithms for assessments of promising scientific directions. The main attention of the present study is paid to the evaluation of the foresight possibilities for identification of technological peaks and emerging technologies in professional medical equipment engineering in Russia and worldwide on the basis of intellectual property items and neural network modeling. An automated information system consisting of modules implementing various classification methods for accuracy of the forecast improvement and the algorithm of construction of neuro-fuzzy decision tree have been developed. According to the study result, modern trends in this field will focus on personalized smart devices, telemedicine, bio monitoring, «e-Health» and «m-Health» technologies.

  2. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon R [ORNL

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
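    To illustrate the parameterization step (fitting simulated point scatter functions with a Gaussian so that corrections can be applied without rerunning Monte Carlo each time), here is a hedged SciPy sketch on a synthetic 1D profile. The profile, units and noise level are made up for illustration and are not NMIS data.

```python
# Fit a Gaussian to a noisy simulated point scatter function (PScF) profile.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2)

x = np.linspace(-10, 10, 201)                         # detector position (arbitrary units)
rng = np.random.default_rng(0)
pscf = gaussian(x, 0.8, 0.0, 2.5) + rng.normal(0, 0.02, x.size)   # noisy synthetic PScF

params, _ = curve_fit(gaussian, x, pscf, p0=(1.0, 0.0, 1.0))
print("fitted (amplitude, center, sigma):", params)   # these parameters stand in for the stored fit
```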

  3. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using

  4. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    OpenAIRE

    Keller Alevtina; Vinogradova Tatyana

    2017-01-01

    The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Implementation of terms of the...

  5. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    Science.gov (United States)

    Gordon, Howard R.

    1997-01-01

    Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that of breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps; however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction of the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.

  6. Measuring river from the cloud - River width algorithm development on Google Earth Engine

    Science.gov (United States)

    Yang, X.; Pavelsky, T.; Allen, G. H.; Donchyts, G.

    2017-12-01

    Rivers are some of the most dynamic features of the terrestrial land surface. They help distribute freshwater, nutrients, and sediment, and they are also responsible for some of the greatest natural hazards. Despite their importance, our understanding of river behavior is limited at the global scale, in part because we do not have a river observational dataset that spans both time and space. Remote sensing data represent a rich, largely untapped resource for observing river dynamics. In particular, publicly accessible archives of satellite optical imagery, which date back to the 1970s, can be used to study the planview morphodynamics of rivers at the global scale. Here we present an image processing algorithm, developed on the Google Earth Engine cloud-based platform, that automatically extracts river centerlines and widths from Landsat 5, 7, and 8 scenes at 30 m resolution. Our algorithm makes use of the latest monthly global surface water history dataset and the existing Global River Width from Landsat (GRWL) dataset to efficiently extract river masks from each Landsat scene. A combination of distance-transform and skeletonization techniques is then used to extract river centerlines. Finally, our algorithm calculates the wetted river width at each centerline pixel, perpendicular to its local centerline direction. We validated this algorithm using in situ data estimated from 16 USGS gauge stations (N=1781). We find that 92% of the width differences are within 60 m (i.e. the minimum length of 2 Landsat pixels). Leveraging Earth Engine's infrastructure of collocated data and processing power, our goal is to use this algorithm to reconstruct the morphodynamic history of rivers globally by processing over 100,000 Landsat 5 scenes covering 1984 to 2013.
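    As a hedged, non-Earth-Engine illustration of the centerline-and-width step, the sketch below skeletonizes a binary river mask with scikit-image and takes twice the Euclidean distance transform at centerline pixels as the local width. The rectangular mask is synthetic, and the 30 m pixel size simply mirrors Landsat resolution.

```python
# Centerline from skeletonization; local width from the distance to the nearest bank.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

mask = np.zeros((100, 100), dtype=bool)
mask[45:55, 5:95] = True                        # a straight, 10-pixel-wide synthetic "river"

dist = distance_transform_edt(mask)             # distance to the nearest non-river pixel (pixels)
centerline = skeletonize(mask)                  # 1-pixel-wide centerline
width_m = 2.0 * dist[centerline] * 30.0         # width at each centerline pixel, in metres
print(width_m.mean())                           # roughly 300 m for this 10-pixel-wide reach
```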

  7. Development and evaluation of an articulated registration algorithm for human skeleton registration

    Science.gov (United States)

    Yip, Stephen; Perk, Timothy; Jeraj, Robert

    2014-03-01

    Accurate registration over multiple scans is necessary to assess the treatment response of bone diseases (e.g. metastatic bone lesions). This study aimed to develop and evaluate an articulated registration algorithm for whole-body skeleton registration in human patients. In articulated registration, whole-body skeletons are registered by auto-segmenting them into individual bones using atlas-based segmentation, and then rigidly aligning them. Sixteen patients (weight = 80-117 kg, height = 168-191 cm) with advanced prostate cancer underwent pre- and mid-treatment PET/CT scans over a course of cancer therapy. Skeletons were extracted from the CT images by thresholding (HU>150). Skeletons were registered using the articulated, rigid, and deformable registration algorithms to account for position and postural variability between scans. The inter-observer agreement in the atlas creation, the agreement between the manually and atlas-based segmented bones, and the registration performance of all three registration algorithms were assessed using the Dice similarity index: DSIobserved, DSIatlas, and DSIregister. The Hausdorff distance (dHausdorff) of the registered skeletons was also used for registration evaluation. Nearly negligible inter-observer variability was found in the bone atlas creation, as the DSIobserver was 96 ± 2%. Atlas-based and manually segmented bones were in excellent agreement, with DSIatlas of 90 ± 3%. The articulated (DSIregister = 75 ± 2%, dHausdorff = 0.37 ± 0.08 cm) and deformable registration algorithms (DSIregister = 77 ± 3%, dHausdorff = 0.34 ± 0.08 cm) considerably outperformed the rigid registration algorithm (DSIregister = 59 ± 9%, dHausdorff = 0.69 ± 0.20 cm) in the skeleton registration, as the rigid registration algorithm failed to capture the skeleton flexibility in the joints. Despite superior skeleton registration performance, the deformable registration algorithm failed to preserve the local rigidity of bones as over 60% of the
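    For reference, the two agreement metrics quoted above can be computed as in the following sketch: the Dice similarity index of two binary masks, and a symmetric Hausdorff distance via SciPy's directed_hausdorff. The toy masks are illustrative, not patient data.

```python
# Dice similarity index and symmetric Hausdorff distance between two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((50, 50), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((50, 50), dtype=bool); b[12:32, 12:32] = True       # same square, shifted by 2 pixels

pts_a, pts_b = np.argwhere(a), np.argwhere(b)
hd = max(directed_hausdorff(pts_a, pts_b)[0], directed_hausdorff(pts_b, pts_a)[0])
print(f"DSI = {dice(a, b):.2f}, Hausdorff distance = {hd:.2f} pixels")
```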

  8. Development of a novel algorithm to determine adherence to chronic pain treatment guidelines using administrative claims

    Directory of Open Access Journals (Sweden)

    Margolis JM

    2017-02-01

    Jay M Margolis,1 Nicole Princic,2 David M Smith,2 Lucy Abraham,3 Joseph C Cappelleri,4 Sonali N Shah,5 Peter W Park5 1Truven Health Analytics, Bethesda, MD, USA; 2Truven Health Analytics, Cambridge, MA, USA; 3Pfizer Ltd, Tadworth, UK; 4Pfizer Inc, Groton, CT, USA; 5Pfizer Inc, New York, NY, USA Objective: To develop a claims-based algorithm for identifying patients who are adherent versus nonadherent to published guidelines for chronic pain management. Methods: Using medical and pharmacy health care claims from the MarketScan® Commercial and Medicare Supplemental Databases, patients were selected during July 1, 2010, to June 30, 2012, with the following chronic pain conditions: osteoarthritis (OA), gout (GT), painful diabetic peripheral neuropathy (pDPN), post-herpetic neuralgia (PHN), and fibromyalgia (FM). Patients newly diagnosed with 12 months of continuous medical and pharmacy benefits both before and after initial diagnosis (index date) were categorized as adherent, nonadherent, or unsure according to the guidelines-based algorithm using disease-specific pain medication classes grouped as first-line, later-line, or not recommended. Descriptive and multivariate analyses compared patient outcomes with algorithm-derived categorization endpoints. Results: A total of 441,465 OA patients, 76,361 GT patients, 10,645 pDPN patients, 4,010 PHN patients, and 150,321 FM patients were included in the development of the algorithm. Patients found adherent to guidelines included 51.1% for OA, 25% for GT, 59.5% for pDPN, 54.9% for PHN, and 33.5% for FM. The majority (~90%) of patients adherent to the guidelines initiated therapy with prescriptions for first-line pain medications written for a minimum of 30 days. Patients found nonadherent to guidelines included 30.7% for OA, 6.8% for GT, 34.9% for pDPN, 23.1% for PHN, and 34.7% for FM. Conclusion: This novel algorithm used real-world pharmacotherapy treatment patterns to evaluate adherence to pain management guidelines in five

  9. DEVELOPMENT AND TESTING OF ERRORS CORRECTION ALGORITHM IN ELECTRONIC DESIGN AUTOMATION

    Directory of Open Access Journals (Sweden)

    E. B. Romanova

    2016-03-01

    Subject of Research. We have developed and present a method for correcting design errors for printed circuit boards (PCB) in electronic design automation (EDA). Control of the process parameters of a PCB in EDA is carried out by means of the Design Rule Check (DRC) program. The DRC program monitors compliance with the design rules (minimum width of conductors and gaps, parameters of pads and via-holes, parameters of polygons, etc.) and also checks the route tracing, short circuits, the presence of objects outside the PCB edge and other design errors. The result of running the DRC program is a generated error report. For quality production of circuit boards, DRC errors should be corrected, which is ensured by the creation of an error-free DRC report. Method. A problem with the repeatability of DRC-error correction was identified as a result of trial operation of the P-CAD, Altium Designer and KiCAD programs. To solve it, an analysis of DRC errors was carried out and the methods for their correction were studied. It was proposed to cluster the DRC errors. Groups of errors include the types of errors whose correction sequence has no impact on the correction time. An algorithm for the correction of DRC errors is proposed. Main Results. The best correction sequence of DRC errors has been determined. The algorithm has been tested in the following EDA: P-CAD, Altium Designer and KiCAD. Testing has been carried out on two- and four-layer test PCBs (digital and analog). The DRC-error correction time with the algorithm applied has been compared with the time without it. It has been shown that the time saved for DRC-error correction increases with the number of error types, up to 3.7 times. Practical Relevance. Application of the proposed algorithm will reduce PCB design time and improve the quality of the PCB design. We recommend using the developed algorithm when the number of error types is equal to four or more. The proposed algorithm can be used in different

  10. Development of the Algorithm for Energy Efficiency Improvement of Bulk Material Transport System

    Directory of Open Access Journals (Sweden)

    Milan Bebic

    2013-06-01

    Full Text Available The paper presents a control strategy for a system of belt conveyors with adjustable speed drives based on the principle of optimum energy consumption. Different algorithms are developed for generating the reference speed of the system of belt conveyors in order to achieve maximum material cross section on the belts and thus a reduction of the required electrical drive power. The control structures presented in the paper are developed and tested on a detailed mathematical model of the drive system with the rubber belt. The performed analyses indicate that the application of the algorithm based on fuzzy logic control (FLC), which incorporates drive torque as an input variable, is the proper solution. Therefore, this solution is implemented on the new variable speed belt conveyor system with remote control at an open pit mine. Results of measurements on the system prove that the applied algorithm based on fuzzy logic control provides minimum electrical energy consumption of the drive under the given constraints. The paper also presents an additional analytical verification of the achieved results through a method based on sequential quadratic programming for finding a minimum of a nonlinear function of multiple variables under given constraints.

  11. Development of estimation algorithm of loose parts and analysis of impact test data

    International Nuclear Information System (INIS)

    Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho

    1999-11-01

    Loose parts are produced by becoming detached from the structure of the reactor coolant system (RCS) or by entering the RCS from the outside during test operation, refueling, and overhaul. These loose parts mix with the reactor coolant fluid and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report an analysis algorithm for the estimation of the impact point and mass of a loose part is developed. The developed algorithm was tested with the impact test data of Yonggwang-3. The impact point estimated using the proposed algorithm had a 5 percent error relative to the real test data. The estimated mass was within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor because this frequency affects the estimation of the impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data. The comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. Through this comparison, the integrity of the sensor and monitoring system could also be checked. (author)

  12. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers), involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. Since the code correctly followed the proposed method, good results were obtained, which encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)
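
    The reconstruction step described above adapts the classical EM (MLEM) update. The following is a minimal sketch of that update under stated assumptions: a dense system matrix, a toy 4-projection, 4-pixel problem, and an arbitrary iteration count. It is an illustration of the general technique, not the SPECTEM implementation.

```python
import numpy as np

def mlem(system_matrix, projections, n_iters=200):
    """Classical MLEM update: x_j <- x_j / sum_i a_ij * sum_i a_ij * y_i / (A x)_i."""
    n_pixels = system_matrix.shape[1]
    x = np.ones(n_pixels)                      # start from a uniform activity image
    sensitivity = system_matrix.sum(axis=0)    # per-pixel normalisation sum_i a_ij
    for _ in range(n_iters):
        forward = system_matrix @ x            # expected counts (A x)_i
        ratio = projections / np.maximum(forward, 1e-12)
        x *= (system_matrix.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Illustrative toy problem (values are made up, not a real detector geometry).
A = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 0., 0.],
              [0., 1., 1., 1.]])
y = A @ np.array([5., 0., 2., 1.])             # noiseless synthetic counts
print(mlem(A, y).round(2))                     # reconstructed activity image
```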

  13. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers), involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. Since the code correctly followed the proposed method, good results were obtained, which encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)

  14. Development of a 3D modeling algorithm for tunnel deformation monitoring based on terrestrial laser scanning

    Directory of Open Access Journals (Sweden)

    Xiongyao Xie

    2017-03-01

    Full Text Available Deformation monitoring is vital for tunnel engineering. Traditional monitoring techniques measure only a few data points, which is insufficient to understand the deformation of the entire tunnel. Terrestrial Laser Scanning (TLS) is a newly developed technique that can collect thousands of data points in a few minutes, with promising applications to tunnel deformation monitoring. The raw point cloud collected from TLS cannot display tunnel deformation; therefore, a new 3D modeling algorithm was developed for this purpose. The 3D modeling algorithm includes modules for preprocessing the point cloud, extracting the tunnel axis, performing coordinate transformations, performing noise reduction and generating the 3D model. Measurement results from TLS were compared to the results of total station and numerical simulation, confirming the reliability of TLS for tunnel deformation monitoring. Finally, a case study of the Shanghai West Changjiang Road tunnel is introduced, where TLS was applied to measure shield tunnel deformation over multiple sections. Settlement, segment dislocation and cross section convergence were measured and visualized using the proposed 3D modeling algorithm.

  15. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    Science.gov (United States)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampling data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial-impedance spectra presented by the resistor, inductor, resistor connected in parallel to a capacitor, and resistor connected in parallel to an inductor. The adequacy of the model is determined by using a simple artificial-intelligence function, which is applied to the output function of the Levenberg-Marquardt module. From the iteration of model modifications, the program finds an adequate equivalent-circuit model without any user input to the equivalent-circuit model.
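
    A minimal sketch of the building blocks the abstract lists — partial impedances of a resistor, an inductor, a resistor in parallel with a capacitor, and a resistor in parallel with an inductor, summed into a candidate spectrum. The component values and frequency grid below are arbitrary assumptions; the model-generation logic and the Levenberg-Marquardt fitting stage of the published algorithm are not reproduced.

```python
import numpy as np

def z_r(f, R):                 # resistor
    return np.full_like(f, R, dtype=complex)

def z_l(f, L):                 # inductor
    return 1j * 2 * np.pi * f * L

def z_rc(f, R, C):             # resistor in parallel with a capacitor
    return R / (1 + 1j * 2 * np.pi * f * R * C)

def z_rl(f, R, L):             # resistor in parallel with an inductor
    zl = 1j * 2 * np.pi * f * L
    return R * zl / (R + zl)

# Candidate spectrum: series sum of partial impedances (values are illustrative only).
f = np.logspace(-1, 6, 200)
z_total = z_r(f, 5.0) + z_rc(f, 100.0, 1e-6) + z_rl(f, 20.0, 1e-3)
```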

  16. Application of locally developed pavement temperature prediction algorithms in performance grade (PG) binder selection

    CSIR Research Space (South Africa)

    Denneman, E

    2007-07-01

    Full Text Available , in other words, data from outside the datasets against which the model was developed. The Viljoen algorithms form the basis of newly developed pavement temperature prediction software, called CSIR ThermalPADS. The use of this software in HMA... is provided as Equation 3. The ThermalPADS software contains a more accurate approximation of the daily solar declination. Declination = 23.45° ⋅ cos[(360°/365) ⋅ (N + 10)] (3) where: N = day of the year (with 1st of January = 1). The equation for maximum asphalt...
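
    For orientation, a one-function restatement of the declination approximation quoted as Equation 3. The common textbook form of this approximation carries a leading minus sign, which may have been lost when the snippet was extracted; the more accurate ThermalPADS approximation mentioned in the abstract is not reproduced here.

```python
import math

def solar_declination_deg(day_of_year):
    # Common approximation of the daily solar declination (degrees);
    # the snippet's Equation 3 appears to be this form, possibly without the sign.
    return -23.45 * math.cos(math.radians((360.0 / 365.0) * (day_of_year + 10)))

print(round(solar_declination_deg(172), 1))   # ~ +23.4 deg near the June solstice
```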

  17. PREVIOUS SECOND TRIMESTER ABORTION

    African Journals Online (AJOL)

    PNLC

    PREVIOUS SECOND TRIMESTER ABORTION: A risk factor for third trimester uterine rupture in three ... for accurate diagnosis of uterine rupture. KEY WORDS: Induced second trimester abortion - Previous uterine surgery - Uterine rupture. ..... scarred uterus during second trimester misoprostol- induced labour for a missed ...

  18. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these kinds of issues, automation is not actively adopted, although human error probability drastically increases during abnormal situations in NPPs due to overload of information, high workload, and the short time available for diagnosis. Recently, new machine learning techniques, known as 'deep learning' techniques, have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), which is one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many kinds of games. Also in 2016, 'Alpha-Go', which was developed by 'Google Deepmind' based on deep learning techniques to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the World Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm which can cover various situations in NPPs. As the first part, a quantitative and real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, and so adoption in the form of full automation will take a long time.

  19. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2016-01-01

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these kinds of issues, automation is not actively adopted, although human error probability drastically increases during abnormal situations in NPPs due to overload of information, high workload, and the short time available for diagnosis. Recently, new machine learning techniques, known as 'deep learning' techniques, have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), which is one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many kinds of games. Also in 2016, 'Alpha-Go', which was developed by 'Google Deepmind' based on deep learning techniques to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the World Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm which can cover various situations in NPPs. As the first part, a quantitative and real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, and so adoption in the form of full automation will take a long time.

  20. Ameloblastic fibroma: a stage in the development of a hamartomatous odontoma or a true neoplasm? Critical analysis of 162 previously reported cases plus 10 new cases.

    Science.gov (United States)

    Buchner, Amos; Vered, Marilena

    2013-11-01

    To analyze neoplastic and hamartomatous variants of ameloblastic fibromas (AFs). Analysis of 172 cases (162 previously reported, 10 new). AF emerged as a lesion primarily of children and adolescents (mean age, 14.9 years), with about 80% diagnosed before odontogenesis is completed (age <22 years). Lesions in patients older than 22 years are considered true neoplasms, while those in younger patients may be either true neoplasms or odontomas in early stages of development. Although the histopathology of hamartomatous and neoplastic variants of AF is indistinguishable, clinical and radiologic features can be of some help to distinguish between them. Asymptomatic small unilocular lesions with no or minimal bone expansion in young individuals are likely to be developing odontomas, and large, expansile lesions with extensive bone destruction are neoplasms. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Interpreting "Personality" Taxonomies: Why Previous Models Cannot Capture Individual-Specific Experiencing, Behaviour, Functioning and Development. Major Taxonomic Tasks Still Lay Ahead.

    Science.gov (United States)

    Uher, Jana

    2015-12-01

    As science seeks to make generalisations, a science of individual peculiarities encounters intricate challenges. This article explores these challenges by applying the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) and by exploring taxonomic "personality" research as an example. Analyses of researchers' interpretations of the taxonomic "personality" models, constructs and data that have been generated in the field reveal widespread erroneous assumptions about the abilities of previous methodologies to appropriately represent individual-specificity in the targeted phenomena. These assumptions, rooted in everyday thinking, fail to consider that individual-specificity and others' minds cannot be directly perceived, that abstract descriptions cannot serve as causal explanations, that between-individual structures cannot be isomorphic to within-individual structures, and that knowledge of compositional structures cannot explain the process structures of their functioning and development. These erroneous assumptions and serious methodological deficiencies in widely used standardised questionnaires have effectively prevented psychologists from establishing taxonomies that can comprehensively model individual-specificity in most of the kinds of phenomena explored as "personality", especially in experiencing and behaviour and in individuals' functioning and development. Contrary to previous assumptions, it is not universal models but rather different kinds of taxonomic models that are required for each of the different kinds of phenomena, variations and structures that are commonly conceived of as "personality". Consequently, to comprehensively explore individual-specificity, researchers have to apply a portfolio of complementary methodologies and develop different kinds of taxonomies, most of which have yet to be developed. Closing, the article derives some meta-desiderata for future research on individuals' "personality".

  2. Development of simulators algorithms of planar radioactive sources for use in computer models of exposure

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson; Leal Neto, Viriato; Lima Filho, Jose de Melo; Lima, Fernando Roberto de Andrade

    2013-01-01

    This paper presents an algorithm for a planar, isotropic radioactive source, obtained by subjecting the standard Gaussian probability density function (PDF) to a translation that displaces its maximum across its domain, changes its intensity and makes the dispersion around the mean right-asymmetric. The algorithm was used to generate samples of photons emerging from a plane and reaching a semicircle surrounding a voxel phantom. The PDF describing this problem is already known, but the random number generating function (FRN) associated with it cannot be deduced by direct MC techniques. This is a significant problem because it can be adapted to simulations involving natural terrestrial radiation or accidents in medical establishments or industries where radioactive material spreads in a plane. Some attempts to obtain an FRN for the PDF of the problem have already been implemented by the Research Group in Numerical Dosimetry (GND) from Recife-PE, Brazil, always using the MC rejection sampling technique. This article followed the methodology of previous work, except on one point: the PDF of the problem was replaced by a translated normal PDF. To perform dosimetric comparisons, we used two computational models of exposure: MSTA (MASH standing, composed of the adult male voxel phantom MASH (male mesh) in the orthostatic position, available from the Department of Nuclear Energy (DEN) of the Federal University of Pernambuco (UFPE), coupled to the EGSnrc MC code and to the GND planar source based on the rejection technique) and MSTA NT. The two models are similar in all but the FRN used in the planar source. The results presented and discussed in this paper establish the new algorithm for a planar source to be used by GND
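
    A generic rejection-sampling sketch in the spirit of the technique the abstract describes: drawing samples from a right-asymmetric density when no direct inverse (FRN) is available. The target density, interval and sample size below are invented stand-ins, not the GND planar-source PDF.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_pdf(x):
    # Arbitrary right-asymmetric stand-in density on [0, 1]; NOT the GND planar-source PDF.
    return x * np.exp(-4.0 * x)

def rejection_sample(n, x_min=0.0, x_max=1.0):
    # Accept-reject against a uniform proposal; M bounds the (unnormalised) target.
    grid = np.linspace(x_min, x_max, 1001)
    M = target_pdf(grid).max()
    samples = []
    while len(samples) < n:
        x = rng.uniform(x_min, x_max, size=n)
        u = rng.uniform(0.0, M, size=n)
        samples.extend(x[u < target_pdf(x)].tolist())
    return np.array(samples[:n])

photon_positions = rejection_sample(10000)
```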

  3. Algorithm development and verification of UASCM for multi-dimension and multi-group neutron kinetics model

    International Nuclear Information System (INIS)

    Si, S.

    2012-01-01

    The Universal Algorithm of Stiffness Confinement Method (UASCM) for neutron kinetics model of multi-dimensional and multi-group transport equations or diffusion equations has been developed. The numerical experiments based on transport theory code MGSNM and diffusion theory code MGNEM have demonstrated that the algorithm has sufficient accuracy and stability. (authors)

  4. Development and implementation of an automatic control algorithm for the University of Utah nuclear reactor

    International Nuclear Information System (INIS)

    Crawford, Kevan C.; Sandquist, Gary M.

    1990-01-01

    The emphasis of this work is the development and implementation of an automatic control philosophy which uses the classical operational philosophies as a foundation. Three control algorithms were derived based on various simplifying assumptions. Two of the algorithms were tested in computer simulations. After realizing the insensitivity of the system to the simplifications, the most reduced form of the algorithms was implemented on the computer control system at the University of Utah (UNEL). Since the operational philosophies have a higher priority than automatic control, they determine when automatic control may be utilized. Unlike the operational philosophies, automatic control is not concerned with component failures. The object of this philosophy is the movement of absorber rods to produce a requested power. When the current power level is compared to the requested power level, an error may be detected which will require the movement of a control rod to correct the error. The automatic control philosophy adds another dimension to the classical operational philosophies. Using this philosophy, normal operator interactions with the computer would be limited to run parameters such as power, period, and run time. This eliminates subjective judgements, objective judgements under pressure, and distractions to the operator, and ensures the reactor will be operated in a safe and controlled manner while providing reproducible operations.

  5. On developing B-spline registration algorithms for multi-core processors.

    Science.gov (United States)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-11-07

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  6. Development of a general learning algorithm with applications in nuclear reactor systems

    Energy Technology Data Exchange (ETDEWEB)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs.

  7. Development of a general learning algorithm with applications in nuclear reactor systems

    International Nuclear Information System (INIS)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs

  8. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, in the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal one, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...
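
    A plain software sketch of 2D pixel clustering with centroid calculation (8-connected components), for orientation only; the connectivity rule and data layout are assumptions, and this is not the firmware implementation on the Input Mezzanine card.

```python
from collections import deque

def cluster_pixels(hits):
    """Group hit pixels (col, row) into 8-connected clusters and return their centroids."""
    remaining = set(hits)
    centroids = []
    while remaining:
        seed = remaining.pop()
        cluster, frontier = [seed], deque([seed])
        while frontier:
            c, r = frontier.popleft()
            for dc in (-1, 0, 1):
                for dr in (-1, 0, 1):
                    nb = (c + dc, r + dr)
                    if nb in remaining:          # grow the cluster over touching hits
                        remaining.remove(nb)
                        cluster.append(nb)
                        frontier.append(nb)
        cols, rows = zip(*cluster)
        centroids.append((sum(cols) / len(cluster), sum(rows) / len(cluster)))
    return centroids

print(cluster_pixels([(0, 0), (0, 1), (1, 1), (5, 5)]))  # two clusters, two centroids
```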

  9. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  10. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00372074; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, in the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal one, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...

  11. Algorithm for evaluating the effectiveness of a high-rise development project based on current yield

    Science.gov (United States)

    Soboleva, Elena

    2018-03-01

    The article addresses the operational evaluation of development project efficiency in high-rise construction under the current economic conditions in Russia. The author touches on the following issues: problems of implementing development projects, the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency, assessment of the influence of the project's external environment on the effectiveness of project activities under crisis conditions, and the quality of project management. The article proposes an algorithm and a methodological approach to quality management of developer project efficiency based on operational evaluation of the current yield. The methodology for calculating the current efficiency of a development project for high-rise construction has been updated.

  12. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    Science.gov (United States)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.

  13. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

    Full Text Available Abstract Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a

  14. Development of an improved genetic algorithm and its application in the optimal design of ship nuclear power system

    International Nuclear Information System (INIS)

    Jia Baoshan; Yu Jiyang; You Songbo

    2005-01-01

    This article focuses on the development of an improved genetic algorithm and its application in the optimal design of the ship nuclear reactor system, whose goal is to find a combination of system parameter values that minimizes the mass or volume of the system given the power capacity requirement and safety criteria. An improved genetic algorithm (IGA) was developed using an 'average fitness value' grouping + 'specified survival probability' rank selection method and a 'separate-recombine' duplication operator. Combined with a simulated annealing algorithm (SAA) that continues the local search after the IGA reaches a satisfactory point, the algorithm gave satisfactory optimization results from both search efficiency and accuracy perspectives. This IGA-SAA algorithm successfully solved the design optimization problem of the ship nuclear power system. It is an advanced and efficient methodology that can be applied to similar optimization problems in other areas. (authors)
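
    A compact sketch of the overall IGA-SAA idea: a genetic stage with rank-based selection, followed by simulated annealing refinement of the best candidate. The toy objective, the plain rank-selection scheme and the simple crossover/mutation below are illustrative stand-ins, not the paper's 'average fitness value' grouping or 'separate-recombine' operator.

```python
import math
import random

random.seed(1)

def objective(x):
    # Toy stand-in objective (e.g. a system "mass" to minimise); NOT the paper's reactor model.
    return (x[0] - 1.2) ** 2 + (x[1] + 0.7) ** 2

def rank_select(pop, scores):
    # Rank-based selection: better (lower) objective -> higher survival probability.
    order = sorted(range(len(pop)), key=lambda i: scores[i])
    weights = [len(pop) - r for r in range(len(pop))]
    return pop[random.choices(order, weights=weights, k=1)[0]]

def genetic_stage(n_pop=30, n_gen=50, bounds=(-5.0, 5.0)):
    pop = [[random.uniform(*bounds), random.uniform(*bounds)] for _ in range(n_pop)]
    for _ in range(n_gen):
        scores = [objective(p) for p in pop]
        pop = [[(a + b) / 2 + random.gauss(0, 0.1)       # blend crossover + mutation
                for a, b in zip(rank_select(pop, scores), rank_select(pop, scores))]
               for _ in range(n_pop)]
    return min(pop, key=objective)

def anneal(x, t0=1.0, cooling=0.98, n_steps=1000):
    # Local refinement: accept worse moves with Boltzmann probability while cooling.
    current, t = list(x), t0
    for _ in range(n_steps):
        cand = [xi + random.gauss(0, 0.05) for xi in current]
        delta = objective(cand) - objective(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
        t *= cooling
    return current

print(anneal(genetic_stage()))   # approaches the minimiser (1.2, -0.7)
```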

  15. Developing the algorithm for assessing the competitive abilities of functional foods in marketing

    Directory of Open Access Journals (Sweden)

    Nilova Liudmila

    2017-01-01

    Full Text Available A thorough analysis of competitive factors of functional foods has made it possible to develop an algorithm for assessing the competitive factors of functional food products, with respect to their essential consumer features — quality, safety and functionality. Questionnaires filled in by experts and the published results of surveys of consumers from different countries were used to help select the essential consumer features in functional foods. A “desirability of consumer features” model triangle, based on functional bread and bakery products, was constructed with the use of the Harrington function.
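
    The Harrington function referred to above is commonly written, in its one-sided form, as d = exp(-exp(-Y')), where Y' is a coded (dimensionless) feature value; individual desirabilities are then typically combined by a geometric mean. A minimal sketch follows; the coding coefficients and feature values are arbitrary placeholders, not the authors' scale.

```python
import math

def harrington_desirability(y, b0=0.0, b1=1.0):
    # One-sided Harrington function: d = exp(-exp(-Y')), with Y' = b0 + b1 * y.
    y_coded = b0 + b1 * y
    return math.exp(-math.exp(-y_coded))

def overall_desirability(ds):
    # Generalised desirability as the geometric mean of individual d values.
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Illustrative combination of three coded features (e.g. quality, safety, functionality).
print(round(overall_desirability([harrington_desirability(x) for x in (0.5, 1.0, 2.0)]), 3))
```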

  16. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  17. China's experimental pragmatics of "Scientific development" in wind power: Algorithmic struggles over software in wind turbines

    DEFF Research Database (Denmark)

    Kirkegaard, Julia

    2016-01-01

    adopts the micro-processual, socio-technical, relational and empiricist lens of Science & Technology Studies (STS). It illustrates how Sino-foreign collaborative relations around the core technology of software (in control systems and simulation tools) have become politicised, and how controversies...... unfold over issues associated with intellectual property rights (IPRs), certification and standardisation of software algorithms. The article concludes that the use of this STS lens makes a fresh contribution to the often path-dependent, structuralist and hierarchical China literature, offering instead...... a possibility- and agency-filled account that can shed light on the dynamics of China's fragmented governance and experimental market development....

  18. Developing a Direct Search Algorithm for Solving the Capacitated Open Vehicle Routing Problem

    Science.gov (United States)

    Simbolon, Hotman

    2011-06-01

    In open vehicle routing problems, the vehicles are not required to return to the depot after completing service. In this paper, we present the first exact optimization algorithm for the open version of the well-known capacitated vehicle routing problem (CVRP). The strategy of releasing nonbasic variables from their bounds, combined with the "active constraint" method and the notion of superbasics, has been developed to handle the integrality requirements efficiently; this strategy is used to force the appropriate non-integer basic variables to move to their neighborhood integer points. A study of criteria for choosing a nonbasic variable to work with in the integerizing strategy has also been made.

  19. Collaboration space division in collaborative product development based on a genetic algorithm

    Science.gov (United States)

    Qian, Xueming; Ma, Yanqiao; Feng, Huan

    2018-02-01

    The advance in the global environment, rapidly changing markets, and information technology has created a new stage for design. In such an environment, one strategy for success is the Collaborative Product Development (CPD). Organizing people effectively is the goal of Collaborative Product Development, and it solves the problem with certain foreseeability. The development group activities are influenced not only by the methods and decisions available, but also by correlation among personnel. Grouping the personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and minimum description length (MDL) principle may be used as tools in optimizing collaboration space. The MDL principle is used in setting up an object function, and the GA is used as a methodology. The algorithm encodes spatial information as a chromosome in binary. After repetitious crossover, mutation, selection and multiplication, a robust chromosome is found, which can be decoded into an optimal collaboration space. This new method can calculate the members in sub-spaces and individual groupings within the staff. Furthermore, the intersection of sub-spaces and public persons belonging to all sub-spaces can be determined simultaneously.

  20. The Development of Video Learning to Deliver a Basic Algorithm Learning

    Directory of Open Access Journals (Sweden)

    slamet kurniawan fahrurozi

    2017-12-01

    Full Text Available The world of education is currently entering the era of media, where learning activities demand a reduction in lecture methods, which should be replaced by the use of many kinds of media. In relation to the function of instructional media, it can be summarized as follows: a tool to make learning more effective, to accelerate the teaching and learning process, and to improve the quality of the teaching and learning process. This research aimed to develop a learning video on basic programming algorithm material that is appropriate to be applied as a learning resource in class X SMK. This study also aimed to determine the feasibility of the learning video media developed. The research method used was research and development, using the development model of Alessi and Trollip (2001). The development model was divided into 3 stages, namely Planning, Design, and Development. Data collection techniques used the interview method, the literature method and the instrument method. In the next stage, the learning video was validated or evaluated by material experts, media experts and users, and implemented with 30 learners. The results of the research showed that the learning video was successfully made for basic programming subjects and consists of 8 video scenes. Based on the learning video validation results, the percentage of the learning video's eligibility is 90.5% from material experts, 95.9% from media experts, and 84% from users or learners. The testing results show that the learning videos that have been developed can be used as learning resources or instructional media for basic programming algorithm material.

  1. The Possibility to Use Genetic Algorithms and Fuzzy Systems in the Development of Tutorial Systems

    Directory of Open Access Journals (Sweden)

    Anca Ioana ANDREESCU

    2006-01-01

    Full Text Available In this paper we present state-of-the-art information methods and techniques that can be applied in the development of efficient tutorial systems, as well as the possibility of using genetic algorithms and fuzzy systems in the construction of such systems. All these topics have been studied during the development of the research project INFOSOC entitled "Tutorial System based on Eduknowledge for Work Security and Health in SMEs According to the European Union Directives", accomplished by teaching staff from the Academy of Economic Studies, Bucharest, in collaboration with the National Institute for Research and Development in Work Security, the National Institute for Small and Middle Enterprises and SC Q'NET International srl.

  2. Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement (ADVANCE) Technology Development for Resilient Flight Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI proposes to develop and test a framework referred to as the ADVANCE (Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement), within which...

  3. Developing algorithms for predicting protein-protein interactions of homology modeled proteins.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Shawn Bryan; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Roe, Diana C.

    2006-01-01

    The goal of this project was to examine the protein-protein docking problem, especially as it relates to homology-based structures, identify the key bottlenecks in current software tools, and evaluate and prototype new algorithms that may be developed to improve these bottlenecks. This report describes the current challenges in the protein-protein docking problem: correctly predicting the binding site for the protein-protein interaction and correctly placing the sidechains. Two different and complementary approaches are taken that can help with the protein-protein docking problem. The first approach is to predict interaction sites prior to docking, and uses bioinformatics studies of protein-protein interactions to predict these interaction sites. The second approach is to improve validation of predicted complexes after docking, and uses an improved scoring function for evaluating proposed docked poses, incorporating a solvation term. This scoring function demonstrates significant improvement over current state-of-the-art functions. Initial studies of both approaches are promising and argue for full development of these algorithms.

  4. Development of a Detection Algorithm for Use with Reflectance-Based, Real-Time Chemical Sensing

    Directory of Open Access Journals (Sweden)

    Anthony P. Malanoski

    2016-11-01

    Full Text Available Here, we describe our efforts focused on development of an algorithm for identification of detection events in a real-time sensing application relying on reporting of color values using commercially available color sensing chips. The effort focuses on the identification of event occurrence, rather than target identification, and utilizes approaches suitable to onboard device incorporation to facilitate portable and autonomous use. The described algorithm first excludes electronic noise generated by the sensor system and determines response thresholds. This automatic adjustment provides the potential for use with device variations as well as accommodating differing indicator behaviors. Multiple signal channels (RGB) as well as multiple indicator array elements are combined for reporting of an event with a minimum of false responses. While the method reported was developed for use with paper-supported porphyrin and metalloporphyrin indicators, it should be equally applicable to other colorimetric indicators. Depending on device configurations, receiver operating characteristic (ROC) sensitivities of 1 could be obtained with specificities of 0.87 (threshold 160 ppb, ethanol).
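
    A simplified sketch of the kind of logic described: estimate a per-channel noise level from a baseline window, derive thresholds from it, and report an event only when several channels agree. The window length, threshold multiplier and voting rule below are illustrative assumptions, not the published algorithm.

```python
import statistics

def detect_events(rgb_series, baseline_len=20, k=4.0, min_channels=2):
    """rgb_series: list of (r, g, b) readings from one indicator element."""
    baseline = rgb_series[:baseline_len]
    flags = []
    for ch in range(3):
        vals = [s[ch] for s in baseline]
        mu = statistics.mean(vals)
        sigma = statistics.pstdev(vals) or 1.0   # guard against a perfectly flat baseline
        # Flag samples whose deviation from the baseline mean exceeds the noise threshold.
        flags.append([abs(s[ch] - mu) > k * sigma for s in rgb_series])
    # Report an event index only when at least `min_channels` channels agree.
    return [i for i in range(len(rgb_series))
            if sum(flags[ch][i] for ch in range(3)) >= min_channels]
```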

  5. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    Science.gov (United States)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes such that the sum of the weights of its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve the complex network MST problem easily, efficiently and effectively. The selection of the appropriate algorithm is essential; otherwise it will be very hard to get an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper addresses the minimum spanning tree (MST) problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, travelling salesman problem, vehicle routing problems, location-allocation problems etc. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for a road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. It has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and allows access to varied information adapted to the needs of the users. This GIS tool for MST can be applied to a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several
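
    A minimal adjacency-matrix implementation of Prim's algorithm of the kind such a tool wraps; the toy road network below is invented, and the actual tool operates on ArcGIS network data through a Python script rather than on a raw matrix.

```python
import math

def prim_mst(weights):
    """Prim's algorithm on a symmetric weight (adjacency) matrix; math.inf marks 'no edge'."""
    n = len(weights)
    in_tree = [False] * n
    best_cost = [math.inf] * n   # cheapest known edge connecting each node to the tree
    parent = [-1] * n
    best_cost[0] = 0.0           # start growing the tree from node 0
    edges = []
    for _ in range(n):
        # Pick the cheapest node not yet in the tree and attach it.
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best_cost[i])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u, weights[parent[u]][u]))
        for v in range(n):
            if not in_tree[v] and weights[u][v] < best_cost[v]:
                best_cost[v], parent[v] = weights[u][v], u
    return edges

INF = math.inf
road = [[0, 4, INF, 7],
        [4, 0, 2, INF],
        [INF, 2, 0, 5],
        [7, INF, 5, 0]]
print(prim_mst(road))   # [(0, 1, 4), (1, 2, 2), (2, 3, 5)]
```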

  6. Transition of Care Practices from Emergency Department to Inpatient: Survey Data and Development of Algorithm

    Directory of Open Access Journals (Sweden)

    Sangil Lee

    2017-01-01

    Full Text Available We aimed to assess the current scope of handoff education and practice among resident physicians in academic centers and to propose a standardized handoff algorithm for the transition of care from the emergency department (ED) to an inpatient setting. This was a cross-sectional survey targeted at the program directors, associate or assistant program directors, and faculty members of emergency medicine (EM) residency programs in the United States (U.S.). The web-based survey was distributed to potential subjects through a listserv. A panel of experts used a modified Delphi approach to develop a standardized algorithm for ED to inpatient handoff. 121 of 172 programs responded to the survey for an overall response rate of 70.3%. Our survey showed that most EM programs in the U.S. have some form of handoff training, and the majority of them occur either during orientation or in the clinical setting. The handoff structure from ED to inpatient is not well standardized, and in those places with a formalized handoff system, over 70% of residents do not uniformly follow it. Approximately half of responding programs felt that their current handoff system was safe and effective. About half of the programs did not formally assess the handoff proficiency of trainees. Handoffs most commonly take place over the phone, though respondents disagree about the ideal place for a handoff to occur, with nearly equivalent responses between programs favoring the bedside over the phone or face-to-face on a computer. Approximately two-thirds of responding programs reported that their residents were competent in performing ED to inpatient handoffs. Based on this survey and on the review of the literature, we developed a five-step algorithm for the transition of care from the ED to the inpatient setting. Our results identified the current trends of education and practice in transitions of care, from the ED to the inpatient setting in U.S. academic medical centers. An algorithm

  7. THE ALGORITHM OF THE CASE FORMATION DURING THE DEVELOPMENT OF CLINICAL DISCIPLINES IN MEDICAL SCHOOL

    Directory of Open Access Journals (Sweden)

    Andrey A. Garanin

    2016-01-01

    Full Text Available The aim of the study is to develop an algorithm for case formation in the discipline «Clinical Medicine». Methods. The methods involve the effectiveness analysis of the self-diagnosed levels of professional and personal abilities of students in the process of self-study. Results. The article deals with the organization of independent work of students using the case method, which is one of the most important and complex active learning methods. When implementing the case analysis method in the educational process, the main job of the teacher is focused on the development of individual cases. While developing a medical case study, the teacher needs to pay special attention to questions of pathogenesis and pathological anatomy, so that students form the fundamental clinical thinking that allows them to assess the patient's condition as a complete organism, taking into account all its features, to understand the cause-and-effect relationships arising in the development of a concrete disease, and to master new techniques and improve existing ones for establishing the differential diagnosis. Scientific novelty and practical significance. The structure of a medical case study to be followed in developing a case for the discipline «Clinical Medicine» is proposed. A unified algorithm of case formation is necessary for the full implementation of the case analysis method, as one of the most effective active learning methods, in the educational process of higher medical schools, in accordance with the requirements of modern reforms of higher professional education and, in particular, the introduction of the new Federal State Educational Standards.

  8. Development of a computational algorithm for the linearization of decay and transmutation chains

    International Nuclear Information System (INIS)

    Cruz L, C. A.; Francois L, J. L.

    2017-09-01

    One of the most used methodologies to solve the Bateman equations in burnup problems is the TTA (Transmutation Trajectory Analysis) method. In this method, a network of decays is broken down into linear elements known as trajectories, through a process known as linearization. In this work an alternative algorithm to find and construct these trajectories is presented, which considers three aspects of linearization: the a priori information about the elements that make up the decay and transmutation network, the use of a new notation, and the use of functions for the treatment of text strings (which are common in most programming languages). One of the main advantages of the algorithm is that it can condense the information of a decay and transmutation network into only two vectors. From these it is possible to determine how many linear chains can be extracted from the network and even their length (in the case they are not cyclical). Unlike the Depth First Search method, which is widely used for the linearization process, the method proposed in the present work does not have a backtracking routine; instead it uses a compilation process, completing chain fragments rather than going back to the beginning of the trajectories. The developed algorithm can be applied in a general way to information searching and to the linearization of the computational data structures known as trees. It can also be applied to engineering problems where one seeks to calculate the concentration of some substance as a function of time, starting from linear balance differential equations. (Author)
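
    A generic illustration of breaking a branching decay/transmutation network into linear trajectories. It uses a straightforward recursive traversal rather than the two-vector compilation scheme proposed in the paper, and the toy network below is invented, not a physical decay scheme.

```python
def linear_chains(network, start):
    """Enumerate linear trajectories from `start` in an acyclic decay/transmutation network.

    `network` maps each nuclide to its list of daughters; stable leaves map to [].
    """
    daughters = network.get(start, [])
    if not daughters:
        return [[start]]
    chains = []
    for d in daughters:
        for tail in linear_chains(network, d):
            chains.append([start] + tail)
    return chains

# Invented toy network with one branch point (not a physical decay scheme).
toy = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
for chain in linear_chains(toy, "A"):
    print(" -> ".join(chain))
# A -> B -> D
# A -> C -> D
```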

  9. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Full Text Available Following abdominal surgery, extensive adhesions often occur and they can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered to be a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veress needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who previously underwent one or two laparotomies. Pathology of the digestive system or genital organs, Cesarean section, or abdominal war injuries were the most common causes of previous laparotomy. During those operations, and while entering the abdominal cavity, we did not experience any complications, while in 7 patients we performed conversion to laparotomy following the diagnostic laparoscopy. In all patients, Veress needle and trocar insertion in the umbilical region was performed, namely the closed laparoscopy technique. In no patient were adhesions found in the region of the umbilicus, and no abdominal organs were injured.

  10. Analysis and Development of Walking Algorithm Kinematic Model for 5-Degree of Freedom Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Gerald Wahyudi Setiono

    2012-12-01

    Full Text Available A walking diagram and the kinematic calculations for a bipedal robot have been developed. The bipedal robot was designed and constructed with several kinds of servo brackets for the legs, two feet and a hip. Each leg of the bipedal robot has 5 degrees of freedom: three pitches (hip, knee and ankle joints) and two rolls (hip and ankle joints). The walking algorithm of this bipedal robot is based on the triangle formulation of the law of cosines to obtain the angle value at each joint. The hip height, the height of the swinging leg and the step distance are derived from linear equations. This paper discusses the kinematic model analysis and the development of the walking diagram of the bipedal robot. Kinematic equations were derived, and the joint angles were simulated and coded onto an Arduino board for execution on the robot.
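
    As a concrete illustration of the law-of-cosines step described above, the sketch below solves the planar two-link inverse kinematics for one leg. The link lengths, sign conventions and target position are illustrative assumptions, not values taken from the paper.

```python
import math

def leg_ik(x, y, L1=0.10, L2=0.10):
    """Planar two-link inverse kinematics via the law of cosines.
    (x, y): foot position relative to the hip; L1, L2: thigh/shank lengths.
    Returns hip and knee pitch angles in degrees.  Lengths and sign
    conventions are illustrative, not taken from the paper."""
    d = math.hypot(x, y)
    if d > L1 + L2:
        raise ValueError("target out of reach")
    # interior knee angle from the triangle (hip, knee, ankle)
    cos_knee = (L1**2 + L2**2 - d**2) / (2 * L1 * L2)
    knee = math.degrees(math.acos(cos_knee))
    # hip pitch = direction to the foot plus the offset inside the triangle
    cos_alpha = (L1**2 + d**2 - L2**2) / (2 * L1 * d)
    hip = math.degrees(math.atan2(x, -y) + math.acos(cos_alpha))
    return hip, knee

print(leg_ik(0.02, -0.17))  # a small forward step with the hip 17 cm up
```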

  11. Recent developments in structure-preserving algorithms for oscillatory differential equations

    CERN Document Server

    Wu, Xinyuan

    2018-01-01

    The main theme of this book is recent progress in structure-preserving algorithms for solving initial value problems of oscillatory differential equations arising in a variety of research areas, such as astronomy, theoretical physics, electronics, quantum mechanics and engineering. It systematically describes the latest advances in the development of structure-preserving integrators for oscillatory differential equations, such as structure-preserving exponential integrators, functionally fitted energy-preserving integrators, exponential Fourier collocation methods, trigonometric collocation methods, and symmetric and arbitrarily high-order time-stepping methods. Most of the material presented here is drawn from the recent literature. Theoretical analysis of the newly developed schemes shows their advantages in the context of structure preservation. All the new methods introduced in this book are proven to be highly effective compared with the well-known codes in the scientific literature. This book also addre...

  12. Conceptual aspects: analyses of legal, ethical, human, technical and social factors of ICT development, e-learning and intercultural development in different countries, setting out the previous and new theoretical model and preliminary findings

    NARCIS (Netherlands)

    Kommers, Petrus A.M.; Smyrnova-Trybulska, Eugenia; Morze, Natalia; Issa, Tomayess; Issa, Theodora

    2015-01-01

    This paper, prepared by an international team of authors, focuses on conceptual aspects: it analyses legal, ethical, human, technical and social factors of ICT development, e-learning and intercultural development in different countries, setting out the previous and new theoretical model and preliminary

  13. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology that reduces the false alarm rate while leaving the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of its false alarms are determined. Two target detection algorithms with independent false alarm sources are chosen such that the disadvantages of one algorithm are compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm is well suited to real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is expandable to any pair of detection algorithms that have different false alarm sources.
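
    The core idea, declaring a detection only where two detectors with independent false-alarm sources agree, can be sketched as below. The random clutter maps and thresholds stand in for the AAGD and LoPSF outputs and are purely illustrative; this is not the published algorithm.

```python
import numpy as np

def fuse_detections(map_a, map_b, thr_a, thr_b):
    """Keep a pixel as a target only if BOTH detection maps exceed their
    thresholds.  Because the two detectors are assumed to have independent
    false-alarm sources, coincidence gating suppresses false alarms while
    true targets (strong in both maps) survive."""
    return (map_a > thr_a) & (map_b > thr_b)

rng = np.random.default_rng(0)
clutter_a = rng.normal(size=(64, 64))      # placeholder detector output A
clutter_b = rng.normal(size=(64, 64))      # placeholder detector output B
clutter_a[32, 32] += 8.0                   # one bright point target
clutter_b[32, 32] += 8.0

mask = fuse_detections(clutter_a, clutter_b, thr_a=4.0, thr_b=4.0)
print(np.argwhere(mask))                   # -> [[32 32]]
```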

  14. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    Science.gov (United States)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.

  15. Development of Quantum Devices and Algorithms for Radiation Detection and Radiation Signal Processing

    International Nuclear Information System (INIS)

    El Tokhy, M.E.S.M.E.S.

    2012-01-01

    The main functions of a spectroscopy system are signal detection, filtering and amplification, pile-up detection and recovery, dead-time correction, amplitude analysis and energy spectrum analysis. Safeguards isotopic measurements require the best spectrometer systems, with excellent resolution, stability, efficiency and throughput. However, the resolution and throughput, which depend mainly on the detector, the amplifier and the analog-to-digital converter (ADC), can still be improved. These modules have been under continuous development and improvement. For this reason we are interested in both the development of quantum detectors and efficient algorithms for digital processing of the measurements. The main objective of this thesis is therefore twofold: (1) to study the behaviour of quantum dot (QD) devices under gamma radiation, and (2) to develop efficient algorithms for handling problems of gamma-ray spectroscopy. For gamma radiation detection, a detailed study of nanotechnology QD sources and QD infrared photodetectors (QDIPs) is introduced. Two different types of quantum scintillator detectors dominate the area of ionizing radiation measurements: QD scintillator detectors and QDIP scintillator detectors. Compared with traditional systems, quantum systems have less mass, require less volume, and consume less power. These factors increase the need for efficient detectors for gamma-ray applications such as gamma-ray spectroscopy. The literature has demonstrated that nanocomposite materials based on semiconductor quantum dots have potential for radiation detection via scintillation. Therefore, this thesis presents a theoretical analysis of the characteristics of QD sources and infrared photodetectors (QDIPs). A model of QD sources under incident gamma radiation is developed. A novel methodology is introduced to characterize the effect of gamma radiation on QD devices. The rate
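
    Among the processing functions listed above, dead-time correction has a compact textbook form; the snippet below shows the standard non-paralyzable correction as an illustration of the kind of step such algorithms automate. It is not the correction developed in the thesis, and the rate and dead time used are made-up values.

```python
def dead_time_correct(measured_rate, tau):
    """Standard non-paralyzable dead-time correction:
       n_true = n_meas / (1 - n_meas * tau)
    measured_rate in counts/s, tau (detector dead time) in seconds."""
    loss = measured_rate * tau
    if loss >= 1.0:
        raise ValueError("measured rate inconsistent with dead time")
    return measured_rate / (1.0 - loss)

print(dead_time_correct(50_000, 2e-6))   # ~55,556 counts/s true rate
```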

  16. Description of ALARMA: the alarm algorithm developed for the Nuclear Car Wash

    International Nuclear Information System (INIS)

    Luu, T; Biltoft, P; Church, J; Descalle, M; Hall, J; Manatt, D; Mauger, J; Norman, E; Petersen, D; Pruet, J; Prussin, S; Slaughter, D

    2006-01-01

    The goal of any alarm algorithm should be to provide the necessary tools to derive confidence limits on whether fissile material is present in cargo containers. It should be able to extract these limits from (usually) noisy and/or weak data while maintaining a false alarm rate (FAR) that is economically suitable for port operations. It should also be able to perform its analysis within a reasonably short amount of time (i.e. ∼ seconds). To achieve this, it is essential that the algorithm be able to identify and subtract any interference signature that might otherwise be confused with a fissile signature. Lastly, the algorithm itself should be user-intuitive and user-friendly so that port operators with little or no experience with detection algorithms may use it with relative ease. In support of the Nuclear Car Wash project at Lawrence Livermore Laboratory, we have developed an alarm algorithm that satisfies the above requirements. The description of this alarm algorithm, dubbed ALARMA, is the purpose of this technical report. The experimental setup of the nuclear car wash has been well documented [1, 2, 3]. The presence of fissile materials is inferred by examining the β-delayed gamma spectrum induced after a brief neutron irradiation of cargo, particularly in the high-energy region above approximately 2.5 MeV. In this region naturally occurring gamma rays are virtually non-existent. Thermal-neutron induced fission of 235 U and 239 Pu, on the other hand, leaves a unique β-delayed spectrum [4]. This spectrum comes from decays of fission products having half-lives as large as 30 seconds, many of which have high Q-values. Since high-energy photons penetrate matter more freely, it is natural to look for unique fissile signatures in this energy region after neutron irradiation. The goal of this interrogation procedure is a 95% success rate of detection of as little as 5 kilograms of fissile material while retaining at most 0.1% false alarm
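
    To make the idea of trading a false-alarm rate against a count threshold concrete, the toy rule below raises an alarm when the counts observed above ~2.5 MeV exceed the expected background by a chosen number of Poisson standard deviations. The background level, observed counts and sigma multiplier are invented for illustration; this is not the ALARMA decision logic.

```python
import math

def alarm_threshold(background_counts, n_sigma=3.0):
    """Decision threshold on counts in the high-energy window, assuming
    Poisson background statistics: alarm if the observed counts exceed the
    expected background by n_sigma standard deviations."""
    return background_counts + n_sigma * math.sqrt(background_counts)

expected_bkg = 100.0        # counts expected from benign cargo (made up)
observed = 145              # counts actually recorded after irradiation
print(observed > alarm_threshold(expected_bkg))   # True -> raise an alarm
```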

  17. Algorithm Development for Multi-Energy SXR based Electron Temperature Profile Reconstruction

    Science.gov (United States)

    Clayton, D. J.; Tritz, K.; Finkenthal, M.; Kumar, D.; Stutman, D.

    2012-10-01

    New techniques utilizing computational tools such as neural networks and genetic algorithms are being developed to infer plasma electron temperature profiles on fast time scales (> 10 kHz) from multi-energy soft-x-ray (ME-SXR) diagnostics. Traditionally, a two-foil SXR technique, using the ratio of filtered continuum emission measured by two SXR detectors, has been employed on fusion devices as an indirect method of measuring electron temperature. However, these measurements can be susceptible to large errors due to uncertainties in time-evolving impurity density profiles, leading to unreliable temperature measurements. To correct this problem, measurements using ME-SXR diagnostics, which use three or more filtered SXR arrays to distinguish line and continuum emission from various impurities, in conjunction with constraints from spectroscopic diagnostics, can be used to account for unknown or time evolving impurity profiles [K. Tritz et al, Bull. Am. Phys. Soc. Vol. 56, No. 12 (2011), PP9.00067]. On NSTX, ME-SXR diagnostics can be used for fast (10-100 kHz) temperature profile measurements, using a Thomson scattering diagnostic (60 Hz) for periodic normalization. The use of more advanced algorithms, such as neural network processing, can decouple the reconstruction of the temperature profile from spectral modeling.

  18. Development of pattern recognition algorithms for particles detection from atmospheric images

    International Nuclear Information System (INIS)

    Khatchadourian, S.

    2010-01-01

    The HESS experiment consists of a system of telescopes designed to observe cosmic rays. Since the project has achieved a high level of performance, a second phase has been initiated. This involves the addition of a new telescope which is more sensitive than its predecessors and capable of collecting a huge number of images. In this context, not all data collected by the telescope can be retained, because of storage limitations. Therefore, a new real-time trigger system must be designed in order to select interesting events on the fly. The purpose of this thesis was to propose a trigger solution to efficiently discriminate the events (images) captured by the telescope. The first part of this thesis was to develop pattern recognition algorithms to be implemented within the trigger. A processing chain based on neural networks and Zernike moments has been validated. The second part of the thesis focused on the implementation of the proposed algorithms on an FPGA target, taking into account the application constraints in terms of resources and execution time. (author)

  19. Algorithm Development for the Optimum Rainfall Estimation Using Polarimetric Variables in Korea

    Directory of Open Access Journals (Sweden)

    Cheol-Hwan You

    2015-01-01

    Full Text Available In this study, to obtain an optimum rainfall estimate using polarimetric variables observed by the Bislsan radar, the first polarimetric radar in Korea, 84 hours of rainfall cases that occurred in 2011 under different conditions (Changma front and typhoon, Changma front only, and typhoon only) were analyzed. Rainfall algorithms were developed using long-period drop size distributions with six different raindrop axis-ratio relations. The combination of the relations between R and (Z, ZDR), R and (KDP, ZDR), and R and KDP for different rainfall intensities would be an optimum rainfall algorithm if the rainfall reference were defined correctly. Where the reference is not defined adequately, the relations between R and (Z, ZDR, KDP, AH) and between R and (Z, KDP, AH) can be used as representative rainfall relations. In particular, if qualified ZDR is not available, the relation between R and (Z, KDP, AH) can be used as the optimum rainfall relation in Korea.
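
    The estimators referred to above are conventionally power laws in the polarimetric variables. The sketch below shows two such forms, R(KDP) and R(Z, ZDR); the coefficients are generic illustrative values, not the ones fitted to the Bislsan drop-size-distribution data in the paper.

```python
import numpy as np

def rain_from_kdp(kdp, a=50.7, b=0.85):
    """Power-law estimator R = a * KDP**b (mm/h); coefficients are
    illustrative placeholders."""
    return a * np.power(np.maximum(kdp, 0.0), b)

def rain_from_z_zdr(z_dbz, zdr_db, a=0.0067, b=0.93, c=-3.43):
    """Power-law estimator R = a * Zh**b * 10**(c*ZDR/10) with linear
    reflectivity Zh (mm^6 m^-3); again, coefficients are illustrative."""
    zh = 10.0 ** (z_dbz / 10.0)
    return a * zh**b * 10.0 ** (c * zdr_db / 10.0)

print(rain_from_kdp(0.5))            # rain rate from KDP alone
print(rain_from_z_zdr(45.0, 1.2))    # the same gate via Z and ZDR
```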

  20. Development of Predictive QSAR Models of 4-Thiazolidinones Antitrypanosomal Activity using Modern Machine Learning Algorithms.

    Science.gov (United States)

    Kryshchyshyn, Anna; Devinyak, Oleg; Kaminskyy, Danylo; Grellier, Philippe; Lesyk, Roman

    2017-11-14

    This paper presents novel QSAR models for the prediction of antitrypanosomal activity among thiazolidines and related heterocycles. The performance of four machine learning algorithms, Random Forest regression, stochastic gradient boosting, multivariate adaptive regression splines and Gaussian process regression, has been studied in order to reach better levels of predictivity. The results for Random Forest and Gaussian process regression are comparable and outperform the other studied methods. The preliminary descriptor selection with the Boruta method improved the outcome of the machine learning methods. The two novel QSAR models developed with the Random Forest and Gaussian process regression algorithms have good predictive ability, which was proved by external evaluation of the test set with corresponding Q2ext = 0.812 and Q2ext = 0.830. The obtained models can be used further for in silico screening of virtual libraries in the same chemical domain in order to find new antitrypanosomal agents. Thorough analysis of descriptor influence in the QSAR models and interpretation of their chemical meaning highlights a number of structure-activity relationships. The presence of phenyl rings with electron-withdrawing atoms or groups in the para-position, an increased number of aromatic rings, high branching but short chains, high HOMO energy, and the introduction of a 1-substituted 2-indolyl fragment into the molecular structure have been recognized as prerequisites for trypanocidal activity. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
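
    A minimal sketch of the Random Forest modeling step is shown below, assuming scikit-learn is available. The descriptor matrix and activity values are synthetic stand-ins for the thiazolidinone dataset, and the Boruta descriptor-selection step described in the paper is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic descriptors and activities standing in for the real dataset.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 30))                  # 30 molecular descriptors
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=200)  # "activity"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                           random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_tr, y_tr)

# External predictivity, analogous to the Q2ext reported in the paper
print("Q2_ext ~", round(r2_score(y_te, model.predict(X_te)), 3))
```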

  1. A heuristic algorithm to approximate the dynamic program of a novel new product development process

    Directory of Open Access Journals (Sweden)

    Hamed Fazlollahtabar

    2016-01-01

    Full Text Available We are concerned with a new product development (NPD) network in a digital environment, in which the aim is to find integrated attributes for value-added purposes. Different views exist on new product development. Here, the effective factors are categorized into customers, competitors and the company's own past experience. Various attributes are also considered for the development of a product. Thus, using digital data on the attributes, the optimal set of attributes is chosen for use in the new product development. Given the multi-stage decision-making process involving the customers, the competitors and the company's own past experience, we develop a multi-dimensional dynamic program as a tool for multi-stage decision making. To cope with the changing digital data in different time periods, the two concepts of state and policy direction are introduced to determine the cost of moving through the stages of the proposed NPD digital network. Since the space requirements and value function computations become impractical for even moderate problem sizes, we approximate the optimal value function by developing a heuristic algorithm.

  2. Development of a neonate lung reconstruction algorithm using a wavelet AMG and estimated boundary form

    International Nuclear Information System (INIS)

    Bayford, R; Tizzard, A; Yerworth, R; Kantartzis, P; Liatsis, P; Demosthenous, A

    2008-01-01

    Objective, non-invasive measures of lung maturity and development, oxygen requirements and lung function, suitable for use in small, unsedated infants, are urgently required to define the nature and severity of persisting lung disease, and to identify risk factors for developing chronic lung problems. Disorders of lung growth, maturation and control of breathing are among the most important problems faced by neonatologists. At present, no system exists in intensive care units for continuous monitoring of neonatal lung function to reduce the risk of chronic lung disease in infancy. We are in the process of developing a new integrated electrical impedance tomography (EIT) system, based on wearable technology, that integrates measures of the boundary diameter derived from the boundary form of neonates into the reconstruction algorithm. In principle, this approach could reduce the image artefacts in the reconstructed image associated with incorrect boundary form assumptions. In this paper, we investigate the accuracy of the boundary form required to minimize artefacts in the reconstruction of neonatal lung function. The number of data points needed to create the required boundary form is determined automatically using genetic algorithms. The approach presented in this paper is to assess the quality of the reconstruction obtained using different approximations to the ideal boundary form. We also investigate the use of a wavelet algebraic multi-grid (WAMG) preconditioner to reduce the computational requirements of the reconstruction. Results are presented that demonstrate that a full 3D model is required to minimize artefacts in the reconstructed image, and the implementation of a WAMG for EIT

  3. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematic model underlying the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions of the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  4. Cryptosystem Program Planning for Securing Data/Information of the Results of Research and Development using Triple DES Algorithm

    International Nuclear Information System (INIS)

    Tumpal P; Naga, Dali S.; David

    2004-01-01

    This software is a cryptosystem that uses the Triple DES algorithm in ECB (Electronic Code Book) mode. The cryptosystem can send a file with any extension, whether encrypted or not, encrypt data representing a bitmap image or text, and display the calculations involved. Triple DES is an efficient and effective development of DES: it uses the same algorithm, but repeating the operation three times extends the key from 56 bits to 168 bits. (author)
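
    For reference, a minimal Triple DES round trip in the ECB mode named above can look like the sketch below, assuming the PyCryptodome package is installed. The key and message are made-up demo values, and this illustrates only the construction, not the reviewed software; note also that ECB encrypts identical blocks identically, so modern designs generally prefer other modes.

```python
# Minimal Triple DES (ECB) round trip with PyCryptodome (assumed installed).
from Crypto.Cipher import DES3
from Crypto.Util.Padding import pad, unpad

key = b"0123456789abcdefFEDCBA98"      # 24 bytes -> three 56-bit subkeys
cipher = DES3.new(key, DES3.MODE_ECB)

plaintext = b"research & development data"
ciphertext = cipher.encrypt(pad(plaintext, DES3.block_size))

decrypted = unpad(DES3.new(key, DES3.MODE_ECB).decrypt(ciphertext),
                  DES3.block_size)
assert decrypted == plaintext
```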

  5. Rapid Mental Computation System as a Tool for Algorithmic Thinking of Elementary School Students Development

    Directory of Open Access Journals (Sweden)

    Rushan Ziatdinov

    2012-07-01

    Full Text Available In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are actually simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system helps form the basis for the study of computer science in secondary school.
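
    To show how such a mental rule maps onto a simple algorithm, here is one typical trick, squaring a number that ends in 5, written as code. The specific rule is the editor's example of the genre, not necessarily one of the operations taught by the system described in the paper.

```python
def square_ending_in_5(n):
    """Rapid-computation rule for squaring numbers that end in 5:
    take the leading digits d, compute d*(d+1), and append 25.
    E.g. 35**2 -> 3*4 = 12 -> 1225."""
    assert n % 10 == 5
    d = n // 10
    return int(str(d * (d + 1)) + "25")

assert square_ending_in_5(35) == 35 ** 2 == 1225
assert square_ending_in_5(85) == 85 ** 2 == 7225
```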

  6. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    Science.gov (United States)

    Gagnon, Louis-Guillaume; ATLAS Collaboration

    2017-10-01

    ATLAS track reconstruction software is continuously evolving to match the demands from the increasing instantaneous luminosity of the LHC, as well as the increased center-of-mass energy. These conditions result in a higher abundance of events with dense track environments, such as the core of jets or boosted tau leptons undergoing three-prong decays. These environments are characterised by charged particle separations on the order of the ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction software to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented and physics performance studies are shown, including a measurement of the fraction of lost tracks in jets with high transverse momentum.

  7. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00441787; The ATLAS collaboration

    2017-01-01

    ATLAS track reconstruction software is continuously evolving to match the demands from the increasing instantaneous luminosity of the LHC, as well as the increased center-of-mass energy. These conditions result in a higher abundance of events with dense track environments, such as the core of jets or boosted tau leptons undergoing three-prong decays. These environments are characterised by charged particle separations on the order of the ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction software to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented and physics performance studies are shown, including a measurement of the fraction of lost tracks in jets with high transverse momentum.

  8. Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm

    Science.gov (United States)

    Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki

    2009-10-01

    Recently, lifestyle-related diseases have become an object of public concern, while at the same time people are becoming more health conscious. We assume that insufficient circulation of knowledge about dietary habits is an essential factor in causing lifestyle-related diseases. This paper focuses on everyday meals close to our lives and proposes a well-balanced menu planning system as a preventive measure against lifestyle-related diseases. The system is developed using a Web-based frontend and provides multi-user services and menu information sharing capabilities similar to social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (scripting language for dynamic Web pages). For menu planning, a genetic algorithm is applied by formulating the problem as multidimensional 0-1 integer programming.
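
    A minimal sketch of a genetic algorithm over 0-1 dish-selection vectors is given below. The dish nutrient table, targets, fitness function and GA operators are simplified placeholders for the multidimensional 0-1 integer program described in the paper.

```python
import random

# Toy 0-1 menu model: each candidate dish has (energy kcal, protein g).
DISHES = [(350, 20), (500, 15), (200, 5), (420, 30), (150, 10), (300, 12)]
TARGET = (900, 45)                      # desired totals for one meal

def fitness(bits):
    tot = [sum(d[k] for d, b in zip(DISHES, bits) if b) for k in (0, 1)]
    return -sum(abs(t - g) for t, g in zip(tot, TARGET))   # 0 is perfect

def evolve(pop_size=30, generations=60, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in DISHES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(DISHES))          # crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:                     # mutation
                i = random.randrange(len(DISHES))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

random.seed(1)
best = evolve()
print(best, fitness(best))
```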

  9. Models and Algorithms for Production Planning and Scheduling in Foundries – Current State and Development Perspectives

    Directory of Open Access Journals (Sweden)

    A. Stawowy

    2012-04-01

    Full Text Available Mathematical programming, constraint programming and computational intelligence techniques, presented in the operations research and production management literature, are generally inadequate for planning real-life production processes. These methods are in fact dedicated to solving standard problems such as shop floor scheduling or lot-sizing, or their simple combinations such as scheduling with batching. Many real-world production planning problems, however, require the simultaneous solution of several problems (in addition to task scheduling and lot-sizing, issues such as cutting, workforce scheduling, packing and transport), including problems that are difficult to structure. The article presents examples and a classification of production planning and scheduling systems in the foundry industry described in the literature, and also outlines possible development directions for the models and algorithms used in such systems.

  10. Development of parallel GPU based algorithms for problems in nuclear area

    International Nuclear Information System (INIS)

    Almeida, Adino Americo Heimlich

    2009-01-01

    Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computation, their application was extended to other fields beyond the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in two typical problems of the nuclear area: neutron transport simulation using the Monte Carlo method, and the solution of the heat equation in a two-dimensional domain by the finite difference method. To achieve this, we developed parallel algorithms for GPU and CPU for the two problems described above. The comparison showed that the GPU-based approach is faster than the CPU-based one on a computer with two quad-core processors, without loss of precision. (author)
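
    The finite-difference part of the comparison boils down to a stencil update like the one sketched below in NumPy, which serves as a CPU baseline; the thesis's GPU version would run the same update on the device, which is not shown here. Grid size, time step and initial condition are illustrative.

```python
import numpy as np

# Explicit finite-difference update for the 2-D heat equation
#   u_t = alpha * (u_xx + u_yy)
# on a square grid with fixed (Dirichlet) boundaries.
nx = ny = 128
alpha, dx, dt = 1.0, 1.0, 0.2          # dt <= dx^2/(4*alpha) for stability
u = np.zeros((ny, nx))
u[ny // 2, nx // 2] = 100.0            # hot spot in the centre

for _ in range(500):
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / dx**2
    u[1:-1, 1:-1] += alpha * dt * lap  # boundaries stay at zero

print(round(u.max(), 3))               # peak temperature after diffusion
```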

  11. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    Science.gov (United States)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key parameter (Mooney viscosity), which is used to evaluate the quality of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for the industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing; however, they often do not function well because of the multi-phase and nonlinear nature of the process. The purpose of this paper is to develop an efficient soft sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, phase information is extracted during local modeling. Using the Gaussian local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.
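
    The just-in-time idea, building a small local Gaussian process model around each query instead of one global model, can be sketched as below, assuming scikit-learn is available. Plain Euclidean distance is used to select local samples, standing in for the GMMD criterion of the paper, and the process data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def jit_gpr_predict(X_hist, y_hist, x_query, k=30):
    """Just-in-time soft sensor: pick the k historical samples closest to
    the query, fit a local Gaussian process, and predict."""
    d = np.linalg.norm(X_hist - x_query, axis=1)
    idx = np.argsort(d)[:k]
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                   normalize_y=True)
    gpr.fit(X_hist[idx], y_hist[idx])
    return gpr.predict(x_query.reshape(1, -1))[0]

# Synthetic "rubber mixing" history: 5 process variables -> Mooney viscosity
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(500, 5))
y_hist = 50 + 8 * X_hist[:, 0] - 5 * X_hist[:, 2] \
         + rng.normal(scale=1.0, size=500)
print(round(jit_gpr_predict(X_hist, y_hist, X_hist[0]), 2))
```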

  12. Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images

    Science.gov (United States)

    Diner, D.; Paradise, S.; Martonchik, J.

    1994-01-01

    In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.

  13. Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector

    CERN Document Server

    Gagnon, Louis-Guillaume; The ATLAS collaboration

    2016-01-01

    ATLAS track reconstruction code is continuously evolving to match the demands from the increasing instantaneous luminosity of LHC, as well as the increased centre-of-mass energy. With the increase in energy, events with dense environments, e.g. the cores of jets or boosted tau leptons, become much more abundant. These environments are characterised by charged particle separations on the order of ATLAS inner detector sensor dimensions and are created by the decay of boosted objects. Significant upgrades were made to the track reconstruction code to cope with the expected conditions during LHC Run 2. In particular, new algorithms targeting dense environments were developed. These changes lead to a substantial reduction of reconstruction time while at the same time improving physics performance. The employed methods are presented. In addition, physics performance studies are shown, e.g. a measurement of the fraction of lost tracks in jets with high transverse momentum.

  14. Developing image processing meta-algorithms with data mining of multiple metrics.

    Science.gov (United States)

    Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.

  15. Game-based programming towards developing algorithmic thinking skills in primary education

    Directory of Open Access Journals (Sweden)

    Hariklia Tsalapatas

    2012-06-01

    Full Text Available This paper presents cMinds, a learning intervention that deploys game-based visual programming towards building analytical, computational, and critical thinking skills in primary education. The proposed learning method exploits the structured nature of programming, which is inherently logical and transcends cultural barriers, towards inclusive learning that exposes learners to algorithmic thinking. A visual programming environment, entitled ‘cMinds Learning Suite’, has been developed and is aimed at classroom use. Feedback from the deployment of the learning methods and tools in classrooms in several European countries demonstrates elevated learner motivation for engaging in logical learning activities, fostering of creativity and an entrepreneurial spirit, and promotion of problem-solving capacity.

  16. Development of Mathematical Models for Investigating Maximal Power Point Tracking Algorithms

    Directory of Open Access Journals (Sweden)

    Dominykas Vasarevičius

    2012-04-01

    Full Text Available Solar cells generate maximum power only when the load is optimized according to insolation and module temperature. This function is performed by MPPT systems. While developing an MPPT, it is useful to create a mathematical model that allows the simulation of the different weather conditions affecting solar modules. Solar insolation, cloud cover imitation and solar cell models have been created in the Matlab/Simulink environment. Comparing the simulated solar insolation on a cloudy day with measurements made using a pyrometer shows that the model generates signal changes according to laws similar to those of a real-life signal. The model can generate solar insolation values in real time, which is useful for predicting the amount of electrical energy produced from solar power. The model can also operate using a stored signal, so that a comparison of different MPPT algorithms can be provided. Article in Lithuanian
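
    As an example of the kind of MPPT algorithm such models are built to compare, the sketch below implements the classic perturb-and-observe rule against a toy PV power curve. The curve, step size and starting point are illustrative assumptions, not parameters from the paper.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One perturb-and-observe (P&O) MPPT step: keep perturbing the
    operating voltage in the direction that increased power, otherwise
    reverse.  This is the textbook P&O rule, not the paper's model."""
    direction = 1.0 if (p - p_prev) * (v - v_prev) > 0 else -1.0
    return v + direction * step

def pv_power(v):
    """Toy PV power curve with its maximum near 17 V (a placeholder,
    not a model of a real module)."""
    return max(v * (5.0 - 0.00025 * v ** 3), 0.0)

v_prev, v = 10.0, 10.5
for _ in range(40):
    v_next = perturb_and_observe(v, pv_power(v), v_prev, pv_power(v_prev))
    v_prev, v = v, v_next
print(round(v, 1), "V: oscillating around the maximum power point (~17 V)")
```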

  17. DEVELOPMENT OF GENETIC ALGORITHM-BASED METHODOLOGY FOR SCHEDULING OF MOBILE ROBOTS

    DEFF Research Database (Denmark)

    Dang, Vinh Quang

    This thesis addresses the issues of scheduling of mobile robot(s) at operational levels of manufacturing systems. More specifically, two problems of scheduling of a single mobile robot with part-feeding tasks and scheduling of multiple mobile robots with preemptive tasks are taken into account ... problem and finding optimal solutions for each one. However, the formulated mathematical models could only be applicable to small-scale problems in practice due to the significant increase of computation time as the problem size grows. Note that making schedules of mobile robots is part of the real-time operations of production managers. Hence, to deal with large-scale applications, a heuristic based on genetic algorithms is then developed for each problem to find near-optimal solutions within a reasonable computation time. The quality of these solutions is then compared and evaluated by using ...

  18. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    Science.gov (United States)

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? SUMMARY ANSWER The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. WHAT IS KNOWN ALREADY Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. STUDY DESIGN, SIZE, DURATION Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the

  19. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    International Nuclear Information System (INIS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-01-01

    This paper describes the first part of a series of investigations to develop algorithms for the simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of the available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of aerosol optical properties, in which the major principal components (PCs) of the surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and to the weighting coefficients for each PC of surface reflectance are added to the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with a relative error of less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can reconstruct the spectral surface reflectance with errors of less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and a root-mean-square error of less than 0.003), which suggests self-consistency in the

  20. Validation and Development of a New Automatic Algorithm for Time-Resolved Segmentation of the Left Ventricle in Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    Jane Tufvesson

    2015-01-01

    Full Text Available Introduction. Manual delineation of the left ventricle is the clinical standard for quantification of cardiovascular magnetic resonance images, despite being time consuming and observer dependent. Previous automatic methods generally do not account for one major contributor to stroke volume, the long-axis motion. Therefore, the aim of this study was to develop and validate an automatic algorithm for time-resolved segmentation covering the whole left ventricle, including basal slices affected by long-axis motion. Methods. Ninety subjects imaged with a cine balanced steady-state free precession sequence were included in the study (training set n=40, test set n=50). Manual delineation was the reference standard, and second-observer analysis was performed in a subset (n=25). The automatic algorithm uses a deformable model with expectation-maximization, followed by automatic removal of the papillary muscles and detection of the outflow tract. Results. The mean differences between automatic segmentation and manual delineation were EDV −11 mL, ESV 1 mL, EF −3%, and LVM 4 g in the test set. Conclusions. The automatic LV segmentation algorithm reached accuracy comparable to interobserver variability for manual delineation, thereby bringing automatic segmentation one step closer to clinical routine. The algorithm and all images with manual delineations are available for benchmarking.

  1. Developing a Rapid Algorithm to Enable Rapid Characterization of Alginate Microcapsules

    Science.gov (United States)

    Chan, Ka Hei; Krishnan, Rahul; Alexander, Michael; Lakey, Jonathan R. T.

    2017-01-01

    The islets of Langerhans are endocrine tissue clusters that secrete hormones that regulate the body's glucose, carbohydrate, and fat metabolism, the most important of which is insulin, a hormone secreted by β-cells within the islets. In certain instances, a person's own immune system attacks and destroys them, leading to the development of type 1 diabetes (T1D), a life-long condition that needs daily insulin administration to maintain health and prolong survival. Islet transplantation is a surgical procedure that has demonstrated the ability to normalize blood sugar levels for up to a few years, but the need for chronic immunosuppression relegates it to a last resort that is often only used sparingly and in seriously ill patients. Islet microencapsulation is a biomedical innovation designed to protect islets from the immune system by coating them with a biocompatible polymer, and this new technology has demonstrated various degrees of success in small- and large-animal studies. This success is significantly impacted by microcapsule morphology and encapsulation efficiency. Since hundreds of thousands of microcapsules are generated during the process, characterization of encapsulated islets without the help of some degree of automation would be difficult, time-consuming, and error prone due to inherent observer bias. We have developed an image analysis algorithm that can analyze hundreds of microencapsulated islets and characterize their size, shape, circularity, and distortion with minimal observer bias. This algorithm can be easily adapted to similar nano- or microencapsulation technologies to implement stricter quality control and improve biomaterial device design and success. PMID:27729095

  2. Developing a Rapid Algorithm to Enable Rapid Characterization of Alginate Microcapsules.

    Science.gov (United States)

    Chan, Ka Hei; Krishnan, Rahul; Alexander, Michael; Lakey, Jonathan R T

    2017-05-09

    The islets of Langerhans are endocrine tissue clusters that secrete hormones that regulate the body's glucose, carbohydrate, and fat metabolism, the most important of which is insulin, a hormone secreted by β-cells within the islets. In certain instances, a person's own immune system attacks and destroys them, leading to the development of type 1 diabetes (T1D), a life-long condition that needs daily insulin administration to maintain health and prolong survival. Islet transplantation is a surgical procedure that has demonstrated the ability to normalize blood sugar levels for up to a few years, but the need for chronic immunosuppression relegates it to a last resort that is often only used sparingly and in seriously ill patients. Islet microencapsulation is a biomedical innovation designed to protect islets from the immune system by coating them with a biocompatible polymer, and this new technology has demonstrated various degrees of success in small- and large-animal studies. This success is significantly impacted by microcapsule morphology and encapsulation efficiency. Since hundreds of thousands of microcapsules are generated during the process, characterization of encapsulated islets without the help of some degree of automation would be difficult, time-consuming, and error prone due to inherent observer bias. We have developed an image analysis algorithm that can analyze hundreds of microencapsulated islets and characterize their size, shape, circularity, and distortion with minimal observer bias. This algorithm can be easily adapted to similar nano- or microencapsulation technologies to implement stricter quality control and improve biomaterial device design and success.
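
    As an illustration of the kind of shape measurement such an image-analysis pipeline performs, the sketch below computes size and circularity (4*pi*area/perimeter^2) for blob-like regions, assuming scikit-image is available. The synthetic binary image and the 0.85 circularity threshold are illustrative; they are not the authors' data or criteria.

```python
import numpy as np
from skimage.measure import label, regionprops   # assumes scikit-image

# Synthetic binary image with one round and one distorted "microcapsule";
# in practice the mask would come from thresholding a micrograph.
img = np.zeros((200, 200), dtype=bool)
yy, xx = np.mgrid[0:200, 0:200]
img |= (yy - 60) ** 2 + (xx - 60) ** 2 < 30 ** 2            # round capsule
img |= ((yy - 140) / 2.5) ** 2 + (xx - 140) ** 2 < 20 ** 2  # elongated one

for region in regionprops(label(img)):
    # circularity = 4*pi*A/P^2 is 1 for a perfect circle, lower if distorted
    circ = 4 * np.pi * region.area / region.perimeter ** 2
    verdict = "OK" if circ > 0.85 else "flag for review"    # toy threshold
    print(f"area={int(region.area)}, circularity={circ:.2f}: {verdict}")
```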

  3. Developing Multiple Diverse Potential Designs for Heat Transfer Utilizing Graph Based Evolutionary Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    David J. Muth Jr.

    2006-09-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) to find multiple acceptable solutions for heat transfer in engineering systems during the optimization process. GBEAs are a type of evolutionary algorithm (EA) in which a topology, or geography, is imposed on an evolving population of solutions. The rates at which solutions can spread within the population are controlled by the choice of topology. As in nature, geography can be used to develop and sustain diversity within the solution population. Altering the choice of graph can create a more or less diverse population of potential solutions. The choice of graph can also affect the convergence rate for the EA and the number of mating events required for convergence. The engineering system examined in this paper is a biomass-fueled cookstove used in developing nations for household cooking. In this cookstove, wood is combusted in a small combustion chamber and the resulting hot gases are utilized to heat the stove's cooking surface. The spatial temperature profile of the cooking surface is determined by a series of baffles that direct the flow of hot gases. The optimization goal is to find baffle configurations that provide an even temperature distribution on the cooking surface. Often in engineering, the goal of optimization is not to find the single optimum solution but rather to identify a number of good solutions that can be used as a starting point for detailed engineering design. Because of this, a key aspect of evolutionary optimization is the diversity of the solutions found. The key conclusion in this paper is that GBEAs can be used to create the multiple good solutions needed to support engineering design.

  4. The development and concurrent validity of a real-time algorithm for temporal gait analysis using inertial measurement units.

    Science.gov (United States)

    Allseits, E; Lučarević, J; Gailey, R; Agrawal, V; Gaunaurd, I; Bennett, C

    2017-04-11

    The use of inertial measurement units (IMUs) for gait analysis has emerged as a tool for clinical applications. Shank gyroscope signals have been utilized to identify heel-strike and toe-off, which serve as the foundation for calculating temporal parameters of gait such as single and double limb support time. Recent publications have shown that toe-off occurs later than predicted by the dual minima method (DMM), which has been adopted as an IMU-based gait event detection algorithm. In this study, a real-time algorithm, Noise-Zero Crossing (NZC), was developed to accurately compute temporal gait parameters. Our objective was to determine the concurrent validity of temporal gait parameters derived from the NZC algorithm against parameters measured by an instrumented walkway. The accuracy and precision of temporal gait parameters derived using NZC were compared to those derived using the DMM. The results from Bland-Altman analysis showed that the NZC algorithm had excellent agreement with the instrumented walkway for identifying the temporal gait parameters of Gait Cycle Time (GCT), Single Limb Support (SLS) time, and Double Limb Support (DLS) time. By utilizing the moment of zero shank angular velocity to identify toe-off, the NZC algorithm performed better than the DMM algorithm in measuring SLS and DLS times. Utilizing the NZC algorithm's gait event detection preserves DLS time, which has significant clinical implications for pathologic gait assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
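
    The moment of zero shank angular velocity referred to above is a sign change in the gyroscope signal, which the sketch below detects on a synthetic trace. The noise handling, axis conventions and event labelling of the published NZC algorithm are not reproduced; this only illustrates the core zero-crossing operation.

```python
import numpy as np

def zero_crossings(signal):
    """Indices where the signal changes sign between consecutive samples;
    the zero of the shank angular velocity is the event of interest."""
    s = np.sign(signal)
    return np.where(np.diff(s) != 0)[0] + 1

# Synthetic shank gyro trace: two sine-like "strides" plus sensor noise
t = np.linspace(0, 2, 400)
gyro = np.sin(2 * np.pi * t) + 0.02 * np.random.default_rng(0).normal(size=t.size)

events = zero_crossings(gyro)
print(t[events])        # candidate gait-event times (s)
```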

  5. Developing a modified SEBAL algorithm that is responsive to advection by using limited weather data

    Science.gov (United States)

    Mkhwanazi, Mcebisi

    The use of remote sensing ET algorithms in water management, especially for agricultural purposes, is increasing, and more models are being introduced. The Surface Energy Balance Algorithm for Land (SEBAL) and its variant, Mapping Evapotranspiration with Internalized Calibration (METRIC), are some of the models that are being widely used. While SEBAL has several advantages over other RS models, including that it does not require prior knowledge of soil, crop and other ground details, it has the downside of underestimating evapotranspiration (ET) on days when there is advection, which may be the case on most days in arid and semi-arid areas. METRIC, however, has been modified to account for advection, but in doing so it requires hourly weather data. In most developing countries, while accurate estimates of ET are required, the weather data necessary to use METRIC may not be available. This research was therefore meant to develop a modified version of SEBAL that requires only the minimal weather data likely to be available in these areas and still estimates ET accurately. The data used to develop this model were minimum and maximum temperatures, wind data, preferably the run of wind in the afternoon, and wet bulb temperature. These were used to quantify the advected energy that would increase ET in the field. This was a two-step process; the first was developing the model for standard conditions, which were described as a healthy cover of alfalfa, 40-60 cm tall and not short of water. Under standard conditions, when ET estimated using modified SEBAL was compared with lysimeter-measured ET, the modified SEBAL model had a Mean Bias Error (MBE) of 2.2% compared to -17.1% for the original SEBAL. The Root Mean Square Error (RMSE) was lower for the modified SEBAL model at 10.9% compared to 25.1% for the original SEBAL. The modified SEBAL model, developed on an alfalfa field in Rocky Ford, was then tested on other crops: beans and wheat. It was also tested on

  6. Development of a parallel genetic algorithm using MPI and its application in a nuclear reactor core. Design optimization; Desenvolvimento de um algoritmo genetico paralelo utilizando MPI e sua aplicacao na otimizacao de um projeto neutronico

    Energy Technology Data Exchange (ETDEWEB)

    Waintraub, Marcel; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)]. E-mail: marcel@ien.gov.br; cmnap@ien.gov.br; Baptista, Rafael P. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: rafael@ien.gov.br

    2005-07-01

    This work presents the development of a distributed parallel genetic algorithm applied to a nuclear reactor core design optimization. In the implementation of the parallelism, the Message Passing Interface (MPI) library, a standard for parallel computation on distributed-memory platforms, was used. Another important characteristic of MPI is its portability across various architectures. The main objectives of this paper are: validation of the results obtained by applying this algorithm to a nuclear reactor core optimization problem, through comparisons with previous results presented by Pereira et al.; and a performance test of the Brazilian Nuclear Engineering Institute (IEN) cluster on reactor physics optimization problems. The experiments demonstrated that the developed parallel genetic algorithm using the MPI library produced significant gains in the obtained results and a marked reduction in processing time. These results support the use of parallel genetic algorithms for the solution of nuclear reactor core optimization problems. (author)

  7. An algorithm to improve diagnostic accuracy in diabetes in computerised problem orientated medical records (POMR) compared with an established algorithm developed in episode orientated records (EOMR)

    Directory of Open Access Journals (Sweden)

    Simon de Lusignan

    2015-06-01

    Full Text Available An algorithm that detects errors in the diagnosis, classification or coding of diabetes in primary care computerised medical record (CMR) systems is currently available. However, it was developed on CMR systems that are episode orientated medical records (EOMR), which do not force the user to always code a problem or link data to an existing one. More strictly problem orientated medical record (POMR) systems mandate recording a problem and linking consultation data to it.

  8. Development of pelvis phantom for verification of treatment planning system using convolution, fast superposition, and superposition algorithms

    Directory of Open Access Journals (Sweden)

    Michael Onoriode Akpochafor

    2017-01-01

    Full Text Available Background: The cost of a commercial pelvis phantom is a burden on quality assurance in small and/or low-income radiotherapy centers. An algorithm that is accurate and has a short treatment time is a prized asset in treatment planning. Objectives: The purpose of this study was to develop a hybrid algorithm that balances accuracy and treatment time, and to design a pelvis phantom for evaluating the accuracy of linear accelerator monitor units. Materials and Methods: A pelvis phantom was designed using Plaster of Paris, styrofoam and water, with six hollows for inserting materials that mimic different biological tissues and for the ionization chamber. Computed tomography images of the phantom were transferred to the CMS XiO treatment planning system with three different algorithms. Monitor units were obtained with a clinical linear accelerator in an isocentric setup. The phantom was tested using the convolution (C), fast superposition (FSS), and superposition (S) algorithms with respect to an established reference dose of 1 Gy from a large water phantom. Data analysis was done using GraphPad Prism 5.0. Results: The FSS algorithm showed better accuracy than C and S with bone, lung, and solid-water inhomogeneous inserts. The C algorithm was better than S in terms of treatment time. There was no statistically significant difference between the mean doses for the three algorithms against the reference dose. The maximum percentage deviation was ±4%, below the ±5% International Commission on Radiation Units and Measurements limit. Conclusion: This algorithm can be employed in dose calculation for advanced techniques such as intensity-modulated radiation therapy and RapidArc by radiotherapy centers with multiple-algorithm systems because it is easy to implement. The materials used for the construction of the phantom are very affordable and simple for low-budget radiotherapy centers.

  9. Development and application of an algorithm to compute weighted multiple glycan alignments.

    Science.gov (United States)

    Hosoda, Masae; Akune, Yukie; Aoki-Kinoshita, Kiyoko F

    2017-05-01

    A glycan consists of monosaccharides linked by glycosidic bonds; it has branches and forms complex molecular structures. Databases have been developed to store large amounts of data from glycan-binding experiments, including glycan arrays probed with glycan-binding proteins. However, there are few bioinformatics techniques to analyze these large glycan datasets, because few tools can handle the complexity of glycan structures. Thus, we have developed the MCAW (Multiple Carbohydrate Alignment with Weights) tool, which can align multiple glycan structures, to aid in the understanding of their function as binding recognition molecules. We describe in detail the first algorithm to perform multiple glycan alignments by modeling glycans as trees. To test our tool, we prepared several data sets, and as a result, we found that the glycan motif could be successfully aligned without any prior knowledge applied to the tool, and the known recognition and binding sites of glycans could be aligned at a high rate amongst all the datasets tested. We thus claim that our tool is able to find meaningful glycan recognition and binding patterns using data obtained by glycan-binding experiments. The development and availability of an effective multiple glycan alignment tool opens possibilities for many other glycoinformatics analyses, making this work a significant step towards furthering glycomics analysis. http://www.rings.t.soka.ac.jp. kkiyoko@soka.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  10. Development of thunderstorm monitoring technologies and algorithms by integration of radar, sensors, and satellite images

    Science.gov (United States)

    Adzhieva, Aida A.; Shapovalov, Vitaliy A.; Boldyreff, Anton S.

    2017-10-01

    In the context of the rising frequency of natural disasters and catastrophes, humanity has to develop methods and tools to ensure safe living conditions. The effectiveness of preventive measures depends greatly on the quality and lead time of forecasts of disastrous natural phenomena, which in turn are based on the amount of knowledge about natural hazards, their causes, manifestations, and impact. To prevent them it is necessary to obtain complete and comprehensive information about the extent of spread and severity of the natural processes that can act within a defined territory. For these purposes the High Mountain Geophysical Institute developed an automated workstation for the mining, analysis and archiving of radar, satellite, lightning-sensor and terrestrial (automatic weather station) weather data. The combination and aggregation of data from different meteorological sources makes the system more informative. Satellite data show the cloud cover of the region in the visible and infrared ranges, but carry uncertainty with respect to specific weather events and have a long interval between successive measurements, which complicates their use for very-short-range forecasts of weather phenomena. Radar and lightning-sensor data provide the detection of weather phenomena and their localization against the background of the overall cloud pattern in the region, with a short measurement period for atmospheric phenomena (hail, thunderstorms, showers, squalls, tornadoes). The authors have developed improved algorithms for the recognition of dangerous weather phenomena, based on a complex analysis of the incoming information using the mathematical apparatus of pattern recognition.

  11. Prediction system of hydroponic plant growth and development using algorithm Fuzzy Mamdani method

    Science.gov (United States)

    Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty

    2017-03-01

    Hydroponics is a method of farming without soil. One hydroponic plant is watercress (Nasturtium officinale). The development and growth of hydroponic watercress are influenced by nutrient levels, acidity and temperature. These independent variables can be used as the input variables of a system that predicts the level of plant growth and development. The prediction system uses the Mamdani fuzzy inference method. The system was built to implement a Fuzzy Inference System (FIS) as part of the Fuzzy Logic Toolbox (FLT) in MATLAB R2007b. An FIS is a computing system that works on the principle of fuzzy reasoning, which is similar to human reasoning. Basically an FIS consists of four units: a fuzzification unit, a fuzzy reasoning unit, a knowledge base unit and a defuzzification unit. In addition, the effect of the independent variables on plant growth and development can be visualized with the three-dimensional FIS output surface diagram, and statistical tests based on the data from the prediction system were carried out using the multiple linear regression method, including multiple linear regression analysis, the t test, the F test, the coefficient of determination and predictor contributions, calculated with the SPSS (Statistical Product and Service Solutions) software.
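
    A minimal Mamdani-style inference sketch following the four units named above (fuzzification, rule evaluation against a knowledge base, defuzzification) is given below. The membership function breakpoints, the two rules and the growth-index output scale are illustrative assumptions, not the ones used in the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def predict_growth(nutrient_ppm, ph):
    # Fuzzification (assumed input ranges)
    nutrient_low  = tri(nutrient_ppm, 0, 300, 700)
    nutrient_high = tri(nutrient_ppm, 500, 1000, 1400)
    ph_ok   = tri(ph, 5.5, 6.5, 7.5)
    ph_poor = 1.0 - ph_ok

    # Output universe: growth index from 0 to 10 (assumed scale)
    y = np.linspace(0.0, 10.0, 501)
    growth_slow = tri(y, 0, 2, 5)
    growth_fast = tri(y, 5, 8, 10)

    # Assumed rule base: high nutrients AND good pH -> fast; low nutrients OR poor pH -> slow
    w_fast = min(nutrient_high, ph_ok)
    w_slow = max(nutrient_low, ph_poor)

    # Mamdani implication (clip), aggregation (max), centroid defuzzification
    aggregated = np.maximum(np.minimum(w_fast, growth_fast), np.minimum(w_slow, growth_slow))
    if aggregated.sum() == 0:
        return 5.0  # neutral fallback when no rule fires
    return float(np.sum(y * aggregated) / np.sum(aggregated))

print(predict_growth(nutrient_ppm=900, ph=6.4))
```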

  12. Development of the Tardivo Algorithm to Predict Amputation Risk of Diabetic Foot.

    Directory of Open Access Journals (Sweden)

    João Paulo Tardivo

    Full Text Available Diabetes is a chronic disease that affects almost 19% of the elderly population in Brazil and similar percentages around the world. Amputation of lower limbs in diabetic patients who present foot complications is a common occurrence with a significant reduction of life quality, and heavy costs on the health system. Unfortunately, there is no easy protocol to define the conditions that should be considered to proceed to amputation. The main objective of the present study is to create a simple prognostic score to evaluate the diabetic foot, which is called Tardivo Algorithm. Calculation of the score is based on three main factors: Wagner classification, signs of peripheral arterial disease (PAD), which is evaluated by using Peripheral Arterial Disease Classification, and the location of ulcers. The final score is obtained by multiplying the value of the individual factors. Patients with good peripheral vascularization received a value of 1, while clinical signs of ischemia received a value of 2 (PAD 2). Ulcer location was defined as forefoot, midfoot and hind foot. The conservative treatment used in patients with scores below 12 was based on a recently developed Photodynamic Therapy (PDT) protocol. 85.5% of these patients presented a good outcome and avoided amputation. The results showed that scores 12 or higher represented a significantly higher probability of amputation (odds ratio and logistic regression, 95% CI 12.2-1886.5). The Tardivo algorithm is a simple prognostic score for the diabetic foot, easily accessible by physicians. It helps to determine the amputation risk and the best treatment, whether it is conservative or surgical management.

  13. Development and evaluation of a micro-macro algorithm for the simulation of polymer flow

    International Nuclear Information System (INIS)

    Feigl, Kathleen; Tanner, Franz X.

    2006-01-01

    A micro-macro algorithm for the calculation of polymer flow is developed and numerically evaluated. The system being solved consists of the momentum and mass conservation equations from continuum mechanics coupled with a microscopic-based rheological model for polymer stress. Standard finite element techniques are used to solve the conservation equations for velocity and pressure, while stochastic simulation techniques are used to compute polymer stress from the simulated polymer dynamics in the rheological model. The rheological model considered combines aspects of reptation, network and continuum models. Two types of spatial approximation are considered for the configuration fields defining the dynamics in the model: piecewise constant and piecewise linear. The micro-macro algorithm is evaluated by simulating the abrupt planar die entry flow of a polyisobutylene solution described in the literature. The computed velocity and stress fields are found to be essentially independent of mesh size and ensemble size, while there is some dependence of the results on the order of spatial approximation to the configuration fields close to the die entry. Comparison with experimental data shows that the piecewise linear approximation leads to better predictions of the centerline first normal stress difference. Finally, the computational time associated with the piecewise constant spatial approximation is found to be about 2.5 times lower than that associated with the piecewise linear approximation. This is the result of the more efficient time integration scheme that is possible with the former type of approximation due to the pointwise incompressibility guaranteed by the choice of velocity-pressure finite element

  14. Development of the Tardivo Algorithm to Predict Amputation Risk of Diabetic Foot.

    Science.gov (United States)

    Tardivo, João Paulo; Baptista, Maurício S; Correa, João Antonio; Adami, Fernando; Pinhal, Maria Aparecida Silva

    2015-01-01

    Diabetes is a chronic disease that affects almost 19% of the elderly population in Brazil and similar percentages around the world. Amputation of lower limbs in diabetic patients who present foot complications is a common occurrence with a significant reduction of life quality, and heavy costs on the health system. Unfortunately, there is no easy protocol to define the conditions that should be considered to proceed to amputation. The main objective of the present study is to create a simple prognostic score to evaluate the diabetic foot, which is called Tardivo Algorithm. Calculation of the score is based on three main factors: Wagner classification, signs of peripheral arterial disease (PAD), which is evaluated by using Peripheral Arterial Disease Classification, and the location of ulcers. The final score is obtained by multiplying the value of the individual factors. Patients with good peripheral vascularization received a value of 1, while clinical signs of ischemia received a value of 2 (PAD 2). Ulcer location was defined as forefoot, midfoot and hind foot. The conservative treatment used in patients with scores below 12 was based on a recently developed Photodynamic Therapy (PDT) protocol. 85.5% of these patients presented a good outcome and avoided amputation. The results showed that scores 12 or higher represented a significantly higher probability of amputation (Odds ratio and logistic regression-IC 95%, 12.2-1886.5). The Tardivo algorithm is a simple prognostic score for the diabetic foot, easily accessible by physicians. It helps to determine the amputation risk and the best treatment, whether it is conservative or surgical management.
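
    The multiplicative scoring rule described above (Wagner grade, a PAD factor of 1 or 2, and an ulcer-location factor, with 12 as the decision threshold) can be sketched as a small function. The numeric weights assigned to the forefoot/midfoot/hindfoot locations below are assumptions for illustration; the paper defines the actual values.

```python
# Hedged sketch of a Tardivo-style score; location weights are hypothetical.
LOCATION_FACTOR = {"forefoot": 1, "midfoot": 2, "hindfoot": 3}  # assumed weights

def tardivo_like_score(wagner_grade: int, pad_factor: int, ulcer_location: str) -> int:
    if pad_factor not in (1, 2):
        raise ValueError("PAD factor must be 1 (good perfusion) or 2 (ischemia)")
    return wagner_grade * pad_factor * LOCATION_FACTOR[ulcer_location]

score = tardivo_like_score(wagner_grade=3, pad_factor=2, ulcer_location="midfoot")
print(score, "high amputation risk" if score >= 12 else "conservative treatment candidate")
```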

  15. Development of the knowledge-based and empirical combined scoring algorithm (KECSA) to score protein-ligand interactions.

    Science.gov (United States)

    Zheng, Zheng; Merz, Kenneth M

    2013-05-24

    We describe a novel knowledge-based protein-ligand scoring function that employs a new definition for the reference state, allowing us to relate a statistical potential to a Lennard-Jones (LJ) potential. In this way, the LJ potential parameters were generated from protein-ligand complex structural data contained in the Protein Data Bank (PDB). Forty-nine (49) types of atomic pairwise interactions were derived using this method, which we call the knowledge-based and empirical combined scoring algorithm (KECSA). Two validation benchmarks were introduced to test the performance of KECSA. The first validation benchmark included two test sets that address the training set and the enthalpy/entropy of KECSA. The second validation benchmark suite included two large-scale and five small-scale test sets, to compare the reproducibility of KECSA with respect to two empirical scoring functions previously developed in our laboratory (LISA and LISA+), as well as to other well-known scoring methods. Validation results illustrate that KECSA shows improved performance in all test sets when compared with other scoring methods, especially in its ability to minimize the root mean square error (RMSE). LISA and LISA+ displayed similar performance using the correlation coefficient and Kendall τ as the metric of quality for some of the small test sets. Further pathways for improvement, which would allow KECSA to be more sensitive to subtle changes in ligand structure, are discussed.
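
    For orientation, the generic relations behind this class of scoring function are summarized below: a statistical pair potential obtained from observed pair distributions via an inverse-Boltzmann construction, and a Lennard-Jones form fitted per atom-pair type. These are standard textbook expressions, not KECSA's exact formulation, and the choice of reference distribution is precisely the modeling decision the abstract highlights.

```latex
% Schematic relations (not the exact KECSA formulation):
\begin{align}
  u_{ij}(r) &= -k_B T \,\ln\frac{\rho_{ij}^{\mathrm{obs}}(r)}{\rho_{ij}^{\mathrm{ref}}(r)},
  && \text{(inverse-Boltzmann statistical potential)}\\
  u_{ij}^{\mathrm{LJ}}(r) &= \varepsilon_{ij}\left[\left(\frac{r_{ij}^{0}}{r}\right)^{12}
      - 2\left(\frac{r_{ij}^{0}}{r}\right)^{6}\right],
  && \text{(LJ form fitted per atomic pair type)}
\end{align}
```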

  16. Development of a Scaling Algorithm for Remotely Sensed and In-situ Soil Moisture Data across Complex Terrain

    Science.gov (United States)

    Shin, Y.; Mohanty, B. P.

    2012-12-01

    Spatial scaling algorithms have been developed/improved for increasing the availability of remotely sensed (RS) and in-situ soil moisture data for hydrologic applications. Existing approaches have their own drawbacks such as application in complex terrains, complexity of coupling downscaling and upscaling approaches, etc. In this study, we developed joint downscaling and upscaling algorithm for remotely sensed and in-situ soil moisture data. Our newly developed algorithm can downscale RS soil moisture footprints as well as upscale in-situ data simultaneously in complex terrains. This scheme is based on inverse modeling with a genetic algorithm. Normalized digital elevation model (NDEM) and normalized difference vegetation index (NDVI) that represent the heterogeneity of topography and vegetation covers, were used to characterize the variability of land surface. Our approach determined soil hydraulic parameters from RS and in-situ soil moisture at the airborne-/satellite footprint scales. Predicted soil moisture estimates were driven by derived soil hydraulic properties using a hydrological model (Soil-Water-Atmosphere-Plant, SWAP). As model simulated soil moisture predictions were generated for different elevations and NDVI values across complex terrains at a finer-scale (30 m 30 m), downscaled and upscaled soil moisture estimates were obtained. We selected the Little Washita watershed in Oklahoma for validating our proposed methodology at multiple scales. This newly developed joint downscaling and upscaling algorithm performed well across topographically complex regions and improved the availability of RS and in-situ soil moisture at appropriate scales for agriculture and water resources management efficiently.

  17. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records.

    Science.gov (United States)

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-08-21

    To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008-31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgements of expert clinicians within the 1200 record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1
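
    The validation metrics reported above (sensitivity, specificity, positive predictive value and F-measure) all derive from a 2x2 confusion matrix of algorithm classifications against the expert gold standard. The sketch below shows that computation; the counts are hypothetical, not the study's data.

```python
# Sketch of classifier validation metrics from confusion-matrix counts (hypothetical values).

def classification_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                 # recall
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                         # positive predictive value (precision)
    f_measure = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "f_measure": f_measure}

print(classification_metrics(tp=180, fp=14, tn=950, fn=56))
```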

  18. Development of a control algorithm for the ultrasound scanning robot (NCCUSR) using ultrasound image and force feedback.

    Science.gov (United States)

    Kim, Yeoun Jae; Seo, Jong Hyun; Kim, Hong Rae; Kim, Kwang Gi

    2017-06-01

    Clinicians who frequently perform ultrasound scanning procedures often suffer from musculoskeletal disorders, arthritis, and myalgias. To minimize their occurrence and to assist clinicians, ultrasound scanning robots have been developed worldwide. Although, to date, there is still no commercially available ultrasound scanning robot, many control methods have been suggested and researched. These control algorithms are either image based or force based. If an ultrasound scanning robot control algorithm combined the two, it could benefit from the advantages of each. However, there are no existing control methods for ultrasound scanning robots that combine force control and image analysis. Therefore, in this work, a control algorithm is developed for an ultrasound scanning robot using force feedback and ultrasound image analysis. A manipulator-type ultrasound scanning robot named 'NCCUSR' is developed and a control algorithm for this robot is suggested and verified. First, conventional hybrid position-force control is implemented for the robot, and the hybrid position-force control algorithm is then combined with ultrasound image analysis to fully control the robot. The control method is verified using a thyroid phantom. It was found that the proposed algorithm can be applied to control the ultrasound scanning robot, and experimental outcomes suggest that the images acquired using the proposed control method can yield a rating score equivalent to images acquired directly by the clinicians. The proposed control method can be applied to control the ultrasound scanning robot. However, more work must be completed to verify the proposed control method in order for it to become clinically feasible. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Modelling and development of estimation and control algorithms: application to a bio process; Modelisation et elaboration d'algorithmes d'estimation et de commande: application a un bioprocede

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M.

    1995-02-03

    Modelling, estimation and control of an alcoholic fermentation process are the purpose of this thesis. A simple mathematical model of a fermentation process is established by using experimental results obtained on the plant. This nonlinear model is used for numerical simulation, analysis and synthesis of estimation and control algorithms. The problem of nonlinear state and parameter estimation of bio-processes is studied. Two estimation techniques are developed and proposed to bypass the lack of sensors for certain physical variables. Their performances are studied by numerical simulation. One of these estimators is validated on experimental results of batch and continuous fermentations. An adaptive control law is proposed for the regulation and tracking of the substrate concentration of the plant by acting on the dilution rate. It is a nonlinear control strategy coupled with the previously validated estimator. The performance of this control law is evaluated by a real application to a continuous-flow fermentation process. (author) refs.

  20. Development of imaging and reconstructions algorithms on parallel processing architectures for applications in non-destructive testing

    International Nuclear Information System (INIS)

    Pedron, Antoine

    2013-01-01

    This thesis work lies at the interface between ultrasound non-destructive testing and algorithm-architecture matching. Ultrasound non-destructive testing includes a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. In order to characterise possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST, within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstructions. General-purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. These two algorithms differ in their parallelization scheme. The first one can be properly parallelized on GPP, whereas on GPU an intensive use of atomic instructions is required. Within the second algorithm, parallelism is easier to express, but loop ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary in order to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms in the CIVA software platform is proposed and different issues related to code maintenance and durability are discussed. (author) [fr

  1. Development of algorithm for continuous generation of a computer game in terms of usability and optimization of developed code in computer science

    Directory of Open Access Journals (Sweden)

    Tibor Skala

    2018-03-01

    Full Text Available As both hardware and software have become increasingly available and are constantly being developed, they contribute globally to improvements in technology in every field of technology and the arts. Digital tools for the creation and processing of graphical content are highly developed and have been designed to shorten the time required for content creation, which in this case is animation. Since contemporary animation has experienced a surge in visual styles and visualization methods, programming is built into everything that is currently in use. There is no doubt that a variety of algorithms and software are the brain and moving force behind any idea created for a specific purpose and applicability in society. Art and technology combined make a direct and targeted medium for publishing and marketing in every industry, including those not necessarily closely related to ones that rely heavily on the visual aspect of their work. Additionally, the quality and consistency of an algorithm depend both on its proper integration into the system it powers and on the way the algorithm is designed. The development of an endless algorithm and its effective use are demonstrated through the use of a computer game. In order to present the effect of various parameters, in the final phase of the computer game development the endless algorithm was tested with a varying number of key input parameters (achieved time, score reached, pace of the game).

  2. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    Directory of Open Access Journals (Sweden)

    Hualiang Zhong

    2016-01-01

    Full Text Available Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms within 1.8% at the center of the tumor volume and within 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly over the 5% threshold for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to a deforming lung.
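
    The core operation being validated, accumulating per-phase doses onto a reference geometry through a displacement vector field, can be sketched with plain trilinear resampling as below. This deliberately omits the mass- and energy-congruent weighting used in the study, and all arrays are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(phase_doses, dvfs):
    """phase_doses: list of 3D dose arrays (Gy); dvfs: displacement fields, shape (3, nx, ny, nz) in voxels."""
    ref_shape = phase_doses[0].shape
    grid = np.indices(ref_shape).astype(float)       # reference voxel coordinates, shape (3, nx, ny, nz)
    total = np.zeros(ref_shape)
    for dose, dvf in zip(phase_doses, dvfs):
        warped_coords = grid + dvf                   # where each reference voxel maps to in this phase
        total += map_coordinates(dose, warped_coords, order=1, mode="nearest")  # trilinear pull-back
    return total

# Tiny synthetic example: two phases of 0.5 Gy each on an 8x8x8 grid with zero displacement.
doses = [np.full((8, 8, 8), 0.5), np.full((8, 8, 8), 0.5)]
dvfs = [np.zeros((3, 8, 8, 8)), np.zeros((3, 8, 8, 8))]
print(accumulate_dose(doses, dvfs)[4, 4, 4])         # ~1.0 Gy at the centre voxel
```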

  3. Utilization of Ancillary Data Sets for Conceptual SMAP Mission Algorithm Development and Product Generation

    Science.gov (United States)

    O'Neill, P.; Podest, E.

    2011-01-01

    The planned Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey, Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond [1]. Scheduled to launch late in 2014, the proposed SMAP mission would provide high resolution and frequent revisit global mapping of soil moisture and freeze/thaw state, utilizing enhanced Radio Frequency Interference (RFI) mitigation approaches to collect new measurements of the hydrological condition of the Earth's surface. The SMAP instrument design incorporates an L-band radar (3 km) and an L band radiometer (40 km) sharing a single 6-meter rotating mesh antenna to provide measurements of soil moisture and landscape freeze/thaw state [2]. These observations would (1) improve our understanding of linkages between the Earth's water, energy, and carbon cycles, (2) benefit many application areas including numerical weather and climate prediction, flood and drought monitoring, agricultural productivity, human health, and national security, (3) help to address priority questions on climate change, and (4) potentially provide continuity with brightness temperature and soil moisture measurements from ESA's SMOS (Soil Moisture Ocean Salinity) and NASA's Aquarius missions. In the planned SMAP mission prelaunch time frame, baseline algorithms are being developed for generating (1) soil moisture products both from radiometer measurements on a 36 km grid and from combined radar/radiometer measurements on a 9 km grid, and (2) freeze/thaw products from radar measurements on a 3 km grid. These retrieval algorithms need a variety of global ancillary data, both static and dynamic, to run the retrieval models, constrain the retrievals, and provide flags for indicating retrieval quality. The choice of which ancillary dataset to use for a particular SMAP product would be based on a number of factors

  4. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    Science.gov (United States)

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Summary Background Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  5. Development of tight-binding based GW algorithm and its computational implementation for graphene

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, Muhammad Aziz [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore); Naradipa, Muhammad Avicenna, E-mail: muhammad.avicenna11@ui.ac.id; Phan, Wileam Yonatan; Syahroni, Ahmad [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); Rusydi, Andrivo [NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore)

    2016-04-19

    Graphene has been a hot subject of research in the last decade as it holds promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as its optical properties may change depending on several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate the correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because eventually they play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We carry out this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is constructed through a tight-binding or similar model. This study includes the theoretical formulation of the Green's function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self-energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.
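
    For readers unfamiliar with the quantities named above, the schematic GW/RPA relations are summarized below in their standard form; these are generic textbook expressions, not the paper's tight-binding-specific implementation.

```latex
% Schematic GW/RPA relations (standard forms, hedged as a general reference):
\begin{align}
  P^{0}(1,2) &= -\,i\,G(1,2)\,G(2,1), && \text{(RPA polarization)}\\
  W &= \bigl[\,1 - v\,P^{0}\,\bigr]^{-1} v, && \text{(screened Coulomb interaction)}\\
  \Sigma(1,2) &= i\,G(1,2)\,W(1^{+},2). && \text{(GW self-energy)}
\end{align}
```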

  6. Decoding neural events from fMRI BOLD signal: a comparison of existing approaches and development of a new algorithm.

    Science.gov (United States)

    Bush, Keith; Cisler, Josh

    2013-07-01

    Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variances in fluctuations of the BOLD signal are not only due to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semiblind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system's state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification and observation sampling rate. Further, we compare the algorithms' performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms' performance on real task data. The results demonstrated good performance of all algorithms, though the new algorithm generally outperformed the others (3.0% improvement) under simulated resting-state experimental conditions exhibiting multiple, realistic confounding factors (as well as 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from fMRI BOLD signal are discussed. Copyright © 2013 Elsevier Inc. All rights reserved.
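
    The forward model that such deconvolution algorithms invert is a sparse neural event train convolved with a hemodynamic response function (HRF) plus noise. The sketch below simulates that model; the double-gamma HRF parameters follow a commonly used canonical shape and are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=2.0, duration=32.0):
    """Double-gamma HRF sampled at the repetition time (assumed canonical parameters)."""
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0       # positive peak ~6 s, late undershoot
    return hrf / hrf.max()

rng = np.random.default_rng(0)
n, tr = 200, 2.0
neural = (rng.random(n) < 0.05).astype(float)            # sparse latent neural events
bold = np.convolve(neural, canonical_hrf(tr))[:n]        # convolve events with the HRF
bold += 0.1 * rng.standard_normal(n)                     # additive measurement noise
print(bold[:5])
```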

  7. Development of Variational Guiding Center Algorithms for Parallel Calculations in Experimental Magnetic Equilibria

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, C. Leland [PPPL; Finn, J. M. [LANL; Qin, H. [PPPL; Tang, William M. [PPPL

    2014-10-01

    Structure-preserving algorithms obtained via discrete variational principles exhibit strong promise for the calculation of guiding center test particle trajectories. The non-canonical Hamiltonian structure of the guiding center equations forms a novel and challenging context for geometric integration. To demonstrate the practical relevance of these methods, a prototypical variational midpoint algorithm is applied to an experimental magnetic equilibrium. The stability characteristics, conservation properties, and implementation requirements associated with the variational algorithms are addressed. Furthermore, computational run time is reduced for large numbers of particles by parallelizing the calculation on GPU hardware.

  8. Cognitive Development Optimization Algorithm Based Support Vector Machines for Determining Diabetes

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2016-03-01

    Full Text Available The definition, diagnosis and classification of Diabetes Mellitus and its complications are very important. The World Health Organization (WHO) and other societies, as well as scientists, have carried out many studies on this subject. One of the most important research interests in this area is computer-supported decision systems for diagnosing diabetes. In such systems, Artificial Intelligence techniques are often used for several disease diagnostics to streamline the diagnostic process in daily routine and avoid misdiagnosis. In this study, a diabetes diagnosis system formed from both Support Vector Machines (SVM) and the Cognitive Development Optimization Algorithm (CoDOA) is proposed. During the training of the SVM, CoDOA was used to determine the sigma parameter of the Gaussian (RBF) kernel function, and eventually a classification process was carried out on the Pima Indians diabetes data set. The proposed approach offers an alternative solution in the field of Artificial Intelligence-based diabetes diagnosis, and contributes to the related literature on diagnosis processes.
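
    The tuning problem described above amounts to choosing the RBF kernel width that maximizes cross-validated accuracy. In the sketch below a simple random search stands in for CoDOA, and randomly generated data stands in for the Pima Indians set; both substitutions are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 8))                        # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)            # placeholder labels

best_gamma, best_score = None, -np.inf
for gamma in 10.0 ** rng.uniform(-3, 1, size=20):        # candidate kernel widths (random search)
    score = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=5).mean()
    if score > best_score:
        best_gamma, best_score = gamma, score

print(f"best gamma ~ {best_gamma:.4f}, CV accuracy ~ {best_score:.3f}")
```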

  9. Development of Efficient Resource Allocation Algorithm in Chunk Based OFDMA System

    Directory of Open Access Journals (Sweden)

    Yadav Mukesh Kumar

    2016-01-01

    Full Text Available The emerging demand for diverse data applications in next-generation wireless networks entails both high-data-rate wireless connections and intelligent multiuser scheduling designs. Orthogonal frequency division multiple access (OFDMA) based systems are capable of delivering high data rates and can operate in a multipath environment. An OFDMA-based system divides the entire channel into many orthogonal narrowband subcarriers, which helps eliminate inter-symbol interference, a factor that limits the total available data rate. This paper investigates the resource allocation problem for chunk-based OFDMA wireless multicast systems. It is assumed that the Base Station (BS) has multiple antennas in a Distributed Antenna System (DAS). The allocation unit is a group of contiguous subcarriers (chunk), as in conventional OFDMA systems. The aim of this investigation is to develop an efficient resource allocation algorithm to maximize the total throughput and minimize the average outage probability over a chunk with respect to the average Bit Error Rate (BER) and the total available power.
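
    A toy illustration of chunk-based allocation follows: each chunk is assigned to the user with the best average channel gain on that chunk, a simple greedy rule aimed at maximizing sum throughput. The random channel gains and uniform SNR scaling are placeholders; the paper's algorithm additionally handles multicast groups, power and BER constraints.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_chunks, subcarriers_per_chunk = 4, 8, 12
gains = rng.rayleigh(size=(n_users, n_chunks, subcarriers_per_chunk))   # |h|^2 proxies

chunk_quality = gains.mean(axis=2)              # average gain per (user, chunk)
assignment = chunk_quality.argmax(axis=0)       # greedy: best user wins each chunk

snr = 10.0                                      # assumed uniform SNR scaling
throughput = np.log2(1.0 + snr * chunk_quality[assignment, np.arange(n_chunks)]).sum()
print("chunk -> user:", assignment, "sum rate ~", round(float(throughput), 2), "bit/s/Hz")
```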

  10. Development of tomographic reconstruction algorithms for the PIXE analysis of biological samples

    International Nuclear Information System (INIS)

    Nguyen, D.T.

    2008-05-01

    The development of three-dimensional microscopy techniques offering a spatial resolution of 1 μm or less has opened a large field of investigation in cell biology. Amongst them, an interesting advantage of ion beam micro-tomography is its ability to give quantitative results in terms of local concentrations in a direct way, using Particle Induced X-ray Emission Tomography (PIXET) combined with Scanning Transmission Ion Microscopy Tomography (STIMT). After a brief introduction to existing reconstruction techniques, we present the principle of the DISRA code, the most complete one written so far, which is the basis of the present work. We have modified and extended the DISRA algorithm by considering the specific aspects of biological specimens. Moreover, correction procedures were added to the code to reduce noise in the tomograms. For portability purposes, a Windows graphic interface was designed to easily enter and modify experimental parameters used in the reconstruction, and to control the several steps of data reduction. Results of STIMT and PIXET experiments on reference specimens and on human cancer cells will also be presented. (author)

  11. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    Science.gov (United States)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
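
    The building block such a planner strings together between waypoints is a Bezier segment evaluated from its control points. The sketch below evaluates a cubic segment in 3-D; the control-point values are arbitrary placeholders, not derived from the paper's algorithm.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve defined by four 3-D control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

p0, p3 = np.array([0.0, 0.0, 10.0]), np.array([30.0, 20.0, 12.0])   # waypoints
p1, p2 = np.array([10.0, 0.0, 11.0]), np.array([20.0, 20.0, 11.0])  # shaping control points
path = cubic_bezier(p0, p1, p2, p3)
print(path[0], path[-1])   # the curve starts at p0 and ends at p3
```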

  12. Development of Ray Tracing Algorithms for Scanning Plane and Transverse Plane Analysis for Satellite Multibeam Application

    Directory of Open Access Journals (Sweden)

    N. H. Abd Rahman

    2014-01-01

    Full Text Available Reflector antennas have been widely used in many areas. In the implementation of parabolic reflector antennas for broadcasting satellite applications, it is essential for the spacecraft antenna to provide a precise contoured beam to effectively serve the required region. For this purpose, combinations of more than one beam are required. Therefore, a tool utilizing the ray tracing method is developed to calculate precise off-axis beams for a multibeam antenna system. In the multibeam system, each beam will be fed from a different feed position to allow the main beam to be radiated in the exact direction on the coverage area. Thus, a detailed study on the caustics of a parabolic reflector antenna is performed and presented in this paper, to investigate the behaviour of the rays and their relation to various antenna parameters. In order to produce accurate data for the analysis, the caustic behaviours are investigated in two distinctive modes: the scanning plane and the transverse plane. This paper presents detailed discussions on the derivation of the ray tracing algorithms, the establishment of the equations of the caustic loci, and the verification of the method through calculation of the radiation pattern.

  13. Prediction Model for Object Oriented Software Development Effort Estimation Using One Hidden Layer Feed Forward Neural Network with Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Chandra Shekhar Yadav

    2014-01-01

    Full Text Available The budget computation for software development is affected by the prediction of software development effort and schedule. Software development effort and schedule can be predicted precisely on the basis of past software project data sets. In this paper, a model for object-oriented software development effort estimation using a one hidden layer feed forward neural network (OHFNN) has been developed. The model has been further optimized with the help of a genetic algorithm by taking the weight vector obtained from the OHFNN as the initial population for the genetic algorithm. Convergence has been obtained by minimizing the sum of squared errors of each input vector, and the optimal weight vector has been determined to predict the software development effort. The model has been empirically validated on the PROMISE software engineering repository dataset. The performance of the model is more accurate than that of the well-established constructive cost model (COCOMO).

  14. A case of cutaneous squamous cell carcinoma associated with small cell carcinoma of lung developing a skin metastasis on previously irradiated area

    International Nuclear Information System (INIS)

    Kohda, Mamoru; Takei, Yoji; Ueki, Hiroaki

    1983-01-01

    Squamous cell carcinoma which occurred in the penis of a 61-year-old male patient was treated surgically and by Linac irradiation (a total of 10,400 rad). However, it was not cured. Abnormal shadows in the lung and multiple liver tumors were noted one month before death. Autopsy revealed generalized metastases of pulmonary small-cell carcinoma, and persistent squamous cell carcinoma of the penis with no metastases. Skin metastasis of the lung carcinoma occurred only in the area previously irradiated. (Ueda, J.)

  15. Development of an operationally efficient PTC braking enforcement algorithm for freight trains.

    Science.gov (United States)

    2013-08-01

    Software algorithms used in positive train control (PTC) systems designed to predict freight train stopping distance and enforce a penalty brake application have been shown to be overly conservative, which can lead to operational inefficiencies by in...

  16. Development of Liquid Capacity Measuring Algorithm Based on Data Integration from Multiple Sensors

    Directory of Open Access Journals (Sweden)

    Kiwoong Park

    2016-01-01

    Full Text Available This research proposes an algorithm using a process of integrating data from multiple sensors to measure the liquid capacity in real time regardless of the position of the liquid tank. The algorithm for measuring the capacity was created with a complementary filter using a Kalman filter in order to revise the level sensor data and IMU sensor data. The measuring precision of the proposed algorithm was assessed through repetitive experiments by varying the liquid capacity and the rotation angle of the liquid tank. The measurements of the capacity within the liquid tank were precise, even when the liquid tank was rotated. Using the proposed algorithm, one can obtain highly precise measurements, and it is affordable since an existing level sensor is used.
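
    A hedged sketch of the fusion idea follows: tank tilt is estimated by complementary-filtering IMU gyroscope and accelerometer data, and the tilt is then used to correct the level-sensor reading before converting it to a capacity. The rectangular tank geometry, the cosine tilt correction and all sample values are assumptions for illustration, not the paper's tank model or filter design.

```python
import math

def complementary_tilt(prev_tilt, gyro_rate, accel_tilt, dt, alpha=0.98):
    """Blend integrated gyro rate (smooth but drifting) with accelerometer tilt (noisy but absolute)."""
    return alpha * (prev_tilt + gyro_rate * dt) + (1.0 - alpha) * accel_tilt

def capacity_litres(level_m, tilt_rad, length_m=1.2, width_m=0.8):
    corrected_level = level_m * math.cos(tilt_rad)   # project the slanted sensor reading to vertical
    return corrected_level * length_m * width_m * 1000.0

tilt, dt = 0.0, 0.01
samples = [(0.002, 0.00, 0.500), (0.050, 0.04, 0.510), (0.120, 0.11, 0.505)]  # (gyro rad/s, accel tilt rad, level m)
for gyro, accel, level in samples:
    tilt = complementary_tilt(tilt, gyro, accel, dt)
    print(round(capacity_litres(level, tilt), 1), "L")
```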

  17. Probabilistic analysis algorithm for UA slope software program.

    Science.gov (United States)

    2013-12-01

    A reliability-based computational algorithm for using a single row and equally spaced drilled shafts to : stabilize an unstable slope has been developed in this research. The Monte-Carlo simulation (MCS) : technique was used in the previously develop...

  18. Algorithms for Zonal Methods and Development of Three Dimensional Mesh Generation Procedures.

    Science.gov (United States)

    1984-02-01

    Figure 1. ... warped spherical grid. Figure 2. Example of Mesh Embedding in Two Dimensions. Figure 3. Example of ... Navier-Stokes algorithm with a well-stretched grid. Such negative arguments may be offset by additional advantages for zonal schemes. For one, steady... Stanford, California. ABSTRACT: An algorithm for generating computational grids (nomenclature: T, superscript indicating transpose of a matrix; x, independent variable).

  19. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    Energy Technology Data Exchange (ETDEWEB)

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp{_}tensor and tucker{_}tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
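
    The 'matricization' operation those classes provide is the mode-n unfolding of a multiway array. The sketch below expresses it in NumPy rather than MATLAB for brevity; note that column-ordering conventions for unfoldings vary between references, so this is one valid choice rather than the toolbox's exact layout.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: rows indexed by the chosen mode, columns by all remaining modes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

X = np.arange(24).reshape(2, 3, 4)      # a small 2 x 3 x 4 tensor
print(unfold(X, 0).shape)               # (2, 12)
print(unfold(X, 1).shape)               # (3, 8)
print(unfold(X, 2).shape)               # (4, 6)
```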

  20. Recent Development of Multigrid Algorithms for Mixed and Nonconforming Methods for Second Order Elliptic Problems

    Science.gov (United States)

    Chen, Zhangxin; Ewing, Richard E.

    1996-01-01

    Multigrid algorithms for nonconforming and mixed finite element methods for second order elliptic problems on triangular and rectangular finite elements are considered. The construction of several coarse-to-fine intergrid transfer operators for nonconforming multigrid algorithms is discussed. The equivalence between the nonconforming and mixed finite element methods with and without projection of the coefficient of the differential problems into finite element spaces is described.

  1. Development of Turbulent Diffusion Transfer Algorithms to Estimate Lake Tahoe Water Budget

    Science.gov (United States)

    Sahoo, G. B.; Schladow, S. G.; Reuter, J. E.

    2012-12-01

    Evaporative loss is a dominant component of the Lake Tahoe hydrologic budget because the watershed area (813 km2) is very small compared to the lake surface area (501 km2). The 5.5 m high dam built at the lake's only outlet, the Truckee River at Tahoe City, can increase the lake's capacity by approximately 0.9185 km3. The lake provides flood protection for downstream areas and is a source of water supply for downstream cities, irrigation, hydropower, and instream environmental requirements. When the lake water level falls below the natural rim, cessation of flows from the lake causes problems for water supply, irrigation, and fishing. Therefore, it is important to develop algorithms to correctly estimate the lake hydrologic budget. We developed a turbulent diffusion transfer model and coupled it to the dynamic lake model (DLM-WQ). We generated the stream flows and pollutant loadings of the streams using the US Environmental Protection Agency (USEPA) supported watershed model, Loading Simulation Program in C++ (LSPC). The bulk transfer coefficients were calibrated using the coefficient of determination (R2) as the objective function. Sensitivity analysis was conducted for the meteorological inputs and model parameters. The DLM-WQ estimates of lake water level and water temperature were in agreement with measured records, with R2 equal to 0.96 and 0.99, respectively, for the period 1994 to 2008. The estimated average evaporation from the lake, stream inflow, precipitation over the lake, groundwater fluxes, and outflow from the lake during 1994 to 2008 were found to be 32.0%, 25.0%, 19.0%, 0.3%, and 11.7%, respectively.

  2. Development of effluent removal prediction model efficiency in septic sludge treatment plant through clonal selection algorithm.

    Science.gov (United States)

    Ting, Sie Chun; Ismail, A R; Malek, M A

    2013-11-15

    This study aims at developing a novel effluent removal management tool for septic sludge treatment plants (SSTP) using a clonal selection algorithm (CSA). The proposed CSA articulates the idea of utilizing an artificial immune system (AIS) to identify the behaviour of the SSTP, which uses sequencing batch reactor (SBR) technology for its treatment processes. The novelty of this study is the development of a predictive SSTP model for effluent discharge adopting the human immune system. Septic sludge from individual septic tanks and package plants is desludged and treated in the SSTP before the wastewater is discharged into a waterway. The Borneo island of Sarawak is selected as the case study. Currently, there are only two SSTPs in Sarawak, namely the Matang SSTP and the Sibu SSTP, and both use SBR technology. Monthly effluent discharges from 2007 to 2011 at the Matang SSTP are used in this study. Cross-validation is performed using data from the Sibu SSTP from April 2011 to July 2012. Both chemical oxygen demand (COD) and total suspended solids (TSS) in the effluent were analysed in this study. The model was validated and tested before forecasting future effluent performance. The CSA-based SSTP model was simulated using MATLAB 7.10. The root mean square error (RMSE), mean absolute percentage error (MAPE), and correlation coefficient (R) were used as performance indexes. In this study, it was found that the proposed prediction model was successful up to 84 months for COD and 109 months for TSS. In conclusion, the proposed CSA-based SSTP prediction model is a beneficial engineering tool to forecast the long-run performance of an SSTP and, in turn, to help prevent future environmental imbalance in other towns in Sarawak. Copyright © 2013 Elsevier Ltd. All rights reserved.
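
    The three performance indexes named above can be computed as shown in the sketch below. The observed and predicted values are hypothetical monthly COD figures, not the Matang SSTP data.

```python
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((np.asarray(obs, float) - np.asarray(pred, float)) ** 2)))

def mape(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs((obs - pred) / obs)) * 100.0)

def pearson_r(obs, pred):
    return float(np.corrcoef(obs, pred)[0, 1])

observed  = [92.0, 88.0, 101.0, 95.0, 90.0]     # hypothetical COD values, mg/L
predicted = [90.5, 91.0,  98.0, 96.5, 88.0]
print(rmse(observed, predicted), mape(observed, predicted), pearson_r(observed, predicted))
```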

  3. A treatment algorithm for children with lupus nephritis to prevent developing renal failure

    Directory of Open Access Journals (Sweden)

    Nilofar Hajizadeh

    2014-01-01

    Full Text Available Chronic kidney disease is one of the most common complications of systemic lupus erythematosus and, if untreated, can lead to end-stage renal disease (ESRD). Early diagnosis and adequate treatment of lupus nephritis (LN) are critical to prevent chronic kidney disease and to reduce the development of ESRD. The treatment of LN has changed significantly over the past decade. In patients with active proliferative LN (Classes III and IV), intravenous methylprednisolone 1 g/m2/day for 1-3 days, then prednisone 0.5-1.0 mg/kg/day tapered to <0.5 mg/kg/day after 10-12 weeks of treatment, plus mycophenolate mofetil (MMF) 1.2 g/m2/day for 6 months, followed by maintenance with lower doses of MMF (1-2 g/day) or azathioprine (AZA) 2 mg/kg/day for 3 years, has proven to be efficacious and less toxic than cyclophosphamide (CYC) therapy. Patients with membranous LN (Class V) plus diffuse or focal proliferative LN (Class III and Class IV) should receive either the standard 6 monthly pulses of CYC (0.5-1 g/m2/month, then every 3rd month) or a shorter treatment course consisting of 0.5 g/m2 IV CYC every 2 weeks for six doses (total dose 3 g), followed by maintenance therapy with daily AZA (2 mg/kg/day) or MMF (0.6 g/m2/day) for 3 years. A combination of MMF plus rituximab or MMF plus calcineurin inhibitors may be an effective co-therapy for those refractory to induction or maintenance therapies. This report introduces a new treatment algorithm to prevent the development of ESRD in children with LN.

  4. TIA: algorithms for development of identity-linked SNP islands for analysis by massively parallel DNA sequencing.

    Science.gov (United States)

    Farris, M Heath; Scott, Andrew R; Texter, Pamela A; Bartlett, Marta; Coleman, Patricia; Masters, David

    2018-04-11

    Single nucleotide polymorphisms (SNPs) located within the human genome have been shown to have utility as markers of identity in the differentiation of DNA from individual contributors. Massively parallel DNA sequencing (MPS) technologies and human genome SNP databases allow for the design of suites of identity-linked target regions, amenable to sequencing in a multiplexed and massively parallel manner. Therefore, tools are needed for leveraging the genotypic information found within SNP databases for the discovery of genomic targets that can be evaluated on MPS platforms. The SNP island target identification algorithm (TIA) was developed as a user-tunable system to leverage SNP information within databases. Using data within the 1000 Genomes Project SNP database, human genome regions were identified that contain globally ubiquitous identity-linked SNPs and that were responsive to targeted resequencing on MPS platforms. Algorithmic filters were used to exclude target regions that did not conform to user-tunable SNP island target characteristics. To validate the accuracy of TIA for discovering these identity-linked SNP islands within the human genome, SNP island target regions were amplified from 70 contributor genomic DNA samples using the polymerase chain reaction. Multiplexed amplicons were sequenced using the Illumina MiSeq platform, and the resulting sequences were analyzed for SNP variations. 166 putative identity-linked SNPs were targeted in the identified genomic regions. Of the 309 SNPs that provided discerning power across individual SNP profiles, 74 previously undefined SNPs were identified during evaluation of targets from individual genomes. Overall, DNA samples of 70 individuals were uniquely identified using a subset of the suite of identity-linked SNP islands. TIA offers a tunable genome search tool for the discovery of targeted genomic regions that are scalable in the population frequency and numbers of SNPs contained within the SNP island regions

  5. Development and validation of a novel algorithm based on the ECG magnet response for rapid identification of any unknown pacemaker.

    Science.gov (United States)

    Squara, Fabien; Chik, William W; Benhayon, Daniel; Maeda, Shingo; Latcu, Decebal Gabriel; Lacaze-Gadonneix, Jonathan; Tibi, Thierry; Thomas, Olivier; Cooper, Joshua M; Duthoit, Guillaume

    2014-08-01

    Pacemaker (PM) interrogation requires correct manufacturer identification. However, an unidentified PM is a frequent occurrence, requiring time-consuming steps to identify the device. The purpose of this study was to develop and validate a novel algorithm for PM manufacturer identification, using the ECG response to magnet application. Data on the magnet responses of all recent PM models (≤15 years) from the 5 major manufacturers were collected. An algorithm based on the ECG response to magnet application to identify the PM manufacturer was subsequently developed. Patients undergoing ECG during magnet application in various clinical situations were prospectively recruited in 7 centers. The algorithm was applied in the analysis of every ECG by a cardiologist blinded to PM information. A second blinded cardiologist analyzed a sample of randomly selected ECGs in order to assess the reproducibility of the results. A total of 250 ECGs were analyzed during magnet application. The algorithm led to the correct single manufacturer choice in 242 ECGs (96.8%), whereas 7 (2.8%) could only be narrowed to either 1 of 2 manufacturer possibilities. Only 2 (0.4%) incorrect manufacturer identifications occurred. The algorithm identified Medtronic and Sorin Group PMs with 100% sensitivity and specificity, Biotronik PMs with 100% sensitivity and 99.5% specificity, and St. Jude and Boston Scientific PMs with 92% sensitivity and 100% specificity. The results were reproducible between the 2 blinded cardiologists with 92% concordant findings. Unknown PM manufacturers can be accurately identified by analyzing the ECG magnet response using this newly developed algorithm. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
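
    The record does not disclose the decision criteria, but the general idea of narrowing the manufacturer from the ECG magnet response can be illustrated with a toy lookup on the asynchronous magnet pacing rate; the manufacturer names, rates, and tolerance below are placeholders, and the validated algorithm relies on additional ECG features.

```python
# Toy illustration of narrowing pacemaker manufacturers by magnet-mode pacing rate.
# The rates below are placeholders for illustration; the validated algorithm uses
# additional ECG features and manufacturer-specific magnet behaviours.

PLACEHOLDER_MAGNET_RATES_BPM = {
    "Manufacturer A": 85.0,
    "Manufacturer B": 90.0,
    "Manufacturer C": 96.0,
    "Manufacturer D": 100.0,
}

def candidate_manufacturers(measured_rate_bpm: float, tolerance_bpm: float = 2.0):
    """Return manufacturers whose nominal magnet rate lies within the tolerance of the measured rate."""
    return [name for name, rate in PLACEHOLDER_MAGNET_RATES_BPM.items()
            if abs(rate - measured_rate_bpm) <= tolerance_bpm]

if __name__ == "__main__":
    # A measured magnet-mode rate of 99 bpm narrows the choice to a single candidate here.
    print(candidate_manufacturers(99.0))
```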

  6. Developing the science product algorithm testbed for Chinese next-generation geostationary meteorological satellites: Fengyun-4 series

    Science.gov (United States)

    Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin

    2017-08-01

    Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing the prototypes of FY-4 science algorithms, two science product algorithm testbeds for imagers and sounders have been developed by the scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully by using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully using a proxy imager, Himawari-8/Advanced Himawari Imager (AHI), and sounder data obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016 the FY-4 AWG developed, based on the imager testbed, a near real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.

  7. Development and validation of algorithms to differentiate ductal carcinoma in situ from invasive breast cancer within administrative claims data.

    Science.gov (United States)

    Hirth, Jacqueline M; Hatch, Sandra S; Lin, Yu-Li; Giordano, Sharon H; Silva, H Colleen; Kuo, Yong-Fang

    2018-04-18

    Overtreatment is a common concern for patients with ductal carcinoma in situ (DCIS), but this entity is difficult to distinguish from invasive breast cancers in administrative claims data sets because DCIS often is coded as invasive breast cancer. Therefore, the authors developed and validated algorithms to select DCIS cases from administrative claims data to enable outcomes research in this type of data. This retrospective cohort using invasive breast cancer and DCIS cases included women aged 66 to 70 years in the 2004 through 2011 Texas Cancer Registry (TCR) data linked to Medicare administrative claims data. TCR records were used as "gold" standards to evaluate the sensitivity, specificity, and positive predictive value (PPV) of 2 algorithms. Women with a biopsy enrolled in Medicare parts A and B at 12 months before and 6 months after their first biopsy without a second incident diagnosis of DCIS or invasive breast cancer within 12 months in the TCR were included. Women in 2010 Medicare data were selected to test the algorithms in a general sample. In the TCR data set, a total of 6907 cases met inclusion criteria, with 1244 DCIS cases. The first algorithm had a sensitivity of 79%, a specificity of 89%, and a PPV of 62%. The second algorithm had a sensitivity of 50%, a specificity of 97%, and a PPV of 77%. Among women in the general sample, the specificity was high and the sensitivity was similar for both algorithms. However, the PPV was approximately 6% to 7% lower. DCIS frequently is miscoded as invasive breast cancer, and thus the proposed algorithms are useful to examine DCIS outcomes using data sets not linked to cancer registries. Cancer 2018. © 2018 American Cancer Society.
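
    As a reminder of how such performance figures are computed against registry "gold standard" labels, the short sketch below derives sensitivity, specificity, and PPV from a confusion matrix; the counts are back-calculated to roughly reproduce the first algorithm's reported figures and are illustrative rather than the published confusion matrix.

```python
# Computing sensitivity, specificity, and PPV for a claims-based DCIS flag
# against registry "gold standard" labels (counts below are illustrative back-calculations).

def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    sensitivity = tp / (tp + fn)          # of true DCIS cases, how many the algorithm flags
    specificity = tn / (tn + fp)          # of invasive cases, how many it correctly leaves unflagged
    ppv = tp / (tp + fp)                  # of flagged cases, how many are truly DCIS
    return sensitivity, specificity, ppv

sens, spec, ppv = classification_metrics(tp=983, fp=602, fn=261, tn=5061)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}")
```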

  8. Development and Evaluation of an Algorithm for the Computer-Assisted Segmentation of the Human Hypothalamus on 7-Tesla Magnetic Resonance Images

    Science.gov (United States)

    Schmidt, Laura; Anwander, Alfred; Strauß, Maria; Trampel, Robert; Bazin, Pierre-Louis; Möller, Harald E.; Hegerl, Ulrich; Turner, Robert; Geyer, Stefan

    2013-01-01

    Post mortem studies have shown volume changes of the hypothalamus in psychiatric patients. With 7T magnetic resonance imaging this effect can now be investigated in vivo in detail. To benefit from the sub-millimeter resolution requires an improved segmentation procedure. The traditional anatomical landmarks of the hypothalamus were refined using 7T T1-weighted magnetic resonance images. A detailed segmentation algorithm (unilateral hypothalamus) was developed for colour-coded, histogram-matched images, and evaluated in a sample of 10 subjects. Test-retest and inter-rater reliabilities were estimated in terms of intraclass-correlation coefficients (ICC) and Dice's coefficient (DC). The computer-assisted segmentation algorithm ensured test-retest reliabilities of ICC≥.97 (DC≥96.8) and inter-rater reliabilities of ICC≥.94 (DC = 95.2). There were no significant volume differences between the segmentation runs, raters, and hemispheres. The estimated volumes of the hypothalamus lie within the range of previous histological and neuroimaging results. We present a computer-assisted algorithm for the manual segmentation of the human hypothalamus using T1-weighted 7T magnetic resonance imaging. Providing very high test-retest and inter-rater reliabilities, it outperforms former procedures established at 1.5T and 3T magnetic resonance images and thus can serve as a gold standard for future automated procedures. PMID:23935821

  9. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    Science.gov (United States)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  10. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    Energy Technology Data Exchange (ETDEWEB)

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
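
    For orientation, the sketch below shows one way such an open-loop descent plan can be approximated: given a constant descent rate and ground speed, it estimates the distance before the metering fix at which to start down. The simplifications (no wind gradient, Mach/airspeed transition, or temperature correction) and the example numbers are illustrative assumptions, not the calculator program described in the reports.

```python
# Illustrative top-of-descent estimate for a constant-rate, constant-speed descent.
# Simplified: ignores wind gradient, the Mach/airspeed transition, and nonstandard temperature,
# all of which the flight-tested algorithm accounted for.

def top_of_descent_distance_nm(cruise_alt_ft: float,
                               fix_alt_ft: float,
                               descent_rate_fpm: float,
                               ground_speed_kt: float) -> float:
    """Distance before the metering fix (nautical miles) at which to start the descent."""
    altitude_to_lose_ft = cruise_alt_ft - fix_alt_ft
    descent_time_min = altitude_to_lose_ft / descent_rate_fpm
    return ground_speed_kt * (descent_time_min / 60.0)

# Example: descend from FL350 to a 10,000 ft metering fix at 2,000 fpm and 420 kt ground speed.
print(round(top_of_descent_distance_nm(35_000, 10_000, 2_000, 420), 1), "NM before the fix")
```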

  11. Development and validation of risk prediction algorithms to estimate future risk of common cancers in men and women: prospective cohort study

    Science.gov (United States)

    Hippisley-Cox, Julia; Coupland, Carol

    2015-01-01

    Objective To derive and validate a set of clinical risk prediction algorithms to estimate the 10-year risk of 11 common cancers. Design Prospective open cohort study using routinely collected data from 753 QResearch general practices in England. We used 565 practices to develop the scores and 188 for validation. Subjects 4.96 million patients aged 25–84 years in the derivation cohort; 1.64 million in the validation cohort. Patients were free of the relevant cancer at baseline. Methods Cox proportional hazards models in the derivation cohort to derive 10-year risk algorithms. Risk factors considered included age, ethnicity, deprivation, body mass index, smoking, alcohol, previous cancer diagnoses, family history of cancer, relevant comorbidities and medication. Measures of calibration and discrimination in the validation cohort. Outcomes Incident cases of blood, breast, bowel, gastro-oesophageal, lung, oral, ovarian, pancreas, prostate, renal tract and uterine cancers. Cancers were recorded on any one of four linked data sources (general practitioner (GP), mortality, hospital or cancer records). Results We identified 228 241 incident cases during follow-up of the 11 types of cancer. Of these 25 444 were blood; 41 315 breast; 32 626 bowel; 12 808 gastro-oesophageal; 32 187 lung; 4811 oral; 6635 ovarian; 7119 pancreatic; 35 256 prostate; 23 091 renal tract; 6949 uterine cancers. The lung cancer algorithm had the best performance with an R2 of 64.2%; D statistic of 2.74; receiver operating characteristic curve statistic of 0.91 in women. The sensitivity for the top 10% of women at highest risk of lung cancer was 67%. Performance of the algorithms in men was very similar to that for women. Conclusions We have developed and validated prediction models to quantify the absolute risk of 11 common cancers. They can be used to identify patients at high risk of cancers for prevention or further assessment. The algorithms could be integrated into clinical

  12. Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms

    Directory of Open Access Journals (Sweden)

    Jose R. Celaya

    2013-01-01

    Full Text Available As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator.

  13. Development of an EORTC-8D utility algorithm for Sri Lanka.

    Science.gov (United States)

    Kularatna, Sanjeewa; Whitty, Jennifer A; Johnson, Newell W; Jayasinghe, Ruwan; Scuffham, Paul A

    2015-04-01

    Currently there are no reported cancer-specific health state valuations in low- and middle-income countries using a validated preference-based measure. The EORTC-8D, a cancer-specific preference-based measure, has 81,920 health states and is useful for economic evaluations in cancer care. The aim of this study was to develop a utility algorithm to value EORTC-8D health states using preferences derived from a representative population sample in Sri Lanka. The time-tradeoff method was used to elicit preferences from a general population sample of 780 in Sri Lanka. A block design of 85 health states, with a time horizon of 10 years, was used for the direct valuation. Data were analyzed using generalized least squares with random effects. All respondents with at least one logical inconsistency were excluded from the analysis. After logical inconsistencies were excluded, 4520 observations were available from 717 respondents for the analysis. The preferred model specified main effects with an interaction term for any level 4 or worse descriptor within a health state. Worsening of physical functioning had a substantially greater utility decrement than any other dimension in this population. Limitations are that the data collection could not include the whole country and that females formed a large part of the sample. Preference weights for EORTC-8D health states for Sri Lanka have been derived: These will be very useful in economic evaluations of cancer-related interventions in a range of low- and middle-income countries. © The Author(s) 2014.

  14. Development of hybrid fog detection algorithm (FDA) using satellite and ground observation data for nighttime

    Science.gov (United States)

    Kim, So-Hyeong; Han, Ji-Hae; Suh, Myoung-Seok

    2017-04-01

    In this study, we developed a hybrid fog detection algorithm (FDA) using AHI/Himawari-8 satellite and ground observation data for nighttime. For detecting fog at night, the Dual Channel Difference (DCD) method, based on the emissivity difference between SWIR and IR1, is the most widely used approach. DCD is good at discriminating fog from other scenes (middle/high clouds, clear sea and land). However, it is difficult to distinguish fog from low clouds. In order to separate low clouds from the pixels that satisfy the fog thresholds in the DCD test, we conducted supplementary tests such as the normalized local standard deviation (NLSD) of BT11 and the difference between fog top temperature (BT11) and air temperature (Ta) from NWP data (SST from OSTIA data). These tests are based on the larger homogeneity of fog tops compared with low cloud tops and the similarity of fog top temperature and Ta (SST). Threshold values for the three tests were optimized through ROC analysis for the selected fog cases. In addition, considering the spatial continuity of fog, post-processing was performed to detect missed pixels, in particular at the edge of fog or for sub-pixel-size fog. The final fog detection results are presented as fog probability (0-100%). Validation was conducted by comparing the fog detection probability with ground-observed visibility data from KMA. The validation results showed that POD and FAR ranged from 0.70-0.94 and 0.45-0.72, respectively. The quantitative validation and visual inspection indicate that the current FDA has a tendency to over-detect fog, so further work is needed to reduce the FAR. In the future, we will also validate sea fog using CALIPSO data.
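
    The layered threshold logic described above can be sketched as follows; the threshold values, window size, and the equal-weight combination of the three tests into a probability are assumptions for illustration, not the ROC-optimized settings of the study.

```python
import numpy as np

# Illustrative nighttime fog test combining a dual-channel difference (DCD) test,
# a BT11 local-homogeneity (NLSD) test, and a BT11-minus-air-temperature test.
# Thresholds are placeholders, not the ROC-optimized values from the study.

def fog_probability(bt_swir, bt11, t_air,
                    dcd_thresh=-2.0, nlsd_thresh=0.5, dT_thresh=3.0, window=3):
    bt_swir, bt11, t_air = map(np.asarray, (bt_swir, bt11, t_air))
    dcd_test = (bt_swir - bt11) < dcd_thresh            # emissivity difference between SWIR and IR1

    # Local standard deviation of BT11 over a small window (fog tops are homogeneous).
    pad = window // 2
    padded = np.pad(bt11, pad, mode="edge")
    nlsd = np.empty_like(bt11, dtype=float)
    for i in range(bt11.shape[0]):
        for j in range(bt11.shape[1]):
            nlsd[i, j] = padded[i:i + window, j:j + window].std()
    homogeneity_test = nlsd < nlsd_thresh

    temp_test = np.abs(bt11 - t_air) < dT_thresh        # fog-top temperature close to air temperature

    # Combine the three binary tests into a simple probability (equal weights, illustrative).
    return (dcd_test.astype(float) + homogeneity_test + temp_test) / 3.0

# Example on a tiny synthetic scene.
bt_swir = np.full((4, 4), 270.0); bt11 = np.full((4, 4), 273.0); t_air = np.full((4, 4), 274.0)
print(fog_probability(bt_swir, bt11, t_air))
```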

  15. Diagnosis and treatment of acute ankle injuries: development of an evidence-based algorithm

    Directory of Open Access Journals (Sweden)

    Hans Polzer

    2012-01-01

    Full Text Available Acute ankle injuries are among the most common injuries in emergency departments. However, a standardized examination and an evidence-based treatment are missing. Therefore, the aim of this study was to systematically search the current literature, classify the evidence and develop an algorithm for the diagnosis and treatment of acute ankle injuries. We systematically searched PubMed and the Cochrane Database for randomized controlled trials, meta-analyses, systematic reviews or, if applicable, observational studies and classified them according to their level of evidence. According to the currently available literature, the following recommendations are given. The Ottawa Ankle/Foot Rule should be applied in order to rule out fractures. Physical examination is sufficient for diagnosing injuries to the lateral ligament complex. Classification into stable and unstable injuries is applicable and of clinical importance. The squeeze, crossed-leg and external rotation tests are indicative of injuries of the syndesmosis. Magnetic resonance imaging is recommended to verify such injuries. Stable ankle sprains have a good prognosis, while for unstable ankle sprains conservative treatment is at least as effective as operative treatment without carrying possible complications. Early functional treatment leads to the fastest recovery and the lowest rate of re-injury. Supervised rehabilitation reduces residual symptoms and re-injuries. Taking these recommendations into account, we here present an applicable and evidence-based step-by-step decision pathway for the diagnosis and treatment of acute ankle injuries, which can be implemented in any emergency department or doctor’s practice. It provides quality assurance for the patient and confidence for the attending physician.

  16. Prepatellar and olecranon bursitis: literature review and development of a treatment algorithm.

    Science.gov (United States)

    Baumbach, Sebastian F; Lobo, Christopher M; Badyine, Ilias; Mutschler, Wolf; Kanz, Karl-Georg

    2014-03-01

    Olecranon bursitis and prepatellar bursitis are common entities, with a minimum annual incidence of 10/100,000, predominantly affecting male patients (80 %) aged 40-60 years. Approximately 1/3 of cases are septic (SB) and 2/3 of cases are non-septic (NSB), with substantial variations in treatment regimens internationally. The aim of the study was the development of a literature review-based treatment algorithm for prepatellar and olecranon bursitis. Following a systematic review of PubMed, the Cochrane Library, textbooks of emergency medicine and surgery, and a manual reference search, 52 relevant papers were identified. The initial differentiation between SB and NSB was based on clinical presentation, bursal aspirate, and blood sampling analysis. Physical findings suggesting SB were fever >37.8 °C, a prebursal temperature difference greater than 2.2 °C, and skin lesions. Relevant findings for the bursal aspirate were purulent aspirate, a fluid-to-serum glucose ratio <50 %, a cell count >3,000 cells/μl, polymorphonuclear cells >50 %, positive Gram staining, and positive culture. General treatment measures for SB and NSB consist of bursal aspiration, NSAIDs, and PRICE. For patients with confirmed NSB and high athletic or occupational demands, intrabursal steroid injection may be performed. In the case of SB, antibiotic therapy should be initiated. Surgical treatment, i.e., incision, drainage, or bursectomy, should be restricted to severe, refractory, or chronic/recurrent cases. The available evidence did not support the central European concept of immediate bursectomy in cases of SB. A conservative treatment regimen should be pursued, following bursal aspirate-based differentiation between SB and NSB.

  17. Diagnosis and treatment of acute ankle injuries: development of an evidence-based algorithm

    Science.gov (United States)

    Polzer, Hans; Kanz, Karl Georg; Prall, Wolf Christian; Haasters, Florian; Ockert, Ben; Mutschler, Wolf; Grote, Stefan

    2011-01-01

    Acute ankle injuries are among the most common injuries in emergency departments. However, there are still no standardized examination procedures or evidence-based treatment. Therefore, the aim of this study was to systematically search the current literature, classify the evidence, and develop an algorithm for the diagnosis and treatment of acute ankle injuries. We systematically searched PubMed and the Cochrane Database for randomized controlled trials, meta-analyses, systematic reviews or, if applicable, observational studies and classified them according to their level of evidence. According to the currently available literature, the following recommendations have been formulated: i) the Ottawa Ankle/Foot Rule should be applied in order to rule out fractures; ii) physical examination is sufficient for diagnosing injuries to the lateral ligament complex; iii) classification into stable and unstable injuries is applicable and of clinical importance; iv) the squeeze-, crossed leg- and external rotation test are indicative for injuries of the syndesmosis; v) magnetic resonance imaging is recommended to verify injuries of the syndesmosis; vi) stable ankle sprains have a good prognosis while for unstable ankle sprains, conservative treatment is at least as effective as operative treatment without the related possible complications; vii) early functional treatment leads to the fastest recovery and the least rate of reinjury; viii) supervised rehabilitation reduces residual symptoms and re-injuries. Taken these recommendations into account, we present an applicable and evidence-based, step by step, decision pathway for the diagnosis and treatment of acute ankle injuries, which can be implemented in any emergency department or doctor's practice. It provides quality assurance for the patient and promotes confidence in the attending physician. PMID:22577506

  18. Developing algorithms for healthcare insurers to systematically monitor surgical site infection rates

    Directory of Open Access Journals (Sweden)

    Livingston James M

    2007-06-01

    Full Text Available Abstract Background Claims data provide rapid indicators of SSIs for coronary artery bypass surgery and have been shown to successfully rank hospitals by SSI rates. We now operationalize this method for use by payers without transfer of protected health information, or any insurer data, to external analytic centers. Results We performed a descriptive study testing the operationalization of software for payers to routinely assess surgical infection rates among hospitals where enrollees receive cardiac procedures. We developed five SAS programs and a user manual for direct use by health plans and payers. The manual and programs were refined following provision to two national insurers, who applied the programs to claims databases following instructions on data preparation, data validation, analysis, and verification and interpretation of program output. A final set of programs and a user manual successfully guided health plan programmer analysts in applying the SSI algorithms to claims databases. Validation steps identified common problems such as incomplete preparation of data, missing data, insufficient sample size, and other issues that might result in program failure. Several user prompts enabled health plans to select time windows, strata such as insurance type, and the threshold number of procedures performed by a hospital before inclusion in regression models assessing relative SSI rates among hospitals. No health plan data were transferred to outside entities. Programs, on default settings, provided descriptive tables of SSI indicators stratified by hospital, insurer type, SSI indicator (inpatient, outpatient, antibiotic), and six-month period. Regression models provided rankings of hospital SSI indicator rates by quartiles, adjusted for comorbidities. Programs are publicly available without charge. Conclusion We describe a free, user-friendly software package that enables payers to routinely assess and identify hospitals with potentially high SSI

  19. DEVELOPMENT OF THE SOCIAL TENSION RISK PREDICTING ALGORITHM IN THE POPULATION OF CERTAIN REGIONS OF RUSSIA

    Directory of Open Access Journals (Sweden)

    A. B. Mulik

    2017-01-01

    Full Text Available Aim. The aim of the study was to develop approaches to predicting the risk of social tension among the population of regions of the Russian Federation. Methods. Theoretical studies were based on the analysis of cartographic material from the National Atlas of Russia. Geo-information technologies were used to model the environmental load on the territory of certain regions of Russia. Experimental studies were performed using standard methods of psycho-physiological testing involving 336 persons aged 18-23 years of both sexes. Results. Total solar radiation was identified as the fundamental, biologically significant environmental factor differentiating the territory of the Russian Federation into areas with discrete physical effects. Model regions (Republic of Crimea, Rostov and Saratov regions) were subsequently selected based on the principle of minimizing regional differences in the associated factors of environmental pressure per person. Experimental studies revealed persistent systemic relationships between phenotypic characteristics and a person's tendency toward neuropsychic tension. The risk of social tension for the population of a study area is predicted when more than two thirds of the sample representatives fall within the range of a high level of general non-specific reactivity of the organism. Main conclusions. The expediency of using northern latitude as an integral index for differentiating areas by the severity of the physical environmental factors affecting human activity is justified. The possibility of applying the level of general non-specific reactivity of the organism as a phenotypic marker of social tension risk is identified. An algorithm for predicting the risk of social tension among populations compactly living in certain territories of the Russian Federation was designed.

  20. Assessment of numerical optimization algorithms for the development of molecular models

    Science.gov (United States)

    Hülsmann, Marco; Vrabec, Jadran; Maaß, Astrid; Reith, Dirk

    2010-05-01

    In the pursuit of studying the parameterization problem of molecular models from a broad perspective, this paper focuses on an isolated aspect: it is investigated with which algorithms parameters can best be optimized simultaneously to different types of target data (experimental or theoretical) over a range of temperatures with the lowest number of iteration steps. As an example, nitrogen is regarded, where the intermolecular interactions are well described by the quadrupolar two-center Lennard-Jones model that has four state-independent parameters. The target data comprise experimental values for saturated liquid density, enthalpy of vaporization, and vapor pressure. For the purpose of testing algorithms, molecular simulations are entirely replaced by fit functions of vapor-liquid equilibrium (VLE) properties from the literature so that the diverse numerical optimization algorithms investigated, being state-of-the-art gradient-based methods with very good convergence properties, can be assessed efficiently. Additionally, artificial noise was superimposed onto the VLE fit results to evaluate the numerical optimization algorithms so that the calculation of molecular simulation data was mimicked. Large differences in the behavior of the individual optimization algorithms are found, and some are identified as capable of handling noisy function values.
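
    A minimal sketch of the described setup, assuming a cheap surrogate in place of the VLE fit functions: model parameters are fitted simultaneously to temperature-dependent target values while artificial noise is superimposed on the "simulated" results. The surrogate function, noise level, and optimizer choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
temperatures = np.linspace(70.0, 110.0, 5)               # K, illustrative range

# "Experimental" target data generated from known reference parameters (surrogate, not real VLE data).
def surrogate_property(params, T):
    a, b = params
    return a * np.exp(-b / T)                             # cheap stand-in for a VLE fit function

reference = surrogate_property((1.0, 50.0), temperatures)

def residuals(params, noise_level=0.01):
    # Artificial noise mimics the statistical uncertainty of molecular simulation results.
    simulated = surrogate_property(params, temperatures)
    simulated = simulated * (1.0 + noise_level * rng.standard_normal(simulated.shape))
    return (simulated - reference) / reference            # relative deviations over all temperatures

result = least_squares(residuals, x0=[0.5, 30.0], diff_step=0.05)
print("fitted parameters:", result.x, "after", result.nfev, "function evaluations")
```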

  1. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops....... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power....

  2. Development of GPS Receiver Kalman Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications

    Science.gov (United States)

    2016-06-01

    ...determine instantaneous estimates of receiver position and then goes on to develop three Kalman filter based estimators, which use stationary receiver...used in actual GPS receivers, and cover a wide range of applications. While the standard form of the Kalman filter, of which the three filters just
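
    As a generic illustration of the kind of estimator the report develops, and not its actual filters, the sketch below implements a constant-velocity Kalman filter smoothing noisy one-dimensional position fixes; the motion model and noise covariances are placeholder assumptions.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for noisy 1-D position fixes (illustrative only).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition: position, velocity
H = np.array([[1.0, 0.0]])                  # we observe position only
Q = 0.01 * np.eye(2)                        # process noise covariance (assumed)
R = np.array([[4.0]])                       # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])                # initial state estimate
P = np.eye(2) * 10.0                        # initial state covariance

rng = np.random.default_rng(1)
true_positions = 2.0 * np.arange(20)                      # target moving at 2 units/step
measurements = true_positions + rng.normal(0, 2.0, 20)    # noisy fixes

for z in measurements:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print("final position/velocity estimate:", x.ravel())
```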

  3. Development and validation of QRISK3 risk prediction algorithms to estimate future risk of cardiovascular disease: prospective cohort study

    OpenAIRE

    Hippisley-Cox, Julia; Coupland, Carol; Brindle, Peter

    2017-01-01

    Objectives: To develop and validate updated QRISK3 prediction algorithms to estimate the 10 year risk of cardiovascular disease in women and men accounting for potential new risk factors. Design: Prospective open cohort study. Setting: General practices in England providing data for the QResearch database. Participants: 1309 QResearch general practices in England: 981 practices were used to develop the scores and a separate set of 328 practices were used to validate the s...

  4. The Performance and Development of the Inner Detector Trigger Algorithms at ATLAS for LHC Run 2

    CERN Document Server

    Sowden, Benjamin Charles; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly reimplemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is provided. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2 for the HLT. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 but with no significant reduction in efficiency. The performance and timing of the algorithms for numerous physics signatures in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performan...

  5. Algorithms Development in Detection of the Gelatinization Process during Enzymatic ‘Dodol’ Processing

    Directory of Open Access Journals (Sweden)

    Azman Hamzah

    2013-09-01

    Full Text Available Computer vision systems have found wide application in the food processing industry to perform quality evaluation. The systems enable the replacement of human inspectors for the evaluation of a variety of quality attributes. This paper describes the implementation of the Fast Fourier Transform and Kalman filtering algorithms to detect the glutinous rice flour slurry (GRFS) gelatinization in an enzymatic 'dodol' processing. The onset of the GRFS gelatinization is critical in determining the quality of an enzymatic 'dodol'. Combinations of these two algorithms were able to detect the gelatinization of the GRFS. The result shows that the gelatinization of the GRFS occurred in the time range of 11.75 minutes to 14.75 minutes for 24 batches of processing. This paper highlights the capability of computer vision, using our proposed algorithms, in the monitoring and control of an enzymatic 'dodol' processing via image processing technology.

  6. Development of a meta-algorithm for guiding primary care encounters for patients with multimorbidity using evidence-based and case-based guideline development methodology.

    Science.gov (United States)

    Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin

    2017-06-22

    The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps: 1. Designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop. 2. Based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines was prepared. The recommendations were prioritised according to the clinical and psychosocial characteristics of the case vignettes. 3. Case vignettes along with the respective guideline recommendations were validated and specifically commented on by an external panel of practicing general practitioners (GPs). 4. Guideline recommendations and experts' opinions were summarised as case-specific management recommendations (N-of-one guidelines). 5. Healthcare preferences of patients with multimorbidity were elicited from a systematic literature review and supplemented with information from qualitative interviews. 6. All N-of-one guidelines were analysed using pattern recognition to identify common decision nodes and care elements. These elements were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter of a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and hereby offer guidance to the practitioner. Contrary to simple, symptom-oriented algorithms, the meta-algorithm

  7. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
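
    For orientation, the core EM (MLEM) update underlying such reconstructions can be written compactly; in the sketch below a tiny dense system matrix stands in for the ray-tracing fan-beam projector, which is an illustrative simplification rather than the implementation described above.

```python
import numpy as np

# Minimal MLEM reconstruction: x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k)).
# A tiny dense matrix stands in for the ray-tracing fan-beam projector (illustration only).

rng = np.random.default_rng(0)
n_pixels, n_bins = 16, 24
A = rng.uniform(0.0, 1.0, size=(n_bins, n_pixels))        # system (projection) matrix
x_true = rng.uniform(0.5, 2.0, size=n_pixels)              # "phantom" activity
y = rng.poisson(A @ x_true).astype(float)                  # noisy projection data

x = np.ones(n_pixels)                                       # uniform initial image
sensitivity = A.T @ np.ones(n_bins)                         # A^T 1

for _ in range(50):
    expected = A @ x
    ratio = np.divide(y, expected, out=np.zeros_like(y), where=expected > 0)
    x = x / sensitivity * (A.T @ ratio)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```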

  8. Developing a Random Forest Algorithm for MODIS Global Burned Area Classification

    Directory of Open Access Journals (Sweden)

    Rubén Ramo

    2017-11-01

    Full Text Available This paper aims to develop a global burned area (BA) algorithm for MODIS BRDF-corrected images based on the Random Forest (RF) classifier. Two RF models were generated, including: (1) all MODIS reflective bands; and (2) only the red (R) and near infrared (NIR) bands. Active fire information, vegetation indices and auxiliary variables were taken into account as well. Both RF models were trained using a statistically designed sample of 130 reference sites, which took into account the global diversity of fire conditions. For each site, fire perimeters were obtained from multitemporal pairs of Landsat TM/ETM+ images acquired in 2008. Those fire perimeters were used to extract burned and unburned areas to train the RF models. Using the standard MCD43A4 resolution (500 × 500 m), the training dataset included 48,365 burned pixels and 6,293,205 unburned pixels. Different combinations of number of trees and number of parameters were tested. The final RF models included 600 trees and 5 attributes. The RF full model (considering all bands) provided a balanced accuracy of 0.94, while the RF RNIR model had 0.93. As a first assessment of these RF models, they were used to classify daily MCD43A4 images in three test sites for three consecutive years (2006–2008). The selected sites included different ecosystems: tropical (Australia), boreal (Canada) and temperate (California), and extended coverage (totaling more than 2,500,000 km2). Results from both RF models for those sites were compared with national fire perimeters, as well as with two existing BA MODIS products: the MCD45 and MCD64. Considering all three years and three sites, the commission error for the RF full model was 0.16, with an omission error of 0.23. For the RF RNIR model, these errors were 0.19 and 0.21, respectively. The existing MODIS BA products had lower commission errors, but higher omission errors (0.09 and 0.33 for the MCD45 and 0.10 and 0.29 for the MCD64) than those obtained with the RF models, and
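
    A minimal sketch of this kind of two-class RF training, assuming scikit-learn and synthetic reflectance features in place of the MCD43A4 bands and Landsat-derived labels; the toy labeling rule and the forest settings below only loosely mirror the abstract and are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-pixel features (e.g., red and NIR reflectance and a vegetation index);
# in the study these came from MCD43A4 composites with Landsat-derived burned/unburned labels.
rng = np.random.default_rng(0)
n = 5000
red = rng.uniform(0.0, 0.4, n)
nir = rng.uniform(0.0, 0.6, n)
ndvi = (nir - red) / (nir + red + 1e-6)
X = np.column_stack([red, nir, ndvi])
y = (ndvi + rng.normal(0, 0.1, n) < 0.15).astype(int)     # 1 = "burned" (toy rule; real data are far more imbalanced)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# 600 trees with a small number of candidate attributes per split, loosely mirroring the final RF models.
clf = RandomForestClassifier(n_estimators=600, max_features=2, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("balanced accuracy:", round(balanced_accuracy_score(y_test, clf.predict(X_test)), 3))
```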

  9. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  10. Development and validation of a prediction algorithm for the onset of common mental disorders in a working population.

    Science.gov (United States)

    Fernandez, Ana; Salvador-Carulla, Luis; Choi, Isabella; Calvo, Rafael; Harvey, Samuel B; Glozier, Nicholas

    2018-01-01

    Common mental disorders are the most common reason for long-term sickness absence in most developed countries. Prediction algorithms for the onset of common mental disorders may help target indicated work-based prevention interventions. We aimed to develop and validate a risk algorithm to predict the onset of common mental disorders at 12 months in a working population. We conducted a secondary analysis of the Household, Income and Labour Dynamics in Australia Survey, a longitudinal, nationally representative household panel in Australia. Data from the 6189 working participants who did not meet the criteria for a common mental disorder at baseline were non-randomly split into training and validation databases, based on state of residence. Common mental disorders were assessed with the mental component score of the 36-Item Short Form Health Survey questionnaire (score ⩽45). Risk algorithms were constructed following recommendations made by the Transparent Reporting of a multivariable prediction model for Prevention Or Diagnosis statement. Different risk factors were identified among women and men for the final risk algorithms. In the training data, the model for women had a C-index of 0.73 and effect size (Hedges' g) of 0.91. In men, the C-index was 0.76 and the effect size was 1.06. In the validation data, the C-index was 0.66 for women and 0.73 for men, with positive predictive values of 0.28 and 0.26, respectively. Conclusion: It is possible to develop an algorithm with good discrimination for identifying overall and modifiable risks of the onset of common mental disorders among working men. Such models have the potential to change the way that prevention of common mental disorders at the workplace is conducted, but different models may be required for women.

  11. Development and validation of a computerized algorithm for International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI)

    DEFF Research Database (Denmark)

    Walden, K; Bélanger, L M; Biering-Sørensen, F

    2016-01-01

    STUDY DESIGN: Validation study. OBJECTIVES: To describe the development and validation of a computerized application of the international standards for neurological classification of spinal cord injury (ISNCSCI). SETTING: Data from acute and rehabilitation care. METHODS: The Rick Hansen Institute-ISNCSCI Algorithm (RHI-ISNCSCI Algorithm) was developed based on the 2011 version of the ISNCSCI and the 2013 version of the worksheet. International experts developed the design and logic with a focus on usability and features to standardize the correct classification of challenging cases. A five-phased process...... a standardized method to accurately derive the level and severity of SCI from the raw data of the ISNCSCI examination. The web interface assists in maximizing usability while minimizing the impact of human error in classifying SCI. SPONSORSHIP: This study is sponsored by the Rick Hansen Institute and supported...

  12. Development and Verification of the Tire/Road Friction Estimation Algorithm for Antilock Braking System

    Directory of Open Access Journals (Sweden)

    Jian Zhao

    2014-01-01

    Full Text Available Road friction information is very important for vehicle active braking control systems such as ABS, ASR, or ESP. It is not easy to estimate the tire/road friction forces and coefficient accurately because of the nonlinearity of the system, parameter uncertainties, and signal noises. In this paper, a robust and effective tire/road friction estimation algorithm for ABS is proposed, and its performance is further discussed by simulation and experiment. The tire forces were observed by a discrete Kalman filter, and the road friction coefficient was then estimated by the recursive least squares method. The proposed algorithm was analysed and verified by simulation and road test. A sliding-mode-based ABS with smooth wheel slip ratio control and a threshold-based ABS using pulse pressure control with significant fluctuations were used for the simulation. Finally, road tests were carried out in both winter and summer with a car equipped with the same threshold-based ABS, and the algorithm was evaluated on different road surfaces. The results show that the proposed algorithm can identify the variation of road conditions with considerable accuracy and response speed.
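
    To make the estimation chain concrete, the sketch below applies a recursive least squares update to estimate a single friction coefficient from noisy longitudinal force and normal load signals via F_x ≈ μ·F_z; the signal model, forgetting factor, and noise levels are illustrative assumptions, much simpler than the vehicle and tire models used in the paper.

```python
import numpy as np

# Recursive least squares estimate of the road friction coefficient mu from F_x ≈ mu * F_z.
# Signals are synthetic; the paper combines this with Kalman-filtered tire force observations.

rng = np.random.default_rng(0)
mu_true = 0.35                                    # e.g., a low-friction (winter) surface
n_steps = 200
Fz = 4000.0 + 200.0 * rng.standard_normal(n_steps)         # normal load (N), noisy
Fx = mu_true * Fz + 50.0 * rng.standard_normal(n_steps)    # longitudinal force (N), noisy

theta = 0.8                                       # initial guess for mu
P = 1.0                                           # estimation covariance
lam = 0.98                                        # forgetting factor to track slowly changing surfaces

for k in range(n_steps):
    phi = Fz[k]                                   # regressor
    K = P * phi / (lam + phi * P * phi)           # gain
    theta = theta + K * (Fx[k] - phi * theta)     # update estimate with prediction error
    P = (P - K * phi * P) / lam

print(f"estimated mu = {theta:.3f} (true {mu_true})")
```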

  13. Development and Evaluation of Algorithms to Improve Small- and Medium-Size Commercial Building Operations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Woohyun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lutes, Robert G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Underhill, Ronald M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    Small- and medium-sized (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and account for over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability and no monitoring or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically utilize packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of the existing commercial building stock in the U.S. for many reasons, chief among them mitigating climate change impacts. Studies have shown that managing set points and schedules of the RTUs can result in up to 20% energy and cost savings. Another problem associated with RTUs is short-cycling, where an RTU goes through ON and OFF cycles too frequently. Excessive cycling can lead to excessive wear and premature failure of the compressor or its components. Short cycling can also result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. In addition, SMBs use time-of-day scheduling to start the RTUs before the building is occupied and shut them off when it is unoccupied. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby leading to persistent building operations, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. This report describes three algorithms, for detecting the zone set point temperature, the RTU cycling rate, and the occupancy schedule, that can be deployed on this low-cost infrastructure. These algorithms require only the zone temperature data for detection. The algorithms have been tested and validated using
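
    One of the three algorithms, detection of the RTU cycling rate from zone temperature alone, can be illustrated with a simple sketch that counts cooling onsets of the smoothed temperature signal as cycles; the smoothing window, reversal threshold, and synthetic data are assumptions, not the PNNL implementation.

```python
import numpy as np

# Illustrative cycling-rate detection from zone temperature only:
# each cooling onset of the (smoothed) temperature is counted as one RTU cycle.

def cycles_per_hour(temp_f, sample_minutes=1, smooth_window=5, min_swing_f=0.2):
    temp = np.convolve(temp_f, np.ones(smooth_window) / smooth_window, mode="valid")
    direction, reversals = 0, 0
    last_extreme = temp[0]
    for t in temp[1:]:
        if direction >= 0 and t < last_extreme - min_swing_f:   # started cooling (RTU ON)
            reversals += 1
            direction = -1
        elif direction <= 0 and t > last_extreme + min_swing_f: # started drifting up (RTU OFF)
            direction = 1
        last_extreme = max(t, last_extreme) if direction >= 0 else min(t, last_extreme)
    hours = len(temp_f) * sample_minutes / 60.0
    return reversals / hours

# Synthetic zone temperature oscillating around a 72 F set point roughly every 10 minutes.
minutes = np.arange(0, 180)
zone_temp = 72.0 + 0.75 * np.sin(2 * np.pi * minutes / 10.0)
print(round(cycles_per_hour(zone_temp), 1), "cycles per hour")
```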

  14. Development of a rapid lateral flow immunoassay test for detection of exosomes previously enriched from cell culture medium and body fluids.

    Science.gov (United States)

    Oliveira-Rodríguez, Myriam; López-Cobo, Sheila; Reyburn, Hugh T; Costa-García, Agustín; López-Martín, Soraya; Yáñez-Mó, María; Cernuda-Morollón, Eva; Paschen, Annette; Valés-Gómez, Mar; Blanco-López, Maria Carmen

    2016-01-01

    Exosomes are cell-secreted nanovesicles (40-200 nm) that represent a rich source of novel biomarkers in the diagnosis and prognosis of certain diseases. Despite the increasingly recognized relevance of these vesicles as biomarkers, their detection has been limited due in part to current technical challenges in the rapid isolation and analysis of exosomes. The complexity of the development of analytical platforms relies on the heterogeneous composition of the exosome membrane. One of the most attractive tests is the immunochromatographic strip, which allows rapid detection by unskilled operators. We have successfully developed a novel lateral flow immunoassay (LFIA) for the detection of exosomes based on the use of tetraspanins as targets. We have applied this platform for the detection of exosomes purified from different sources: cell culture supernatants, human plasma and urine. As proof of concept, we explored the analytical potential of this LFIA platform to accurately quantify exosomes purified from a human metastatic melanoma cell line. The one-step assay can be completed in 15 min, with a limit of detection of 8.54×10^5 exosomes/µL when a blend of anti-CD9 and anti-CD81 was selected as capture antibodies and anti-CD63 labelled with gold nanoparticles as detection antibody. Based on our results, this platform could be well suited to be used as a rapid exosome quantification tool, with promising diagnostic applications, bearing in mind that the detection of exosomes from different sources may require adaptation of the analytical settings to their specific composition.

  15. Development of a rapid lateral flow immunoassay test for detection of exosomes previously enriched from cell culture medium and body fluids

    Directory of Open Access Journals (Sweden)

    Myriam Oliveira-Rodríguez

    2016-08-01

    Full Text Available Exosomes are cell-secreted nanovesicles (40–200 nm) that represent a rich source of novel biomarkers in the diagnosis and prognosis of certain diseases. Despite the increasingly recognized relevance of these vesicles as biomarkers, their detection has been limited due in part to current technical challenges in the rapid isolation and analysis of exosomes. The complexity of the development of analytical platforms relies on the heterogeneous composition of the exosome membrane. One of the most attractive tests is the immunochromatographic strip, which allows rapid detection by unskilled operators. We have successfully developed a novel lateral flow immunoassay (LFIA) for the detection of exosomes based on the use of tetraspanins as targets. We have applied this platform for the detection of exosomes purified from different sources: cell culture supernatants, human plasma and urine. As proof of concept, we explored the analytical potential of this LFIA platform to accurately quantify exosomes purified from a human metastatic melanoma cell line. The one-step assay can be completed in 15 min, with a limit of detection of 8.54×10^5 exosomes/µL when a blend of anti-CD9 and anti-CD81 was selected as capture antibodies and anti-CD63 labelled with gold nanoparticles as detection antibody. Based on our results, this platform could be well suited to be used as a rapid exosome quantification tool, with promising diagnostic applications, bearing in mind that the detection of exosomes from different sources may require adaptation of the analytical settings to their specific composition.

  16. Development and Evaluation of an Automated Machine Learning Algorithm for In-Hospital Mortality Risk Adjustment Among Critical Care Patients.

    Science.gov (United States)

    Delahanty, Ryan J; Kaufman, David; Jones, Spencer S

    2018-02-06

    Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals), and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death
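
    As a small illustration of the reported evaluation metrics, and not of the Risk of Inpatient Death model itself, the sketch below fits a logistic model on synthetic features and computes the area under the ROC curve and a plain (unadjusted) Brier score on a held-out split; the features, labels, and model are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic stand-in for EHR-derived features and in-hospital mortality labels (illustration only).
rng = np.random.default_rng(0)
n, p = 20_000, 17                                  # 17 features, echoing the final model's feature count
X = rng.standard_normal((n, p))
logit = X[:, :3] @ np.array([1.2, -0.8, 0.5]) - 3.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]

print("AUC:", round(roc_auc_score(y_te, proba), 3))
print("Brier score:", round(brier_score_loss(y_te, proba), 4))
```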

  17. A novel hybrid classification model of genetic algorithms, modified k-Nearest Neighbor and developed backpropagation neural network.

    Directory of Open Access Journals (Sweden)

    Nader Salari

    Full Text Available Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is benefitting from the synergies obtained from combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strength of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results that included arrays of the top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that

  18. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    Science.gov (United States)

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is benefitting from the synergies obtained from combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strength of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as the Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results that included arrays of the top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on optimum arrays of the features selected by genetic algorithms. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that performance of the
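
    A compact sketch of the overall idea, combining a toy genetic algorithm over binary feature masks with a k-Nearest Neighbor fitness score; the dataset, population size, and operators are simplified assumptions and do not reproduce the authors' feature ranking, modified kNN, or developed backpropagation network.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Toy GA-based feature selection scored by kNN cross-validation accuracy (illustration only).
rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop_size, n_generations, mutation_rate = 20, 15, 0.05
population = rng.integers(0, 2, size=(pop_size, n_features))

for _ in range(n_generations):
    scores = np.array([fitness(ind) for ind in population])
    order = np.argsort(scores)[::-1]
    parents = population[order[: pop_size // 2]]            # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_features)                    # single-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < mutation_rate        # bit-flip mutation
        child = np.where(flip, 1 - child, child)
        children.append(child)
    population = np.vstack([parents, np.array(children)])

best = population[np.argmax([fitness(ind) for ind in population])]
print("selected features:", int(best.sum()), "of", n_features)
```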

  19. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    Energy Technology Data Exchange (ETDEWEB)

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real-time performance? This report presents the results of a three-year algorithmic and application development to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) an identification process for contamination events using sparse observations, (3) characterization of uncertainty through developing accurate demand forecasts and through investigating uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.
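
    As a toy illustration of the sensor-placement question (1), the sketch below runs a greedy, set-cover style selection over a scenario-by-candidate-location detection matrix, repeatedly adding the location that detects the most still-undetected contamination scenarios. This generic heuristic and the random detection matrix are assumptions for illustration only; they are not the report's algorithms or network models.

        import numpy as np

        def greedy_sensor_placement(detects, n_sensors):
            """detects[i, j] is True if a sensor at candidate location j would
            detect contamination scenario i. Greedily pick locations that
            maximize the number of detected scenarios."""
            detects = np.asarray(detects, dtype=bool)
            covered = np.zeros(detects.shape[0], dtype=bool)
            chosen = []
            for _ in range(n_sensors):
                gains = (detects & ~covered[:, None]).sum(axis=0)
                gains[chosen] = -1                    # never re-pick a location
                best = int(np.argmax(gains))
                if gains[best] <= 0:
                    break                             # no further coverage gain
                chosen.append(best)
                covered |= detects[:, best]
            return chosen, covered.mean()

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            detects = rng.random((200, 30)) < 0.1     # synthetic detection matrix
            locations, coverage = greedy_sensor_placement(detects, n_sensors=5)
            print(locations, f"coverage={coverage:.2f}")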

  20. The ethics of algorithms: Mapping the debate

    Directory of Open Access Journals (Sweden)

    Brent Daniel Mittelstadt

    2016-11-01

    Full Text Available In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

  1. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances...... is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP...... is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP- algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples....

  2. Development of Real-Time Image Processing Algorithm on the Positions of Multi-Object in an Image Plane

    International Nuclear Information System (INIS)

    Jang, W. S.; Kim, K. S.; Lee, S. M.

    2002-01-01

    This study concentrates on the development of a high-speed multi-object image processing algorithm that operates in real time. Recently, the use of vision systems has been increasing rapidly in inspection and robot position control. To apply a vision system, it is necessary to transform the physical coordinates of an object into the image information acquired by a CCD camera. Thus, to apply the vision system to inspection and robot position control in real time, we have to know the position of the object in the image plane. Particularly, in the case of a rigid body that uses multiple cues to identify its shape, the position of each cue must be calculated in the image plane at the same time. To solve these problems, an image processing algorithm for the positions of multiple cues was developed

  3. Development and validation of QDiabetes-2018 risk prediction algorithm to estimate future risk of type 2 diabetes: cohort study

    OpenAIRE

    Hippisley-Cox, Julia; Coupland, Carol

    2017-01-01

    Objectives: To derive and validate updated QDiabetes-2018 prediction algorithms to estimate the 10 year risk of type 2 diabetes in men and women, taking account of potential new risk factors, and to compare their performance with current approaches. Design: Prospective open cohort study. Setting: Routinely collected data from 1457 general practices in England contributing to the QResearch database: 1094 were used to develop the scores and a separate set of 363 were used to valid...

  4. The Performance and Development of the Inner Detector Trigger Algorithms at ATLAS for LHC Run 2

    CERN Document Server

    INSPIRE-00403960; The ATLAS collaboration

    2015-01-01

    The upgrade to the ATLAS trigger for LHC Run 2 is presented, including a description of the design and performance of the newly reimplemented tracking algorithms. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The performance of the trigger on the first data collected in LHC Run 2 is presented.

  5. Development of a Response Planner using the UCT Algorithm for Cyber Defense

    Science.gov (United States)

    2013-03-01

    President Obama as “one of the most serious economic and national security challenges we face as a nation, but one that we as a government, or as a country...an expensive, exhaustive search. Some other data sets, similar to KDD 99, used with the SFS feature selection algorithm include the Auckland-VI, NZIXII...domain can be considerably long depending on the number of objects placed in the domain. After a certain number of objects, exponential growth starts

  6. Selection and collection of multi parameter physiological data for cardiac rhythm diagnostic algorithm development

    International Nuclear Information System (INIS)

    Bostock, J.; Weller, P.; Cooklin, M.

    2010-01-01

    Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. Algorithms misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case note review analysed arrhythmic events stored in patients' ICD memory. 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00 and specificity 0.69 (p<0.001 compared with the gold standard). A subset of data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer and 1 output had sensitivity 1.00 and specificity 0.71 (p<0.001). A prospective study was performed using KE to list arrhythmias, factors and indicators for which measurable parameters were evaluated, with the results reviewed by a domain expert. Waveforms from electrodes in the heart, thoracic bio-impedance, temperature and motion data were collected from 65 patients during cardiac electrophysiological studies. Five incomplete datasets were due to technical failures. We concluded that KE successfully guided the selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.

  7. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    CERN Document Server

    Martin-haugh, Stewart; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a FastTrackFinder algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 and with better efficiency. The timings of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The online deployment and co...

  8. Development of an Algorithm for Heart Rate Measurement Using a Mobile Phone Camera

    Directory of Open Access Journals (Sweden)

    D. A. Laure

    2014-01-01

    Full Text Available Nowadays there exist many different ways to measure a person’s heart rate. One of them relies on a mobile phone's built-in camera. This method is easy to use and does not require any additional skills or special devices for heart rate measurement. It requires only a mobile cellphone with a built-in camera and a flash. The main idea of the method is to detect changes in finger skin color that occur due to blood pulsation. The measurement process is simple: the user covers the camera lens with a finger and the application on the mobile phone starts catching and analyzing frames from the camera. Heart rate can be calculated by analyzing average red component values of frames taken by the mobile cellphone camera that contain images of an area of the skin. In this paper the authors review the existing algorithms for heart rate measurement with the help of a mobile phone camera and propose their own algorithm which is more efficient than the reviewed algorithms.
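
    A minimal sketch of the core idea, frame-averaged red-channel intensity followed by a search for the dominant pulse frequency, is given below. The frame layout, frame rate, and frequency band are assumptions for illustration; the authors' algorithm is more elaborate.

        import numpy as np

        def heart_rate_from_frames(frames, fps):
            """frames: (n_frames, H, W, 3) uint8 video of a finger over the lens.
            Returns the heart rate (bpm) from the dominant pulse frequency of the
            frame-averaged red channel."""
            red = frames[..., 0].reshape(len(frames), -1).mean(axis=1)
            red = red - red.mean()
            spectrum = np.abs(np.fft.rfft(red))
            freqs = np.fft.rfftfreq(len(red), d=1.0 / fps)
            band = (freqs >= 0.7) & (freqs <= 4.0)          # 42-240 bpm search band
            return 60.0 * freqs[band][np.argmax(spectrum[band])]

        if __name__ == "__main__":
            fps, seconds, true_bpm = 30, 10, 72
            t = np.arange(fps * seconds) / fps
            red_wave = 120 + 5 * np.sin(2 * np.pi * true_bpm / 60 * t)   # synthetic pulsation
            frames = np.tile(red_wave[:, None, None, None], (1, 4, 4, 3)).astype(np.uint8)
            print(heart_rate_from_frames(frames, fps))                   # ~72 bpm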

  9. Development of the Chen Magnetic Cloud Prediction Algorithm for Real-Time Space Weather Forecasting

    Science.gov (United States)

    Bain, H. M.; Biesecker, D. A.; Cash, M. D.; Reinard, A.; Chen, J.

    2017-12-01

    We present details of a space weather forecasting tool which attempts to accurately predict the occurrence and severity of large geomagnetic storms caused by prolonged periods of southward-directed magnetic field components associated with magnetic clouds. The algorithm takes the work of Chen et al. (1996, 1997) and modifies it to run in a real-time operational environment, with input solar wind data from the Deep Space Climate Observatory (DSCOVR) spacecraft at L1. From the real-time magnetic field measurements, the algorithm identifies the initial magnetic field rotation signature, assuming it represents the initial phase of a magnetic cloud. By fitting the field rotation, an estimate of the solar wind profile upstream of the spacecraft is determined, in particular the expected event duration (time to the next zero crossing of Bz) and maximum Bz field strength. Using Bayesian statistics, the tool returns the probability of a large geomagnetic storm occurring and a measure of its geoeffectiveness, with an expected warning time of several hours to possibly more than 10 hours (Arge et al. 2002). We discuss the current algorithm performance as well as the limitations of the model.

  10. Development and evaluation of a data-adaptive alerting algorithm for univariate temporal biosurveillance data.

    Science.gov (United States)

    Elbert, Yevgeniy; Burkom, Howard S

    2009-11-20

    This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
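
    A compact sketch of the general approach, an additive Holt-Winters one-step-ahead forecast combined with a control-chart alert on the standardized forecast error, is shown below. The smoothing constants, the 7-day season, and the 3-sigma threshold are illustrative assumptions rather than the parameterization derived in the paper.

        import numpy as np

        def holt_winters_alerts(y, season=7, alpha=0.4, beta=0.0, gamma=0.15, z=3.0):
            """One-step-ahead additive Holt-Winters forecasts for a daily count series,
            flagging days whose forecast error exceeds z standard deviations of the
            past errors (a simple control-chart alert)."""
            y = np.asarray(y, dtype=float)
            level, trend = y[:season].mean(), 0.0
            seas = list(y[:season] - level)
            errors, alerts = [], []
            for t in range(season, len(y)):
                forecast = level + trend + seas[0]
                err = y[t] - forecast
                sigma = np.std(errors) if len(errors) >= season else 0.0
                alerts.append(sigma > 0 and err > z * sigma)
                errors.append(err)
                s_old = seas.pop(0)
                new_level = alpha * (y[t] - s_old) + (1 - alpha) * (level + trend)
                trend = beta * (new_level - level) + (1 - beta) * trend
                seas.append(gamma * (y[t] - new_level) + (1 - gamma) * s_old)
                level = new_level
            return np.array(alerts)

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            counts = rng.poisson(50 + 10 * np.sin(2 * np.pi * np.arange(120) / 7))
            counts[100:103] += 40                      # injected outbreak signal
            alerts = holt_winters_alerts(counts)
            print(np.where(alerts)[0] + 7)             # alerting days (indices into counts)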

  11. Predicting neurally mediated syncope based on pulse arrival time: algorithm development and preliminary results.

    Science.gov (United States)

    Meyer, Christian; Morren, Geert; Muehlsteff, Jens; Heiss, Christian; Lauer, Thomas; Schauerte, Patrick; Rassaf, Tienush; Purerfellner, Helmut; Kelm, Malte

    2011-09-01

    Neurally mediated syncope (NMS) is a common disorder that is triggered by orthostatic stress. The circulatory adjustments to orthostatic stress occur just prior to a sudden loss of consciousness. NMS prediction would protect patients from falls or accidents. Based on simultaneously recorded heart rate (HR) and pulse wave during 70° head-up tilt (HUT) table testing we investigated a syncope warning system. In 14 patients with a history of suspected NMS we tested 2 algorithms based on HR and/or pulse arrival time (PAT). When the cumulative risk exceeded the threshold, which was calculated during the first 2 minutes following the posture change to upright position, a syncope prediction alarm was triggered. All syncopes (n = 7) were detected more than 16 seconds before the onset of dizziness or unconsciousness by using a prediction alarm based on HR and PAT (syncope prediction algorithm 2). No false alarm was generated in patients with negative HUT (n = 7). Syncope prediction was improved by detecting the slope of HR changes as compared with monitoring PAT changes alone (syncope prediction algorithm 1). The duration between the prediction alarm and the occurrence of syncope was 99 ± 108 seconds. Predicting NMS is feasible by monitoring HR and the onset of the pulse wave at the periphery. This approach might improve NMS management.  © 2011 Wiley Periodicals, Inc.
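
    A heavily hedged sketch of such a warning scheme is given below: a baseline for heart rate and pulse arrival time is taken from roughly the first two minutes after the posture change, a per-beat risk score is accumulated from the heart-rate drop and PAT prolongation, and an alarm is raised once the cumulative risk crosses a threshold. All weights, thresholds, and signal shapes here are invented for illustration and are not the published algorithm.

        import numpy as np

        def syncope_alarm(hr_bpm, pat_ms, t_s, baseline_s=120, w_hr=1.0, w_pat=1.0, k=25.0):
            """hr_bpm, pat_ms, t_s: per-beat heart rate, pulse arrival time and time
            stamps (s) after the posture change. Returns the alarm time or None."""
            hr_bpm, pat_ms, t_s = map(np.asarray, (hr_bpm, pat_ms, t_s))
            base = t_s <= baseline_s
            hr0, pat0 = hr_bpm[base].mean(), pat_ms[base].mean()
            hr_sd, pat_sd = hr_bpm[base].std() + 1e-9, pat_ms[base].std() + 1e-9
            risk = 0.0
            for i in np.where(~base)[0]:
                hr_drop = max(0.0, (hr0 - hr_bpm[i]) / hr_sd)      # falling heart rate
                pat_rise = max(0.0, (pat_ms[i] - pat0) / pat_sd)   # lengthening PAT
                risk += w_hr * hr_drop + w_pat * pat_rise
                if risk > k:
                    return float(t_s[i])
            return None

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            t = np.arange(0, 600, 1.0)                             # one beat/s during tilt
            hr = np.where(t < 500, 80.0, 80.0 - 0.5 * (t - 500)) + rng.normal(0, 1, t.size)
            pat = np.where(t < 500, 250.0, 250.0 + 1.5 * (t - 500)) + rng.normal(0, 2, t.size)
            print(syncope_alarm(hr, pat, t), "s")                  # alarm soon after 500 s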

  12. Development of a hybrid energy storage sizing algorithm associated with the evaluation of power management in different driving cycles

    International Nuclear Information System (INIS)

    Masoud, Masih Tehrani; Mohammad Reza, Ha'iri Yazdi; Esfahanian, Vahid; Sagha, Hossein

    2012-01-01

    In this paper, a hybrid energy storage sizing algorithm for electric vehicles is developed to achieve a semi-optimal, cost-effective design. Using the developed algorithm, a driving cycle is divided into its micro-trips and the power and energy demands in each micro-trip are determined. The battery size is estimated so that the battery fulfills the power demands. Moreover, the ultracapacitor (UC) energy (or the number of UC modules) is assessed so that the UC delivers the maximum energy demands of the different micro-trips of a driving cycle. Finally, a design factor, which reflects the capability of the hybrid energy storage control strategy, is utilized to evaluate the newly designed control strategies. Using the developed algorithm, energy saving loss, driver satisfaction criteria, and battery life criteria are calculated using a feed-forward dynamic modeling software program and are utilized for comparison among different energy storage candidates. This procedure is applied to the hybrid energy storage sizing of a series hybrid electric city bus under the Manhattan and Tehran driving cycles. Results show that a more aggressive driving cycle (Manhattan) requires a more expensive energy storage system and a more sophisticated energy management strategy
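
    The micro-trip decomposition at the heart of the sizing procedure can be sketched as follows: the speed trace is split at standstill, a power demand is computed, the battery is sized to cover the sustained power demand, and the ultracapacitor bank is sized to the largest per-micro-trip energy surplus. The vehicle parameters and the longitudinal power model below are rough placeholders, not the paper's bus model.

        import numpy as np

        def split_micro_trips(speed_mps, v_stop=0.1):
            """Split a speed trace into micro-trips separated by standstill."""
            moving = speed_mps > v_stop
            trips, start = [], None
            for i, m in enumerate(moving):
                if m and start is None:
                    start = i
                elif not m and start is not None:
                    trips.append(slice(start, i))
                    start = None
            if start is not None:
                trips.append(slice(start, len(speed_mps)))
            return trips

        def size_storage(speed_mps, dt=1.0, mass=15000.0, cd_a=6.0, crr=0.008):
            """Crude longitudinal power model (inertia + rolling + aero), then:
            battery sized for sustained power, UC sized for the worst per-micro-trip
            energy surplus above the battery power."""
            v = np.asarray(speed_mps, dtype=float)
            a = np.gradient(v, dt)
            power_w = v * (mass * a + crr * mass * 9.81 + 0.5 * 1.2 * cd_a * v ** 2)
            batt_power_w = np.percentile(power_w, 95)
            uc_energy_j = 0.0
            for s in split_micro_trips(v):
                surplus = np.clip(power_w[s] - batt_power_w, 0.0, None)
                uc_energy_j = max(uc_energy_j, surplus.sum() * dt)
            return batt_power_w, uc_energy_j

        if __name__ == "__main__":
            t = np.arange(0, 300, 1.0)
            speed = np.clip(10 * np.sin(2 * np.pi * t / 60), 0, None)   # synthetic stop-and-go
            print(size_storage(speed))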

  13. The design and development of signal-processing algorithms for an airborne x-band Doppler weather radar

    Science.gov (United States)

    Nicholson, Shaun R.

    1994-01-01

    Improved measurements of precipitation will aid our understanding of the role of latent heating on global circulations. Spaceborne meteorological sensors such as the planned precipitation radar and microwave radiometers on the Tropical Rainfall Measurement Mission (TRMM) provide for the first time a comprehensive means of making these global measurements. Pre-TRMM activities include development of precipitation algorithms using existing satellite data, computer simulations, and measurements from limited aircraft campaigns. Since the TRMM radar will be the first spaceborne precipitation radar, there is limited experience with such measurements, and only recently have airborne radars become available that can attempt to address the issue of the limitations of a spaceborne radar. There are many questions regarding how much attenuation occurs in various cloud types and the effect of cloud vertical motions on the estimation of precipitation rates. The EDOP program being developed by NASA GSFC will provide data useful for testing both rain-retrieval algorithms and the importance of vertical motions on the rain measurements. The purpose of this report is to describe the design and development of real-time embedded parallel algorithms used by EDOP to extract reflectivity and Doppler products (velocity, spectrum width, and signal-to-noise ratio) as the first step in the aforementioned goals.

  14. The Eukaryotic Microbiome: Origins and Implications for Fetal and Neonatal life note bene: previous titles: The Microbiome in the Development of Terrestrial Life,and,The Origins and Development of the Neonatal Microbiome

    Directory of Open Access Journals (Sweden)

    William B. Miller

    2016-09-01

    Full Text Available All eukaryotic organisms are holobionts representing complex collaborations between the entire microbiome of each eukaryote and its innate cells. These linked constituencies form complex localized and interlocking ecologies in which the specific microbial constituents and their relative abundance differ substantially according to age and environmental exposures. Rapid advances in microbiology and genetic research techniques have uncovered a significant previous underestimate of the extent of that microbial contribution and its metabolic and developmental impact on holobionts. Therefore, a re-calibration of the neonatal period is suggested as a transitional phase in development that includes the acquisition of consequential collaborative microbial life from extensive environmental influences. These co-dependent, symbiotic relationships formed in the fetal and neonatal stages extend into adulthood and even across generations.

  15. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    Science.gov (United States)

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to manually calibrate the parameters individually and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate the distributed models. The optimization problem used to minimize the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to a Gomakwoncheon watershed located in an area that presents a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality in this area during the frequent heavy rainfall that occurs during the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
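
    The optimization kernel described above, minimizing the sum of squared normalized residuals with a genetic algorithm, can be sketched generically. In the sketch below a toy exponential model stands in for an HSPF run, and the GA operators are a plain real-coded implementation; none of this is the WMCIG code itself.

        import numpy as np

        def objective(params, x, observed, model):
            """Sum of squared normalized residuals between observations and model output."""
            residuals = (observed - model(params, x)) / np.maximum(np.abs(observed), 1e-9)
            return np.sum(residuals ** 2)

        def real_coded_ga(obj, bounds, pop=40, gens=200, mut=0.1, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            P = rng.uniform(lo, hi, size=(pop, len(bounds)))
            for _ in range(gens):
                fit = np.array([obj(p) for p in P])
                P = P[np.argsort(fit)]                         # elitist: keep best half
                children = []
                while len(children) < pop // 2:
                    a, b = P[rng.integers(0, pop // 2, 2)]     # parents from top half
                    w = rng.random(len(bounds))
                    child = w * a + (1 - w) * b                # blend crossover
                    child += rng.normal(0, mut * (hi - lo))    # Gaussian mutation
                    children.append(np.clip(child, lo, hi))
                P[pop // 2:] = children
            fit = np.array([obj(p) for p in P])
            return P[np.argmin(fit)], float(fit.min())

        if __name__ == "__main__":
            model = lambda p, x: p[0] * np.exp(-p[1] * x)      # stand-in for a watershed model run
            x = np.linspace(0, 10, 50)
            observed = model([3.0, 0.4], x) * (1 + 0.05 * np.random.default_rng(1).normal(size=50))
            best, score = real_coded_ga(lambda p: objective(p, x, observed, model),
                                        bounds=[(0.1, 10.0), (0.01, 2.0)])
            print(best, score)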

  16. Developing a NIR multispectral imaging for prediction and visualization of peanut protein content using variable selection algorithms

    Science.gov (United States)

    Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei

    2018-01-01

    The feasibility of developing a multispectral imaging method using important wavelengths from hyperspectral images selected by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods for modeling and predicting protein content in peanut kernel was investigated for the first time. Partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results with the coefficient of determination of prediction (R2P) of 0.901, and root mean square error of prediction (RMSEP) of 0.108 and residual predictive deviation (RPD) of 2.32. Based on the obtained best model and image processing algorithms, the distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is potential and feasible for determination of the protein content in peanut kernels.
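
    The calibration step can be sketched with scikit-learn: a PLS regression is fitted on reflectance at the eight key wavelengths reported above, and prediction quality is summarized with R2p, RMSEP, and RPD as in the paper. The synthetic spectra, split ratio, and number of latent variables below are placeholder assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        # The eight key wavelengths (nm) reported in the abstract for the RC-PLSR model.
        KEY_WAVELENGTHS = [1153, 1567, 1972, 2143, 2288, 2339, 2389, 2446]

        def fit_plsr(X_key, protein, n_components=3):
            """X_key: (n_samples, 8) reflectance at the selected wavelengths."""
            X_tr, X_te, y_tr, y_te = train_test_split(X_key, protein, test_size=0.3, random_state=0)
            pls = PLSRegression(n_components=n_components).fit(X_tr, y_tr)
            pred = pls.predict(X_te).ravel()
            rmsep = float(np.sqrt(np.mean((pred - y_te) ** 2)))
            r2p = 1.0 - np.sum((pred - y_te) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
            rpd = float(np.std(y_te) / rmsep)
            return pls, r2p, rmsep, rpd

        if __name__ == "__main__":
            rng = np.random.default_rng(8)
            protein = rng.uniform(23.5, 28.4, size=120)        # % protein, study's reported range
            X = np.outer(protein, rng.uniform(0.01, 0.03, len(KEY_WAVELENGTHS)))
            X += rng.normal(0, 0.01, X.shape)                  # synthetic band reflectances
            _, r2p, rmsep, rpd = fit_plsr(X, protein)
            print(f"R2p={r2p:.3f}  RMSEP={rmsep:.3f}  RPD={rpd:.2f}")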

  17. Research on magnetorheological damper suspension with permanent magnet and magnetic valve based on developed FOA-optimal control algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Ping; Gao, Hong [Anhui Polytechnic University, Wuhu (China); Niu, Limin [Anhui University of Technology, Maanshan (China)

    2017-07-15

    Due to the fail safe problem, it was difficult for the existing Magnetorheological damper (MD) to be widely applied in automotive suspensions. Therefore, permanent magnets and magnetic valves were introduced to existing MDs so that fail safe problem could be solved by the magnets and damping force could be adjusted easily by the magnetic valve. Thus, a new Magnetorheological damper with permanent magnet and magnetic valve (MDPMMV) was developed and MDPMMV suspension was studied. First of all, mechanical structure of existing magnetorheological damper applied in automobile suspensions was redesigned, comprising a permanent magnet and a magnetic valve. In addition, prediction model of damping force was built based on electromagnetics theory and Bingham model. Experimental research was onducted on the newly designed damper and goodness of fit between experiment results and simulated ones by models was high. On this basis, a quarter suspension model was built. Then, fruit Fly optimization algorithm (FOA)-optimal control algorithm suitable for automobile suspension was designed based on developing normal FOA. Finally, simulation experiments and bench tests with input surface of pulse road and B road were carried out and the results indicated that working erformance of MDPMMV suspension based on FOA-optimal control algorithm was good.

  18. Development of double dosimetry algorithm for assessment of effective dose to staff in interventional radiology

    International Nuclear Information System (INIS)

    Kim, Ji Young

    2011-02-01

    Medical staff involved in interventional radiology (IR) procedures are significantly exposed to scattered radiation because they stand in close proximity to the patient. Since modern IR techniques are often very complicated and require extended operation times, doses to IR workers tend to increase considerably. In general, the personal dose equivalent at 10 mm depth, Hp(10), read from one dosimeter worn on the trunk of a radiation worker is assumed to be a good estimate of the effective dose and is compared with the dose limits for regulatory compliance. This assumption is based on exposure conditions in which the radiation field is broad and rather homogeneous. However, IR workers usually wear protective clothing such as lead aprons and thyroid shields, which leaves part of the body exposed to much higher doses. To solve this problem, i.e. to adequately estimate the effective doses of IR workers, the use of double dosimeters, one under the apron and one over the apron where the unshielded part of the body is exposed, has been recommended. Several algorithms for the interpretation of the two dosimeter readings have been proposed. However, the dosimeter weighting factors applied in these algorithms differ significantly, which raises questions about their reliability. Moreover, there are some changes in the process of calculating the effective dose in the 2007 recommendations of the International Commission on Radiological Protection (ICRP): changes in the radiation weighting factors, tissue weighting factors and the computational reference phantoms. Therefore, this study attempts to establish a new algorithm for interpreting two dosimeter readings to provide a proper estimate of the effective dose for IR workers, incorporating those changes in the definition of effective dose. The effective doses were estimated using Monte Carlo simulations for various practical conditions based on the voxel reference phantom and the new tissue weighting factors. A quasi-effective dose, which is
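
    For orientation, the two-dosimeter estimate is typically a weighted sum of the under-apron and over-apron readings, as sketched below. The weighting factors shown are placeholders chosen only to illustrate the form of such algorithms; they are not the coefficients derived in this thesis.

        def effective_dose_two_dosimeters(h_under_msv, h_over_msv, w_under=1.0, w_over=0.1):
            """Generic double-dosimetry estimate E ~= w_under*Hu + w_over*Ho, where Hu is
            the Hp(10) reading under the lead apron (trunk) and Ho the reading worn over
            the apron at the collar. The weights are placeholders; published algorithms
            (and this thesis) derive their own coefficients."""
            return w_under * h_under_msv + w_over * h_over_msv

        if __name__ == "__main__":
            # Example: 0.3 mSv under the apron, 4.0 mSv at the unshielded collar.
            print(f"{effective_dose_two_dosimeters(0.3, 4.0):.2f} mSv")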

  19. Development of double dosimetry algorithm for assessment of effective dose to staff in interventional radiology

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Young

    2011-02-15

    Medical staff involved in interventional radiology (IR) procedures are significantly exposed to scattered radiation because they stand in close proximity to the patient. Since modern IR techniques are often very complicated and require extended operation times, doses to IR workers tend to increase considerably. In general, the personal dose equivalent at 10 mm depth, Hp(10), read from one dosimeter worn on the trunk of a radiation worker is assumed to be a good estimate of the effective dose and is compared with the dose limits for regulatory compliance. This assumption is based on exposure conditions in which the radiation field is broad and rather homogeneous. However, IR workers usually wear protective clothing such as lead aprons and thyroid shields, which leaves part of the body exposed to much higher doses. To solve this problem, i.e. to adequately estimate the effective doses of IR workers, the use of double dosimeters, one under the apron and one over the apron where the unshielded part of the body is exposed, has been recommended. Several algorithms for the interpretation of the two dosimeter readings have been proposed. However, the dosimeter weighting factors applied in these algorithms differ significantly, which raises questions about their reliability. Moreover, there are some changes in the process of calculating the effective dose in the 2007 recommendations of the International Commission on Radiological Protection (ICRP): changes in the radiation weighting factors, tissue weighting factors and the computational reference phantoms. Therefore, this study attempts to establish a new algorithm for interpreting two dosimeter readings to provide a proper estimate of the effective dose for IR workers, incorporating those changes in the definition of effective dose. The effective doses were estimated using Monte Carlo simulations for various practical conditions based on the voxel reference phantom and the new tissue weighting factors. A quasi-effective dose, which is

  20. Development of a Risk Algorithm to Better Target STI Testing and Treatment Among Australian Aboriginal and Torres Strait Islander People.

    Science.gov (United States)

    Wand, Handan; Bryant, Joanne; Pitts, Marian; Delaney-Thiele, Dea; Kaldor, John M; Worth, Heather; Ward, James

    2017-10-01

    Identifying and targeting those at greatest risk will likely play a significant role in developing the most efficient and cost-effective sexually transmissible infections (STI) prevention programs. We aimed to develop a risk prediction algorithm to identify those who are at increased risk of STI. A cohort (N = 2320) of young sexually active Aboriginal and Torres Strait Islander people (hereafter referred to as Aboriginal people) were included in this study. The primary outcomes were self-reported high-risk sexual behaviors and past STI diagnosis. In developing a risk algorithm, our study population was randomly assigned to either a development (67%) or an internal validation data set (33%). Logistic regression models were used to create a risk prediction algorithm from the development data set for males and females separately. In the risk prediction models, older age, methamphetamine, ecstasy, and cannabis use, and frequent alcohol intake were all consistently associated with high-risk sexual behaviors as well as with a past STI diagnosis; identifying as gay/bisexual was one of the strongest factors among males. Those who had never tested for STIs, 52% (males) and 66% (females), had a risk score >15, and prevalence of undiagnosed STI was estimated between 30 and 40%. Since universal STI screening is not cost-effective, nor practical in many settings, targeted screening strategies remain a crucial and effective approach to managing STIs among young Aboriginal people. Risk prediction tools such as the one developed in this study may help in prioritizing screening for STIs among those most at risk.

  1. Development of computational algorithms for quantification of pulmonary structures; Desenvolvimento de algoritmos computacionais para quantificacao de estruturas pulmonares

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A., E-mail: marceladeoliveira@ig.com.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Hospital das Clinicas. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2012-12-15

    High-resolution computed tomography (HRCT) has become the diagnostic imaging exam most commonly used for the evaluation of the sequelae of paracoccidioidomycosis. Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists of selecting a region of interest (ROI) and, through the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching a concordance of 80% for emphysema and 58% for fibrosis. (author)
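
    A minimal numpy sketch of the quantification idea, the ratio of low-density 'injured' pixels to all lung pixels inside a selected ROI, is shown below. The HU thresholds and the synthetic slice are simplified assumptions for illustration; the published algorithm works in MATLAB with masks, density filters, and morphological operators.

        import numpy as np

        def injured_fraction(hu_slice, roi_mask, lung_range=(-1000, -400), injury_hu=-950):
            """Fraction of lung pixels in the ROI whose density lies below an
            emphysema-like threshold. hu_slice: 2D array of HU values; roi_mask:
            boolean ROI selected by the operator."""
            lung = roi_mask & (hu_slice >= lung_range[0]) & (hu_slice <= lung_range[1])
            injured = lung & (hu_slice < injury_hu)
            return injured.sum() / max(lung.sum(), 1)

        if __name__ == "__main__":
            rng = np.random.default_rng(5)
            hu = rng.normal(-800, 120, size=(256, 256))        # synthetic lung-like HU values
            roi = np.zeros_like(hu, dtype=bool)
            roi[64:192, 64:192] = True
            print(f"injured fraction: {injured_fraction(hu, roi):.2%}")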

  2. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    International Nuclear Information System (INIS)

    Woodruff, S.B.

    1992-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider algorithms, such as a neural net representation, that do not exhibit load-balancing problems

  3. Development of an apnea detection algorithm based on temporal analysis of thoracic respiratory effort signal

    Science.gov (United States)

    Dell'Aquila, C. R.; Cañadas, G. E.; Correa, L. S.; Laciar, E.

    2016-04-01

    This work describes the design of an algorithm for detecting apnea episodes based on analysis of the thoracic respiratory effort signal. Inspiration and expiration times, and the amplitude range of the respiratory cycle, were evaluated. For the range analysis, the standard deviation was computed over temporal windows of the respiratory signal. Its performance was validated on 8 records of the Apnea-ECG database, which has annotations of apnea episodes. The results are: sensitivity (Se) 73%, specificity (Sp) 83%. These values could be improved by eliminating artifacts from the signal records.
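
    A sketch of the windowed standard-deviation rule is given below: the thoracic effort signal is split into short windows, and windows whose amplitude variability falls below a fraction of the typical window variability are flagged as apnea-like. The window length, threshold fraction, and sampling rate are assumptions, not the paper's tuned values.

        import numpy as np

        def detect_apnea(effort, fs=10.0, win_s=10.0, frac=0.25):
            """Flag windows of a thoracic respiratory effort signal whose standard
            deviation drops below `frac` times the median window std (apnea-like)."""
            n = int(win_s * fs)
            n_win = len(effort) // n
            windows = np.asarray(effort[:n_win * n], dtype=float).reshape(n_win, n)
            stds = windows.std(axis=1)
            return stds < frac * np.median(stds)               # one flag per window

        if __name__ == "__main__":
            fs = 10.0
            t = np.arange(0, 300, 1 / fs)                      # 5 min of effort signal
            breathing = np.sin(2 * np.pi * 0.25 * t)           # ~15 breaths per minute
            breathing[(t > 120) & (t < 150)] *= 0.05           # simulated 30 s apnea
            flags = detect_apnea(breathing, fs)
            print(np.where(flags)[0] * 10, "s")                # start times of flagged windows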

  4. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    Science.gov (United States)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-06-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  5. JOURNAL CLUB: Plagiarism in Manuscripts Submitted to the AJR: Development of an Optimal Screening Algorithm and Management Pathways.

    Science.gov (United States)

    Taylor, Donna B

    2017-04-01

    The objective of this study was to investigate the incidence of plagiarism in a sample of manuscripts submitted to the AJR using CrossCheck, develop an algorithm to identify significant plagiarism, and formulate management pathways. A sample of 110 of 1610 (6.8%) manuscripts submitted to AJR in 2014 in the categories of Original Research or Review were analyzed using CrossCheck and manual assessment. The overall similarity index (OSI), highest similarity score from a single source, whether duplication was from single or multiple origins, journal section, and presence or absence of referencing the source were recorded. The criteria outlined by the International Committee of Medical Journal Editors were the reference standard for identifying manuscripts containing plagiarism. Statistical analysis was used to develop a screening algorithm to maximize sensitivity and specificity for the detection of plagiarism. Criteria for defining the severity of plagiarism and management pathways based on the severity of the plagiarism were determined. Twelve manuscripts (10.9%) contained plagiarism. Nine had an OSI excluding quotations and references of less than 20%. In seven, the highest similarity score from a single source was less than 10%. The highest similarity score from a single source was the work of the same author or authors in nine. Common sections for duplication were the Materials and Methods, Discussion, and abstract. Referencing the original source was lacking in 11. Plagiarism was undetected at submission in five of these 12 articles; two had been accepted for publication. The most effective screening algorithm was to average the OSI including quotations and references and the highest similarity score from a single source and to submit manuscripts with an average value of more than 12% for further review. The current methods for detecting plagiarism are suboptimal. A new screening algorithm is proposed.
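
    The proposed screening rule reduces to a small calculation, sketched below: average the overall similarity index (quotations and references included) with the highest single-source similarity and send the manuscript for further review when that average exceeds 12%. The function name and example numbers are illustrative.

        def needs_plagiarism_review(osi_incl_quotes_pct, highest_single_source_pct, cutoff_pct=12.0):
            """Screening rule from the study: flag a manuscript when the mean of the
            overall similarity index (quotations/references included) and the highest
            single-source similarity exceeds the cutoff."""
            score = (osi_incl_quotes_pct + highest_single_source_pct) / 2.0
            return score > cutoff_pct, score

        if __name__ == "__main__":
            for osi, single in [(18.0, 9.0), (25.0, 14.0), (8.0, 3.0)]:
                flag, score = needs_plagiarism_review(osi, single)
                print(f"OSI={osi}%  max single source={single}%  score={score:.1f}%  review={flag}")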

  6. Transmission dose estimation algorithm for tissue deficit

    International Nuclear Information System (INIS)

    Yun, Hyong Geun; Shin, Kyo Chul; Chie, Eui Kyu; Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2002-01-01

    Measurement of the transmission dose is useful for in vivo dosimetry. In this study, a previous algorithm for estimating the transmission dose was modified for use in cases with tissue deficit. The beam data were measured with a flat solid phantom under various conditions of tissue deficit. A new algorithm for correcting the transmission dose for tissue deficit was developed from physical reasoning. The algorithm was tested in experimental settings with irregular contours mimicking breast cancer patients, using multiple sheets of solid phantom. The correction algorithm for tissue deficit could accurately reflect the effect of tissue deficit, with errors within ±1.0% in most situations and within ±3.0% in experimental settings with irregular contours mimicking the breast cancer treatment set-up. The developed algorithm could accurately reflect the effects of tissue deficit and an irregularly shaped body contour on transmission dosimetry

  7. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.

  8. Development of a Leader-End Reclosing Algorithm Considering Turbine-Generator Shaft Torque

    Directory of Open Access Journals (Sweden)

    Gyu-Jung Cho

    2017-05-01

    Full Text Available High-speed auto-reclosing is used in power system protection schemes to ensure the stability and reliability of the transmission system; leader-follower auto-reclosing is one scheme type that is widely used. However, when a leader-follower reclosing scheme responds to a permanent fault that affects a transmission line in the proximity of a generation plant, the reclosing directly impacts the turbine-generator shaft; furthermore, the nature of this impact is dependent upon the selection of the leader reclosing terminal. We therefore analyzed the transient torque of the turbine-generator shaft according to the selection of the leader-follower reclosing end between both ends of the transmission line. We used this analysis to propose an adaptive leader-end reclosing algorithm that removes the stress potential of the transient torque to prevent it from damaging the turbine-generator shaft. We conducted a simulation in actual Korean power systems based on the ElectroMagnetic Transients Program (EMTP) and the Dynamic Link Library (DLL) function in EMTP-RV (Restructured Version) to realize the proposed algorithm.

  9. Development of Future Rule Curves for Multipurpose Reservoir Operation Using Conditional Genetic and Tabu Search Algorithms

    Directory of Open Access Journals (Sweden)

    Anongrit Kangrang

    2018-01-01

    Full Text Available Optimal rule curves are necessary guidelines in reservoir operation that have been used to assess the performance of any reservoir in satisfying water supply, irrigation, industrial, hydropower, and environmental conservation requirements. This study applied the conditional genetic algorithm (CGA) and the conditional tabu search algorithm (CTSA) technique in connection with a reservoir simulation model in order to search for optimal reservoir rule curves. The Ubolrat Reservoir located in the northeast region of Thailand was an illustrative application, including historic monthly inflow, future inflow generated by the SWAT hydrological model using 50-year future climate data from the PRECIS regional climate model under the B2 emission scenario of the IPCC SRES, water demand, hydrologic data, and physical reservoir data. The future and synthetic inflow data of the reservoir were used to simulate the reservoir system for evaluating the water situation. The situations of water shortage and excess water were expressed in terms of frequency, magnitude, and duration. The results show that the optimal rule curves from the CGA and CTSA connected with the simulation model can mitigate drought and flood situations better than the existing rule curves. The optimal future rule curves were more suitable for future situations than the other rule curves.

  10. Development and Implementation of an Advanced Power Management Algorithm for Electronic Load Sensing on a Telehandler

    DEFF Research Database (Denmark)

    Hansen, Rico Hjerm; Andersen, Torben Ole; Pedersen, Henrik C.

    2010-01-01

    The relevance of electronic control of mobile hydraulic systems is increasing as hydraulic components are implemented with more electrical sensors and actuators. This paper presents how the traditional Hydro-mechanical Load Sensing (HLS) control of a specific mobile hydraulic application, a telehandler, can be replaced with electronic control, i.e. Electronic Load Sensing (ELS). The motivation is the potential for improved dynamic performance and power utilization, along with reducing the mechanical complexity by moving traditionally hydro-mechanically implemented features such as pressure control, flow-sharing, prioritization of steering, anti-stall and high-pressure protection into electronics. In order to implement these features, the paper presents and tests a general power management algorithm for a telehandler. The algorithm is capable of implementing the above features, while also handling...

  11. Robust integration schemes for generalized viscoplasticity with internal-state variables. Part 2: Algorithmic developments and implementation

    Science.gov (United States)

    Li, Wei; Saleeb, Atef F.

    1995-01-01

    This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state-of-the-art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of
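
    The core numerical pattern described here, an unconditionally stable backward-Euler step solved by Newton-Raphson iteration with a line search on the residual norm, can be sketched generically for a stiff evolution equation dy/dt = f(y). The sketch below is not the GVIPS/NAV implementation; the test problem and tolerances are assumptions.

        import numpy as np

        def backward_euler_step(f, jac, y_n, dt, tol=1e-10, max_iter=25):
            """Solve R(y) = y - y_n - dt*f(y) = 0 by Newton-Raphson, with a simple
            backtracking line search on the residual norm."""
            y = y_n.copy()
            for _ in range(max_iter):
                R = y - y_n - dt * f(y)
                if np.linalg.norm(R) < tol:
                    break
                J = np.eye(len(y)) - dt * jac(y)        # consistent tangent of R
                dy = np.linalg.solve(J, -R)
                step = 1.0
                while step > 1e-4:                      # backtrack until the residual decreases
                    if np.linalg.norm(y + step * dy - y_n - dt * f(y + step * dy)) < np.linalg.norm(R):
                        break
                    step *= 0.5
                y = y + step * dy
            return y

        if __name__ == "__main__":
            A = np.diag([1.0, 1000.0])                  # stiff linear relaxation dy/dt = A(y_eq - y)
            y_eq = np.array([1.0, -1.0])
            f = lambda y: A @ (y_eq - y)
            jac = lambda y: -A
            y, dt = np.array([0.0, 0.0]), 0.1
            for _ in range(50):
                y = backward_euler_step(f, jac, y, dt)
            print(y)                                    # approaches [1, -1] despite dt >> 1/1000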

  12. Developing an eye-tracking algorithm as a potential tool for early diagnosis of autism spectrum disorder in children.

    Directory of Open Access Journals (Sweden)

    Natalia I Vargas-Cuentas

    Full Text Available Autism spectrum disorder (ASD) currently affects nearly 1 in 160 children worldwide. In over two-thirds of evaluations, no validated diagnostics are used and gold standard diagnostic tools are used in less than 5% of evaluations. Currently, the diagnosis of ASD requires lengthy and expensive tests, in addition to clinical confirmation. Therefore, fast, cheap, portable, and easy-to-administer screening instruments for ASD are required. Several studies have shown that children with ASD have a lower preference for social scenes compared with children without ASD. Based on this, eye-tracking and measurement of gaze preference for social scenes has been used as a screening tool for ASD. Currently available eye-tracking software requires intensive calibration, training, or holding of the head to prevent interference with gaze recognition, limiting its use in children with ASD. In this study, we designed a simple eye-tracking algorithm that does not require calibration or head holding, as a platform for future validation of a cost-effective ASD potential screening instrument. This system operates on a portable and inexpensive tablet to measure gaze preference of children for social compared to abstract scenes. A child watches a one-minute stimulus video composed of a social scene projected on the left side and an abstract scene projected on the right side of the tablet's screen. We designed five stimulus videos by changing the social/abstract scenes. Every child observed all five videos in random order. We developed an eye-tracking algorithm that calculates the child's gaze preference for the social and abstract scenes, estimated as the percentage of the accumulated time that the child observes the left or right side of the screen, respectively. Twenty-three children without a prior history of ASD and 8 children with a clinical diagnosis of ASD were evaluated. The recorded video of the child's eye movement was analyzed both manually by an observer
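
    The gaze-preference measure itself is a simple ratio of accumulated looking times, sketched below under the assumption that a per-frame horizontal gaze estimate has already been extracted from the video; the extraction step and the mid-screen split are illustrative simplifications of the authors' algorithm.

        import numpy as np

        def gaze_preference(gaze_x, screen_width):
            """gaze_x: per-frame horizontal gaze estimate in pixels (NaN = invalid frame).
            Frames left of the midline count toward the social scene, frames right of it
            toward the abstract scene. Returns percentages of the valid viewing time."""
            gaze_x = np.asarray(gaze_x, dtype=float)
            valid = np.isfinite(gaze_x)
            left = valid & (gaze_x < screen_width / 2.0)
            right = valid & (gaze_x >= screen_width / 2.0)
            total = max(valid.sum(), 1)
            return 100.0 * left.sum() / total, 100.0 * right.sum() / total

        if __name__ == "__main__":
            rng = np.random.default_rng(6)
            frames = rng.normal(400, 250, size=1800)           # 1 min at 30 fps, 1280 px screen
            social_pct, abstract_pct = gaze_preference(frames, screen_width=1280)
            print(f"social: {social_pct:.1f}%  abstract: {abstract_pct:.1f}%")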

  13. Developing an eye-tracking algorithm as a potential tool for early diagnosis of autism spectrum disorder in children

    Science.gov (United States)

    Vargas-Cuentas, Natalia I.; Roman-Gonzalez, Avid; Gilman, Robert H.; Barrientos, Franklin; Ting, James; Hidalgo, Daniela; Jensen, Kelly

    2017-01-01

    Background Autism spectrum disorder (ASD) currently affects nearly 1 in 160 children worldwide. In over two-thirds of evaluations, no validated diagnostics are used and gold standard diagnostic tools are used in less than 5% of evaluations. Currently, the diagnosis of ASD requires lengthy and expensive tests, in addition to clinical confirmation. Therefore, fast, cheap, portable, and easy-to-administer screening instruments for ASD are required. Several studies have shown that children with ASD have a lower preference for social scenes compared with children without ASD. Based on this, eye-tracking and measurement of gaze preference for social scenes has been used as a screening tool for ASD. Currently available eye-tracking software requires intensive calibration, training, or holding of the head to prevent interference with gaze recognition limiting its use in children with ASD. Methods In this study, we designed a simple eye-tracking algorithm that does not require calibration or head holding, as a platform for future validation of a cost-effective ASD potential screening instrument. This system operates on a portable and inexpensive tablet to measure gaze preference of children for social compared to abstract scenes. A child watches a one-minute stimulus video composed of a social scene projected on the left side and an abstract scene projected on the right side of the tablet’s screen. We designed five stimulus videos by changing the social/abstract scenes. Every child observed all the five videos in random order. We developed an eye-tracking algorithm that calculates the child’s gaze preference for the social and abstract scenes, estimated as the percentage of the accumulated time that the child observes the left or right side of the screen, respectively. Twenty-three children without a prior history of ASD and 8 children with a clinical diagnosis of ASD were evaluated. The recorded video of the child´s eye movement was analyzed both manually by an

  14. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    Science.gov (United States)

    Otake, Yoshito; Wang, Adam S.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L.; Wolinsky, Jean-Paul; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2015-03-01

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of
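
    The gradient correlation similarity that drives the registration can be sketched compactly: it is the mean of the normalized cross-correlations of the row- and column-gradients of the radiograph and the digitally reconstructed radiograph (DRR). The sketch below implements only this metric on two 2D arrays; DRR generation, the 9-DOF parameterization, and the derivative-free optimizer are outside its scope.

        import numpy as np

        def _ncc(a, b):
            a, b = a - a.mean(), b - b.mean()
            return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)

        def gradient_correlation(fixed, moving):
            """Mean normalized cross-correlation of the row- and column-gradients of two
            images. Higher is better; insensitive to global intensity offset and scale."""
            gy_f, gx_f = np.gradient(np.asarray(fixed, dtype=float))
            gy_m, gx_m = np.gradient(np.asarray(moving, dtype=float))
            return 0.5 * (_ncc(gx_f, gx_m) + _ncc(gy_f, gy_m))

        if __name__ == "__main__":
            y, x = np.mgrid[0:128, 0:128]
            img = np.exp(-((x - 64.0) ** 2 + (y - 64.0) ** 2) / (2 * 15.0 ** 2))  # synthetic blob
            noisy = img + np.random.default_rng(7).normal(scale=0.01, size=img.shape)
            shifted = np.roll(img, 40, axis=1)
            print(f"aligned: {gradient_correlation(img, noisy):.2f}  "
                  f"misaligned: {gradient_correlation(img, shifted):.2f}")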

  15. Design and implementation of universal mathematical library supporting algorithm development for FPGA based systems in high energy physics experiments

    Energy Technology Data Exchange (ETDEWEB)

    Jalmuzna, W.

    2006-02-15

    The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity, ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences. It could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a project for a fast, low-latency digital controller dedicated to the LLRF system of the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems at Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of the fields in the cavities of one cryomodule in the experiment. The device can also be used as a simulator of the cavity and as a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components such as an IQ demodulator, a division block, a library for complex and floating-point operations, etc. It is able to speed up the implementation time of many complicated algorithms. The library has already been tested using real accelerator signals and the performance achieved is satisfactory. (Orig.)
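
    One of the listed library components, the IQ demodulator, can be sketched in a few lines of host-side code: the sampled probe signal is mixed with a cosine and a sine at the intermediate frequency and low-pass filtered to recover amplitude and phase. The moving-average filter and sampling choices below are generic assumptions and say nothing about the FPGA implementation.

        import numpy as np

        def iq_demodulate(signal, f_if, fs, taps=101):
            """Recover amplitude and phase of a narrowband signal centered at f_if by
            mixing with cos/sin at f_if and applying a moving-average low-pass filter."""
            n = np.arange(len(signal))
            i = signal * np.cos(2 * np.pi * f_if * n / fs)
            q = -signal * np.sin(2 * np.pi * f_if * n / fs)
            lp = np.ones(taps) / taps                          # crude low-pass filter
            I, Q = np.convolve(i, lp, "same"), np.convolve(q, lp, "same")
            return 2 * np.sqrt(I ** 2 + Q ** 2), np.arctan2(Q, I)

        if __name__ == "__main__":
            fs, f_if = 1_000_000.0, 250_000.0                  # 1 MS/s sampling, 250 kHz IF
            n = np.arange(4096)
            probe = 0.8 * np.cos(2 * np.pi * f_if * n / fs + 0.3)   # synthetic cavity probe signal
            amp, ph = iq_demodulate(probe, f_if, fs)
            print(f"amplitude ~ {amp[2048]:.3f}, phase ~ {ph[2048]:.3f} rad")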

  16. Design and implementation of universal mathematical library supporting algorithm development for FPGA based systems in high energy physics experiments

    International Nuclear Information System (INIS)

    Jalmuzna, W.

    2006-02-01

    The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity, ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences. It could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a project for a fast, low-latency digital controller dedicated to the LLRF system in the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems, Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule in the experiment. The device can also be used as a simulator of the cavity and as a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components such as an IQ demodulator, a division block, a library for complex and floating-point operations, etc. It is able to speed up the implementation time of many complicated algorithms. The library has already been tested using real accelerator signals and the performance achieved is satisfactory. (Orig.)

  17. Developing a Novel Hybrid Biogeography-Based Optimization Algorithm for Multilayer Perceptron Training under Big Data Challenge

    Directory of Open Access Journals (Sweden)

    Xun Pu

    2018-01-01

    Full Text Available A Multilayer Perceptron (MLP) is a feedforward neural network model consisting of one or more hidden layers between the input and output layers. MLPs have been successfully applied to solve a wide range of problems in the fields of neuroscience, computational linguistics, and parallel distributed processing. While MLPs are highly successful in solving problems which are not linearly separable, two of the biggest challenges in their development and application are the local-minima problem and the problem of slow convergence under the big data challenge. In order to tackle these problems, this study proposes a Hybrid Chaotic Biogeography-Based Optimization (HCBBO) algorithm for training MLPs for big data analysis and processing. Four benchmark datasets are employed to investigate the effectiveness of HCBBO in training MLPs. The accuracy of the results and the convergence of HCBBO are compared to three well-known heuristic algorithms: (a) Biogeography-Based Optimization (BBO), (b) Particle Swarm Optimization (PSO), and (c) Genetic Algorithms (GA). The experimental results show that training MLPs by using HCBBO is better than the other three heuristic learning approaches for big data processing.

  18. Hybridization properties of long nucleic acid probes for detection of variable target sequences, and development of a hybridization prediction algorithm

    Science.gov (United States)

    Öhrmalm, Christina; Jobs, Magnus; Eriksson, Ronnie; Golbob, Sultan; Elfaitouri, Amal; Benachenhou, Farid; Strømme, Maria; Blomberg, Jonas

    2010-01-01

    One of the main problems in nucleic acid-based techniques for detection of infectious agents, such as influenza viruses, is that of nucleic acid sequence variation. DNA probes, 70-nt long, some including the nucleotide analog deoxyribose-Inosine (dInosine), were analyzed for hybridization tolerance to different amounts and distributions of mismatching bases, e.g. synonymous mutations, in target DNA. Microsphere-linked 70-mer probes were hybridized in 3M TMAC buffer to biotinylated single-stranded (ss) DNA for subsequent analysis in a Luminex® system. When mismatches interrupted contiguous matching stretches of 6 nt or longer, it had a strong impact on hybridization. Contiguous matching stretches are more important than the same number of matching nucleotides separated by mismatches into several regions. dInosine, but not 5-nitroindole, substitutions at mismatching positions stabilized hybridization remarkably well, comparable to N (4-fold) wobbles in the same positions. In contrast to shorter probes, 70-nt probes with judiciously placed dInosine substitutions and/or wobble positions were remarkably mismatch tolerant, with preserved specificity. An algorithm, NucZip, was constructed to model the nucleation and zipping phases of hybridization, integrating both local and distant binding contributions. It predicted hybridization more exactly than previous algorithms, and has the potential to guide the design of variation-tolerant yet specific probes. PMID:20864443
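
    The emphasis on contiguous matching stretches suggests a simple way to score an aligned probe/target pair. The sketch below counts matching runs while treating inosine (I) and N as universal bases; this scoring is only an illustration of the idea, not the published NucZip model.

```python
def matching_stretches(probe, target, universal=("I", "N")):
    """Return the lengths of contiguous matching runs between two aligned
    sequences of equal length. Universal bases match anything."""
    assert len(probe) == len(target)
    runs, current = [], 0
    for p, t in zip(probe.upper(), target.upper()):
        if p == t or p in universal or t in universal:
            current += 1
        else:
            if current:
                runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

probe  = "ACGTIGCTAGGTNACGTACG"
target = "ACGTAGCTAGCTAACGTACG"
runs = matching_stretches(probe, target)
print(runs, max(runs))  # [10, 9] 10 -- long runs favour stable hybridization
```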

  19. Analysis and development of the automated emergency algorithm to control primary to secondary LOCA for SUNPP safety upgrading

    International Nuclear Information System (INIS)

    Kim, V.; Kuznetsov, V.; Balakan, G.; Gromov, G.; Krushynsky, A.; Sholomitsky, S.; Lola, I.

    2007-01-01

    The paper presents the results of a study conducted to support the planned modernization of the South Ukraine nuclear power plant. The objective of the analysis was to develop an automated emergency control algorithm for the primary-to-secondary LOCA accident for SUNPP WWER-1000 safety upgrading. According to the analyses performed in the framework of the safety assessment report, this accident is the most complex to control and makes the largest contribution to the core damage frequency. This is because initial event diagnostics is difficult, emergency control is complicated for personnel, the time available for decision making and actions is limited by the coolant inventory available for make-up, the probability of the steam dump valves on the affected steam generator failing to close after opening is high, and, as a consequence, containment bypass, irretrievable loss of coolant and release of radioactive materials into the environment are possible. Unit design modifications are directed at expanding the capabilities of the safety systems to overcome this accident and to facilitate personnel actions for emergency control. Modification of the safety systems according to the developed algorithm will simplify accident control by personnel, enable control of the ECCS discharge limiting pressure below the opening pressure of the affected steam generator steam dump valve, and decrease the probability of containment bypass sequences. The analysis of the primary-to-secondary LOCA thermal-hydraulics was conducted with RELAP5/Mod 3.2 and involved development of a dedicated analytical model, calculations of various plant response accident scenarios, plant personnel intervention analyses using a full-scale simulator, and development and justification of the emergency control algorithm aimed at minimizing the negative consequences of the primary-to-secondary LOCA (Authors)

  20. Javascript Library for Developing Interactive Micro-Level Animations for Teaching and Learning Algorithms on One-Dimensional Arrays

    Science.gov (United States)

    Végh, Ladislav

    2016-01-01

    The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…

  1. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm.

    Science.gov (United States)

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-04-01

    Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.

  2. Development of a Sequential Restoration Strategy Based on the Enhanced Dijkstra Algorithm for Korean Power Systems

    Directory of Open Access Journals (Sweden)

    Bokyung Goo

    2016-12-01

    Full Text Available When a blackout occurs, it is important to reduce the time for power system restoration to minimize damage. For fast restoration, it is important to reduce the time taken to select generators, transmission lines and transformers. In addition, it is essential that a generator start-up sequence (GSS) be determined to restore the power system. In this paper, we propose the optimal selection of black start units through the generator start-up sequence (GSS) to minimize the restoration time, using generator characteristic data and the enhanced Dijkstra algorithm. At each restoration step, the sequence for the next start-up unit is recalculated to reflect the current system conditions. The proposed method is verified on the empirical Korean power systems.
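
    The graph step behind such a start-up sequence is a shortest-path search over the transmission network. A minimal Dijkstra sketch is shown below; the toy network, node names and line costs are hypothetical and merely stand in for the generator and transmission characteristic data used in the paper.

```python
import heapq

def dijkstra(graph, source):
    """Shortest path cost from source to every node.
    graph: dict node -> list of (neighbour, edge_cost)."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

# Hypothetical network: black-start unit "BS" cranking other units via lines
# whose weights could encode charging time or switching effort.
network = {
    "BS": [("G1", 4.0), ("G2", 7.0)],
    "G1": [("BS", 4.0), ("G2", 2.0), ("G3", 5.0)],
    "G2": [("BS", 7.0), ("G1", 2.0), ("G3", 1.0)],
    "G3": [("G1", 5.0), ("G2", 1.0)],
}
print(dijkstra(network, "BS"))  # {'BS': 0.0, 'G1': 4.0, 'G2': 6.0, 'G3': 7.0}
```

    In the restoration setting, the resulting path costs would feed the choice of the next start-up unit at every step rather than being computed only once.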

  3. Development and evaluation of a scheduling algorithm for parallel hardware tests at CERN

    CERN Document Server

    Galetzka, Michael

    This thesis aims at describing the problem of scheduling, evaluating different scheduling algorithms and comparing them with each other as well as with the current prototype solution. The implementation of the final solution will be delineated, as will the design considerations that led to it. The CERN Large Hadron Collider (LHC) has to deal with unprecedented stored energy, both in its particle beams and its superconducting magnet circuits. This energy could result in major equipment damage and downtime if it is not properly extracted from the machine. Before commissioning the machine with the particle beam, several thousands of tests have to be executed, analyzed and tracked to assess the proper functioning of the equipment and protection systems. These tests access the accelerator's equipment in order to verify the correct behavior of all systems, such as magnets, power converters and interlock controllers. A test could, for example, ramp the magnet to a certain energy level and then provoke an emergency...

  4. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 4: Preliminary nonscanner models and count conversion algorithms

    Science.gov (United States)

    Halyo, Nesim; Choi, Sang H.

    1987-01-01

    Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants needed by the conversion algorithms, although the frequency with which these updates were needed was uncertain. This analysis therefore develops a mathematical model for the conversion of irradiance at the sensor field-of-view (FOV) limiter into data counts, derives from this model two algorithms for the conversion of data counts to irradiance at the sensor FOV aperture, and develops measurement models which account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen since it provided sufficient accuracy with simpler computations.
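
    A gain/offset count conversion of the kind named above is essentially a linear map from counts to irradiance. The sketch below illustrates the idea; the gain, offset and count values are purely illustrative and are not ERBE calibration constants.

```python
def counts_to_irradiance(counts, gain, offset):
    """Linear gain/offset conversion of a raw sensor count to irradiance (W m^-2).
    Inverse of the forward sensor model counts = gain * irradiance + offset."""
    return (counts - offset) / gain

# Illustrative constants; a dynamic sensor model would re-estimate these over time.
gain, offset = 0.85, 120.0          # counts per (W m^-2) and dark-count offset
for raw in (120, 500, 1010):
    print(raw, "->", round(counts_to_irradiance(raw, gain, offset), 2), "W m^-2")
# 120 -> 0.0, 500 -> 447.06, 1010 -> 1047.06
```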

  5. Development of a sensorimotor algorithm able to deal with unforeseen pushes and its implementation based on VHDL

    OpenAIRE

    Lezcano Giménez, Pablo Gabriel

    2015-01-01

    Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of my thesis which concludes my Bachelor Degree in the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It encloses the overall work I did in the Neurorobotics Research Laboratory from the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. This thesis is focused on the field of robotics, sp...

  6. DEVELOPMENT OF A PEDESTRIAN INDOOR NAVIGATION SYSTEM BASED ON MULTI-SENSOR FUSION AND FUZZY LOGIC ESTIMATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Y. C. Lai

    2015-05-01

    Full Text Available This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation, which means that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical systems (MEMS). There are two types of IMU module, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors as a result of manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking amount and strength per step are then fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation, which runs on a self-developed APP. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the APP of the proposed navigation system
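
    The dead reckoning update described here reduces to accumulating step vectors built from the estimated step length and the integrated heading. A minimal sketch under those assumptions follows; the step lengths and headings are invented for illustration.

```python
import math

def dead_reckoning(start, steps):
    """Accumulate 2D position from per-step (length_m, heading_rad) pairs.
    The heading is assumed to come from integrating the gyroscope angular rate."""
    x, y = start
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        track.append((x, y))
    return track

# Illustrative walk: four ~0.7 m steps east, then a 90 degree turn and three steps north.
steps = [(0.7, 0.0)] * 4 + [(0.7, math.pi / 2)] * 3
for px, py in dead_reckoning((0.0, 0.0), steps):
    print(f"({px:.2f}, {py:.2f})")   # ends near (2.80, 2.10)
```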

  7. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    Science.gov (United States)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation, which means that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical systems (MEMS). There are two types of IMU module, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors as a result of manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking amount and strength per step are then fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation, which runs on a self-developed APP. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the APP of the proposed navigation system to extend its

  8. Development of an algorithm as an implementation model for a wound management formulary across a UK health economy.

    Science.gov (United States)

    Stephen-Haynes, J

    2013-12-01

    This article outlines a strategic process for the evaluation of wound management products and the development of an algorithm as an implementation model for wound management. Wound management is an increasingly complex process given the variety of interactive dressings and other devices available. This article discusses the procurement process, access to wound management dressings and the use of wound management formularies within the UK. We conclude that the current commissioners of tissue viability within healthcare organisations need to adopt a proactive approach to ensure appropriate formulary evaluation and product selection, in order to achieve the most beneficial clinical and financial outcomes.

  9. Development of Categories of Colour Similarity Estimation Algorithm for Persons with Low Visual Capability Using Artificial Neural Network

    Science.gov (United States)

    Fujisawa, Shoichiro; Kurozumi, Ryota; Mitani, Seiji; Sueda, Osamu

    In Japan, it is reported that about 90% of persons with visual impairment are persons with low visual capability. However, neither a database on low visual capability nor a standard for easily comprehensible presentation of visual information exists yet. Therefore, the categories of colour similarity of persons with low visual capability are measured. Measuring many colour combinations, however, places a burden on the test subject. In this research, the development of an algorithm for estimating the categories of colour similarity is therefore proposed. By applying the generalization capability of an artificial neural network, the distribution of the categories of colour similarity can be obtained from a limited set of measured colour combinations.

  10. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard; Braun, James E.

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  11. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    Science.gov (United States)

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time as well, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach as well as the inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to solve the resulting problem. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonability and effectiveness. PMID:25431584
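
    For readers unfamiliar with the artificial bee colony metaheuristic used here, the sketch below shows its core move: perturbing one dimension of a food source toward a randomly chosen neighbour and keeping the change only if it improves the objective. The sphere objective and all parameter values are illustrative assumptions, not the paper's decoupling formulation, and the onlooker-bee phase is omitted for brevity.

```python
import random

def abc_minimize(objective, dim, bounds, n_sources=10, iters=200):
    """Bare-bones artificial bee colony: employed-bee moves plus scout restarts."""
    lo, hi = bounds
    sources = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    fitness = [objective(s) for s in sources]
    trials = [0] * n_sources
    for _ in range(iters):
        for i in range(n_sources):
            k = random.choice([j for j in range(n_sources) if j != i])
            d = random.randrange(dim)
            candidate = sources[i][:]
            # Move one coordinate toward/away from a random neighbour.
            candidate[d] += random.uniform(-1, 1) * (sources[i][d] - sources[k][d])
            candidate[d] = min(max(candidate[d], lo), hi)
            f = objective(candidate)
            if f < fitness[i]:
                sources[i], fitness[i], trials[i] = candidate, f, 0
            else:
                trials[i] += 1
            if trials[i] > 20:   # scout bee: abandon a stagnant source
                sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fitness[i], trials[i] = objective(sources[i]), 0
    best = min(range(n_sources), key=lambda i: fitness[i])
    return sources[best], fitness[best]

sphere = lambda x: sum(v * v for v in x)
print(abc_minimize(sphere, dim=3, bounds=(-5.0, 5.0)))  # near ([0, 0, 0], 0.0)
```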

  12. Pseudocode Interpreter (Pseudocode Integrated Development Environment with Lexical Analyzer and Syntax Analyzer using Recursive Descent Parsing Algorithm

    Directory of Open Access Journals (Sweden)

    Christian Lester D. Gimeno

    2017-11-01

    Full Text Available This research study focused on the development of software that helps students design, write, validate and run their pseudocode in a semi-Integrated Development Environment (IDE) instead of manually writing it on a piece of paper. Specifically, the study aimed to develop a lexical analyzer or lexer, a syntax analyzer or parser using a recursive descent parsing algorithm, and an interpreter. The lexical analyzer reads the pseudocode source as a sequence of symbols or characters forming lexemes. The lexemes are then analyzed by the lexer, which matches patterns for valid tokens and passes them to the syntax analyzer or parser. The syntax analyzer or parser takes those valid tokens and builds meaningful commands using the recursive descent parsing algorithm in the form of an abstract syntax tree. The generation of the abstract syntax tree is based on the grammar rules created by the researcher and expressed in Extended Backus-Naur Form. The interpreter takes the generated abstract syntax tree and performs the evaluation or interpretation to produce the pseudocode output. The software was evaluated using white-box testing by several ICT professionals and black-box testing by several computer science students, based on the International Organization for Standardization (ISO) 9126 software quality standards. The overall results of the evaluation, both white-box and black-box, were described as “Excellent” in terms of functionality, reliability, usability, efficiency, maintainability and portability.
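
    As a concrete illustration of recursive descent, the sketch below parses and evaluates simple arithmetic expressions with one method per grammar rule. The tiny expression grammar stands in for the study's pseudocode grammar, which is not reproduced here.

```python
import re

# Grammar: expr := term (('+'|'-') term)* ; term := factor (('*'|'/') factor)* ;
#          factor := NUMBER | '(' expr ')'
def tokenize(src):
    return re.findall(r"\d+\.?\d*|[()+\-*/]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def expr(self):        # expr := term (('+'|'-') term)*
        value = self.term()
        while self.peek() in ("+", "-"):
            value = value + self.term() if self.eat() == "+" else value - self.term()
        return value

    def term(self):        # term := factor (('*'|'/') factor)*
        value = self.factor()
        while self.peek() in ("*", "/"):
            value = value * self.factor() if self.eat() == "*" else value / self.factor()
        return value

    def factor(self):      # factor := NUMBER | '(' expr ')'
        if self.peek() == "(":
            self.eat()
            value = self.expr()
            self.eat()     # consume ')'
            return value
        return float(self.eat())

print(Parser(tokenize("2 + 3 * (4 - 1)")).expr())  # 11.0
```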

  13. Development of a radiation track structure clustering algorithm for the prediction of DNA DSB yields and radiation induced cell death in Eukaryotic cells.

    Science.gov (United States)

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2015-04-21

    The preliminary framework of a combined radiobiological model is developed and calibrated in the current work. The model simulates the production of individual cells forming a tumour, the spatial distribution of individual ionization events (using Geant4-DNA) and the stochastic biochemical repair of DNA double strand breaks (DSBs) leading to the prediction of survival or death of individual cells. In the current work, we expand upon a previously developed tumour generation and irradiation model to include a stochastic ionization damage clustering and DNA lesion repair model. The Geant4 code enabled the positions of each ionization event in the cells to be simulated and recorded for analysis. An algorithm was developed to cluster the ionization events in each cell into simple and complex double strand breaks. The two lesion kinetic (TLK) model was then adapted to predict DSB repair kinetics and the resultant cell survival curve. The parameters in the cell survival model were then calibrated using experimental cell survival data of V79 cells after low energy proton irradiation. A monolayer of V79 cells was simulated using the tumour generation code developed previously. The cells were then irradiated by protons with mean energies of 0.76 MeV and 1.9 MeV using a customized version of Geant4. By replicating the experimental parameters of a low energy proton irradiation experiment and calibrating the model with two sets of data, the model is now capable of predicting V79 cell survival after low-energy proton irradiation. The cell survival probability is calculated for each cell in the geometric tumour model developed in the current work. This model uses fundamental measurable microscopic quantities such as genome length rather than macroscopic radiobiological quantities such as alpha/beta ratios. This means that the model can be theoretically used under a wide range of conditions with a single set of input parameters once calibrated for a given cell line.

  14. Quantitative morphometric analysis of hepatocellular carcinoma: development of a programmed algorithm and preliminary application.

    Science.gov (United States)

    Yap, Felix Y; Bui, James T; Knuttinen, M Grace; Walzer, Natasha M; Cotler, Scott J; Owens, Charles A; Berkes, Jamie L; Gaba, Ron C

    2013-01-01

    The quantitative relationship between tumor morphology and malignant potential has not been explored in liver tumors. We designed a computer algorithm to analyze shape features of hepatocellular carcinoma (HCC) and tested feasibility of morphologic analysis. Cross-sectional images from 118 patients diagnosed with HCC between 2007 and 2010 were extracted at the widest index tumor diameter. The tumor margins were outlined, and point coordinates were input into a MATLAB (MathWorks Inc., Natick, Massachusetts, USA) algorithm. Twelve shape descriptors were calculated per tumor: the compactness, the mean radial distance (MRD), the RD standard deviation (RDSD), the RD area ratio (RDAR), the zero crossings, entropy, the mean Feret diameter (MFD), the Feret ratio, the convex hull area (CHA) and perimeter (CHP) ratios, the elliptic compactness (EC), and the elliptic irregularity (EI). The parameters were correlated with the levels of alpha-fetoprotein (AFP) as an indicator of tumor aggressiveness. The quantitative morphometric analysis was technically successful in all cases. The mean parameters were as follows: compactness 0.88±0.086, MRD 0.83±0.056, RDSD 0.087±0.037, RDAR 0.045±0.023, zero crossings 6±2.2, entropy 1.43±0.16, MFD 4.40±3.14 cm, Feret ratio 0.78±0.089, CHA 0.98±0.027, CHP 0.98±0.030, EC 0.95±0.043, and EI 0.95±0.023. MFD and RDAR provided the widest value range for the best shape discrimination. The larger tumors were less compact, more concave, and less ellipsoid than the smaller tumors (P < 0.0001). AFP-producing tumors displayed greater morphologic irregularity based on several parameters, including compactness, MRD, RDSD, RDAR, entropy, and EI (P < 0.05 for all). Computerized HCC image analysis using shape descriptors is technically feasible. Aggressively growing tumors have wider diameters and more irregular margins. Future studies will determine further clinical applications for this morphologic analysis.
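
    Several of the descriptors listed above follow directly from the outlined contour points. The sketch below computes compactness and the normalized mean radial distance for a closed polygon; the near-circular test contour is synthetic, and the exact normalization conventions of the paper may differ.

```python
import math

def shape_descriptors(points):
    """Compactness (4*pi*A / P^2) and normalized mean radial distance for a
    closed polygon given as a list of (x, y) boundary points."""
    n = len(points)
    perimeter = sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))
    area = 0.5 * abs(sum(points[i][0] * points[(i + 1) % n][1]
                         - points[(i + 1) % n][0] * points[i][1] for i in range(n)))
    compactness = 4 * math.pi * area / perimeter ** 2   # 1.0 for a perfect circle
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    radii = [math.dist(p, (cx, cy)) for p in points]
    mean_radial = (sum(radii) / n) / max(radii)
    return compactness, mean_radial

# Synthetic near-circular 'tumour margin' with one mild lobulation.
contour = [((10 + (1.5 if 20 < k < 28 else 0)) * math.cos(2 * math.pi * k / 64),
            (10 + (1.5 if 20 < k < 28 else 0)) * math.sin(2 * math.pi * k / 64))
           for k in range(64)]
print(shape_descriptors(contour))  # roughly (0.95, 0.89); the lobulation lowers both
```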

  15. Placental complications after a previous cesarean section

    OpenAIRE

    Milošević Jelena; Lilić Vekoslav; Tasić Marija; Radović-Janošević Dragana; Stefanović Milan; Antić Vladimir

    2009-01-01

    Introduction The incidence of cesarean section has been rising in the past 50 years. With the increased number of cesarean sections, the number of pregnancies with a previous cesarean section rises as well. The aim of this study was to establish the influence of a previous cesarean section on the development of placental complications: placenta previa, placental abruption and placenta accreta, as well as to determine the influence of the number of previous cesarean sections on the complic...

  16. Additional operations in algebra of structural numbers for control algorithm development

    Directory of Open Access Journals (Sweden)

    Morhun A.V.

    2016-12-01

    Full Text Available Structural numbers and the algebra of structural numbers, owing to their simplicity of representation, flexibility and existing algebraic operations, are a powerful tool for a wide range of applications. In autonomous power supply systems and systems with distributed generation (Micro Grid), the mathematical apparatus of structural numbers can be used effectively to calculate the parameters of the operating modes of electric energy consumption. The purpose of the article is to present additional operations for the algebra of structural numbers. It is proposed to extend the standard algebra with additional operations and modifications of the existing ones in order to expand the scope of their use, namely to construct flexible, adaptive control system algorithms. This is achieved by the possibility of considering each individual component of the system with its own parameters while providing easy management of the entire system and of each individual component. Thus, structural numbers and the extended algebra are a promising line of research, and further study is required.

  17. Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru; Hoke, Andy; Asano, Marc; Ueda, Reid; Nepal, Shaili

    2017-06-15

    As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced order models in OpenDSS, and subsequently implemented in the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the chosen feeders could be analyzed.

  18. Development of ice floe tracker algorithm to measure Lagrangian statistics in the eastern Greenland coast

    Science.gov (United States)

    Lopez, Rosalinda; Wilhelmus, Monica M.; Schodlok, Michael; Klein, Patrice

    2017-11-01

    Sea ice export through Fram Strait is a key component of the Arctic climate system. The East Greenland Current (EGC) carries most of the sea ice southwards until it melts. Lagrangian methods using sea ice buoys have been used to map ice features in polar regions. However, their spatial and temporal coverage is limited. Satellite data can provide a better tool to map sea ice flow and its variability. Here, an automated sea ice floe detection algorithm uses ice floes as tracers for surface ocean currents. We process Moderate Resolution Imaging Spectroradiometer satellite images to track ice floes (length scale 5-10 km) in the north-eastern Greenland Sea region. Our MATLAB-based routines effectively filter out clouds and adaptively modify the images to segment and identify ice floes. Ice floes were tracked based on persistent surface features common in successive images throughout 2016. Their daily centroid locations were extracted, and the resulting trajectories were used to describe surface circulation and its variability using differential kinematic parameters. We will discuss the application of this method to a longer time series and larger spatial coverage. This enables us to derive the inter-annual variability of mesoscale features along the eastern coast of Greenland. Supported by UCR Mechanical Engineering Departmental Fellowship.

  19. Development and applications of various optimization algorithms for diesel engine combustion and emissions optimization

    Science.gov (United States)

    Ogren, Ryan M.

    For this work, Hybrid PSO-GA and Artificial Bee Colony Optimization (ABC) algorithms are applied to the optimization of experimental diesel engine performance, to meet Environmental Protection Agency, off-road, diesel engine standards. This work is the first to apply ABC optimization to experimental engine testing. All trials were conducted at partial load on a four-cylinder, turbocharged, John Deere engine using neat-Biodiesel for PSO-GA and regular pump diesel for ABC. Key variables were altered throughout the experiments, including, fuel pressure, intake gas temperature, exhaust gas recirculation flow, fuel injection quantity for two injections, pilot injection timing and main injection timing. Both forms of optimization proved effective for optimizing engine operation. The PSO-GA hybrid was able to find a superior solution to that of ABC within fewer engine runs. Both solutions call for high exhaust gas recirculation to reduce oxide of nitrogen (NOx) emissions while also moving pilot and main fuel injections to near top dead center for improved tradeoffs between NOx and particulate matter.

  20. The development of small-scale mechanization means positioning algorithm using radio frequency identification technology in industrial plants

    Science.gov (United States)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for the construction of positioning and control systems for small mechanization in industrial plants based on radio frequency identification methods, which will be the basis for creating highly efficient intelligent systems for controlling the product movement in industrial enterprises. The main standards that are applied in the field of product movement control automation and radio frequency identification are considered. The article reviews modern publications and automation systems for the control of product movement developed by domestic and foreign manufacturers. It describes the developed algorithm for positioning of small-scale mechanization means in an industrial enterprise. Experimental studies in laboratory and production conditions have been conducted and described in the article.

  1. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First, we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
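
    The iteration alluded to at the end of the abstract is the classic power-iteration form of PageRank, repeated until the values stop changing by more than a tolerance. A minimal sketch on a hypothetical four-page link list:

```python
def pagerank(links, damping=0.85, tol=1e-8, max_iter=100):
    """links: dict page -> list of pages it links to. Returns page -> PageRank."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                      # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        # Stop once the largest change falls below the tolerance.
        if max(abs(new_rank[p] - rank[p]) for p in pages) < tol:
            return new_rank
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(links))  # "C" receives the largest value in this toy graph
```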

  2. Analysis and Classification of Stride Patterns Associated with Children Development Using Gait Signal Dynamics Parameters and Ensemble Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Meihong Wu

    2016-01-01

    Full Text Available Measuring stride variability and dynamics in children is useful for the quantitative study of gait maturation and neuromotor development in childhood and adolescence. In this paper, we computed the sample entropy (SampEn) and average stride interval (ASI) parameters to quantify the stride series of 50 gender-matched child participants in three age groups. We also normalized the SampEn and ASI values by leg length and body mass for each participant, respectively. Results show that the original and normalized SampEn values consistently decrease beyond the significance level of the Mann-Whitney U test (p<0.01) in children of 3–14 years old, which indicates that stride irregularity is significantly ameliorated with body growth. The original and normalized ASI values also change significantly when comparing any two of the young (aged 3–5 years), middle (aged 6–8 years), and elder (aged 10–14 years) groups of children. Such results suggest that healthy children may better modulate their gait cadence rhythm with the development of their musculoskeletal and neurological systems. In addition, the AdaBoost.M2 and Bagging algorithms were used to effectively distinguish the children’s gait patterns. These ensemble learning algorithms both provided excellent gait classification results in terms of overall accuracy (≥90%), recall (≥0.8), and precision (≥0.8077).
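
    Sample entropy, the regularity measure used above, counts how often patterns of length m that match within a tolerance r still match when extended to length m+1. The simplified brute-force sketch below illustrates the computation; the synthetic stride series and the m and r choices are for illustration only.

```python
import math
import random

def sample_entropy(series, m=2, r=None):
    """Simplified SampEn: -ln(A/B), where B counts pairs of length-m templates
    within tolerance r (Chebyshev distance) and A counts the same for m+1."""
    if r is None:
        mean = sum(series) / len(series)
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in series) / len(series))

    def pair_count(length):
        templates = [series[i:i + length] for i in range(len(series) - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits

    b, a = pair_count(m), pair_count(m + 1)
    return -math.log(a / b)

# Synthetic 'stride interval' series: a regular rhythm plus a little jitter.
random.seed(0)
strides = [1.0 + 0.05 * math.sin(0.3 * i) + random.gauss(0, 0.005) for i in range(150)]
print(round(sample_entropy(strides, m=2), 3))  # lower values indicate a more regular gait
```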

  3. Development of an algorithm for the biochemical evolution of uranium mill tailings

    International Nuclear Information System (INIS)

    Snodgrass, W.J.; Nicholson, R.V.; Garisto, N.C.

    1985-01-01

    An analysis of relevant time scales for modelling the geochemical evolution of uranium mill tailings (seconds to millions of years) is presented. It is suggested that the chemical retention time of pore water is an appropriate parameter for assessing the interaction of transport and kinetics in formulating an algorithm for the evolution of uranium mill tailings. Two special sub-studies are presented. In one, a reaction-transport model is used to examine the sensitivity of pyrite oxidation to kinetics and transport in the unsaturated zone. The results suggest that the oxygen flux (acid production flux) from a column of tailings is most sensitive to the diffusion coefficient and relatively insensitive to the pyrite oxidation rate constant. This suggests priorities for research (measurement of diffusion coefficient) and management strategies (low diffusion cover). In a second study, reaction code pathway calculations of an equilibrium mineral assemblage for Fe-Ca-Al-S-CO2-H2O are made. The evolution of minerals along an oxygen-added reaction pathway is quite sensitive to the initial ratio of goethite and pyrite present. A comparison with available field data suggests that the model simulates the expected trend of pH, but that the simulated trend of pe is in error. This uncertainty may result from the presence of pseudo-stable phases or the lack of equilibrium with pyrite. An approach for coupling these processes is to titrate an equilibrium mineral assemblage with pyrite oxidation products and use results from the reaction-diffusion model to ascribe a time dimension to the reaction pathway

  4. SU-G-BRA-02: Development of a Learning Based Block Matching Algorithm for Ultrasound Tracking in Radiotherapy

    International Nuclear Information System (INIS)

    Shepard, A; Bednarz, B

    2016-01-01

    Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA
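
    The localized block matching step described here slides the reference block over a small search window and keeps the offset with the highest normalized cross-correlation. A minimal NumPy sketch with synthetic patches follows; the block and search-window sizes are arbitrary choices for illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def block_match(reference_block, frame, top_left, search=5):
    """Find the offset (dy, dx) within +/- search pixels of top_left that
    maximizes NCC between reference_block and the corresponding frame block."""
    h, w = reference_block.shape
    y0, x0 = top_left
    best, best_offset = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            score = ncc(reference_block, frame[y:y + h, x:x + w])
            if score > best:
                best, best_offset = score, (dy, dx)
    return best_offset, best

# Synthetic test: take a patch from a frame, shift the frame by (2, -3), re-find it.
rng = np.random.default_rng(1)
frame = rng.normal(size=(64, 64))
patch = frame[20:28, 30:38].copy()
shifted = np.roll(np.roll(frame, 2, axis=0), -3, axis=1)
print(block_match(patch, shifted, top_left=(20, 30)))  # ((2, -3), ~1.0)
```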

  5. Onboard Generic Fault Detection Algorithm Development and Demonstration for VTOL sUAS, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In the proposed SBIR study, Empirical Systems Aerospace, Inc. (ESAero) will develop a fault detection and identification avionics system implementing a generic...

  6. Novel algorithm for management of acute epididymitis.

    Science.gov (United States)

    Hongo, Hiroshi; Kikuchi, Eiji; Matsumoto, Kazuhiro; Yazawa, Satoshi; Kanao, Kent; Kosaka, Takeo; Mizuno, Ryuichi; Miyajima, Akira; Saito, Shiro; Oya, Mototsugu

    2017-01-01

    To identify predictive factors for the severity of epididymitis and to develop an algorithm guiding decisions on how to manage patients with this disease. A retrospective study was carried out on 160 epididymitis patients at Keio University Hospital. We classified cases into severe and non-severe groups, and compared clinical findings at the first visit. Based on statistical analyses, we developed an algorithm for predicting severe cases. We validated the algorithm by applying it to an external cohort of 96 patients at Tokyo Medical Center. The efficacy of the algorithm was investigated by a decision curve analysis. A total of 19 patients (11.9%) had severe epididymitis. Patient characteristics including older age, previous history of diabetes mellitus and fever, as well as laboratory data including a higher white blood cell count, C-reactive protein level and blood urea nitrogen level were independently associated with severity. A predictive algorithm was created with the ability to classify epididymitis cases into three risk groups. In the Keio University Hospital cohort, 100%, 23.5%, and 3.4% of cases in the high-, intermediate-, and low-risk groups, respectively, became severe. The specificity of the algorithm for predicting severe epididymitis proved to be 100% in the Keio University Hospital cohort and 98.8% in the Tokyo Medical Center cohort. The decision curve analysis also showed the high efficacy of the algorithm. This algorithm might aid in decision-making for the clinical management of acute epididymitis. © 2016 The Japanese Urological Association.

  7. Assessment of Canopy Chlorophyll Content Retrieval in Maize and Soybean: Implications of Hysteresis on the Development of Generic Algorithms

    Directory of Open Access Journals (Sweden)

    Yi Peng

    2017-03-01

    Full Text Available Canopy chlorophyll content (Chl) closely relates to plant photosynthetic capacity, nitrogen status and productivity. The goal of this study is to develop remote sensing techniques for accurate estimation of canopy Chl during the entire growing season without re-parameterization of algorithms for two contrasting crop species, maize and soybean. These two crops represent different biochemical mechanisms of photosynthesis, leaf structure and canopy architecture. The relationships between canopy Chl and reflectance, collected at close range and resampled to bands of the Multi Spectral Instrument (MSI) aboard Sentinel-2, were analyzed in samples taken across the entirety of the growing seasons in three irrigated and rainfed sites located in eastern Nebraska between 2001 and 2005. Crop phenology was a factor strongly influencing the reflectance of both maize and soybean. Substantial hysteresis of the reflectance vs. canopy Chl relationship existed between the vegetative and reproductive stages. The effect of the hysteresis on vegetation indices (VI), applied for canopy Chl estimation, depended on the bands used and their formulation. The hysteresis greatly affected the accuracy of canopy Chl estimation by widely-used VIs with near infrared (NIR) and red reflectance (e.g., normalized difference vegetation index (NDVI), enhanced vegetation index (EVI) and simple ratio (SR)). VIs that use red edge and NIR bands (e.g., red edge chlorophyll index (CIred edge), red edge NDVI and the MERIS terrestrial chlorophyll index (MTCI)) were minimally affected by crop phenology (i.e., they exhibited little hysteresis) and were able to accurately estimate canopy Chl in both crops without algorithm reparameterization and, thus, were found to be the best candidates for generic algorithms to estimate crop Chl using the surface reflectance products of MSI Sentinel-2.
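
    The index formulations contrasted above are simple band combinations. The sketch below computes NDVI and the red-edge chlorophyll index from surface reflectances; the reflectance values are invented numbers and the Sentinel-2 band names are used only as labels.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

def ci_red_edge(nir, red_edge):
    """Red-edge chlorophyll index: NIR / red-edge - 1."""
    return nir / red_edge - 1.0

# Illustrative surface reflectances (unitless, 0-1), labelled with Sentinel-2 bands.
sample = {"B4_red": 0.04, "B5_red_edge": 0.12, "B8_nir": 0.45}
print("NDVI       :", round(ndvi(sample["B8_nir"], sample["B4_red"]), 3))              # 0.837
print("CIred-edge :", round(ci_red_edge(sample["B8_nir"], sample["B5_red_edge"]), 3))  # 2.75
```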

  8. Applying Advances in GPM Radiometer Intercalibration and Algorithm Development to a Long-Term TRMM/GPM Global Precipitation Dataset

    Science.gov (United States)

    Berg, W. K.

    2016-12-01

    The Global Precipitation Mission (GPM) Core Observatory, which was launched in February of 2014, provides a number of advances for satellite monitoring of precipitation including a dual-frequency radar, high frequency channels on the GPM Microwave Imager (GMI), and coverage over middle and high latitudes. The GPM concept, however, is about producing unified precipitation retrievals from a constellation of microwave radiometers to provide approximately 3-hourly global sampling. This involves intercalibration of the input brightness temperatures from the constellation radiometers, development of an apriori precipitation database using observations from the state-of-the-art GPM radiometer and radars, and accounting for sensor differences in the retrieval algorithm in a physically-consistent way. Efforts by the GPM inter-satellite calibration working group, or XCAL team, and the radiometer algorithm team to create unified precipitation retrievals from the GPM radiometer constellation were fully implemented into the current version 4 GPM precipitation products. These include precipitation estimates from a total of seven conical-scanning and six cross-track scanning radiometers as well as high spatial and temporal resolution global level 3 gridded products. Work is now underway to extend this unified constellation-based approach to the combined TRMM/GPM data record starting in late 1997. The goal is to create a long-term global precipitation dataset employing these state-of-the-art calibration and retrieval algorithm approaches. This new long-term global precipitation dataset will incorporate the physics provided by the combined GPM GMI and DPR sensors into the apriori database, extend prior TRMM constellation observations to high latitudes, and expand the available TRMM precipitation data to the full constellation of available conical and cross-track scanning radiometers. This combined TRMM/GPM precipitation data record will thus provide a high-quality high

  9. Algorithmic developments and qualification of the ERANOS nuclear code and data system for the characterization of fast neutron reactors

    International Nuclear Information System (INIS)

    Rimpault, G.

    2003-09-01

    In this report, the author discusses the algorithmic and methodological developments in the field of nuclear reactor physics, and more particularly the developments of the ERALIB1/ERANOS nuclear code and data system for the calculation of core critical mass and power of sodium-cooled fast neutron reactors (Phenix and Super Phenix), and of the CAPRA 4/94 core. After a brief recall of nuclear data and methods used to determine critical masses and powers, the author discusses the interpretation of start-up experiments performed on Super-Phenix. The methodology used to characterize the uncertainties of these parameters is then applied to the calculation of the Super-Phenix critical mass and power distribution. He presents the approach chosen to define the validity domain of the ERANOS form

  10. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  11. ON THE USE OF LYTLE’S ALGORITHM FOR SOLVING TRAVELING SALESMAN PROBLEM AT DEVELOPING SUBURBAN ROUTE

    Directory of Open Access Journals (Sweden)

    S. Kantsedal

    2012-01-01

    Full Text Available Lytle’s algorithm, proposed for the exact solution of the Traveling Salesman Problem, is described. Statistical characteristics of the solution time of some problems and their modifications with Lytle’s algorithm are given. On the basis of the results obtained, limits for the practical application of the algorithm in the preparation of the route network are given.

  12. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    Energy Technology Data Exchange (ETDEWEB)

    Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Feist, Chris [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  13. Developing a b-tagging algorithm using soft muons at level-3 for the DO detector at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Das, Mayukh [Louisiana Tech. U.

    2005-01-01

    The current data-taking phase of the DØ detector at Fermilab, called Run II, is designed to aid the search for the Higgs boson. The neutral Higgs is postulated to have a mass of 117 GeV. One of the channels promising for the presence of this hypothetical particle is the decay of a b-quark into a muon. The process of identifying a b-quark in a jet using a muon as a reference is b-tagging with a muon tag. At the current data-taking and analysis rate, it would take a long time to identify valid events. The triggering mechanism of the experiment, consisting of 3 levels of combined hardware, firmware and software, writes physics events at a rate of 50 Hz to data disks, with Level-3 alone accounting for the reduction from 1 kHz to 50 Hz. This large rejection is achieved through algorithms implemented in the search for key physics processes. The work presented in this dissertation is the development of a fast b-tagging algorithm using central-matched muons, called L3FBTagMU. Additional tools such as impact parameter tracks and calorimeter jets have been used to tag b-jets. The dR, or the differential increment in cone radius, is the most significant variable introduced. Plots within thresholds of dR for both Z → bb Monte Carlo and monitor stream data show similar efficiency trends when checked against other parameters. The differential efficiencies saturate for dR within the 0.5 to 0.7 range. Differential bins of 0.1 intervals project an overall efficiency of tagging a b-jet in any event of 17.25 in data. This is in good agreement with theory. The algorithm is currently running online and offline through the DØ database repository. This work is primarily used by the b-id, B-Physics and Higgs Physics groups for their physics analyses, wherein the above b-tagging efficiency serves as a crucial tool. The prospect of optimizing the physics potential using this algorithm is very promising for current and future analyses.

  14. Development of new tsunami detection algorithms for high frequency radars and application to tsunami warning in British Columbia, Canada

    Science.gov (United States)

    Grilli, S. T.; Guérin, C. A.; Shelby, M. R.; Grilli, A. R.; Insua, T. L.; Moran, P., Jr.

    2016-12-01

    A High-Frequency (HF) radar was installed by Ocean Networks Canada in Tofino, BC, to detect tsunamis from far- and near-field seismic sources; in particular, from the Cascadia Subduction Zone. This HF radar can measure ocean surface currents up to a 70-85 km range, depending on atmospheric conditions, based on the Doppler shift they cause in ocean waves at the Bragg frequency. In earlier work, we showed that tsunami currents must be at least 0.15 m/s to be directly detectable by an HF radar, when considering environmental noise and background currents (from tide/mesoscale circulation). This limits a direct tsunami detection to shallow water areas where currents are sufficiently strong due to wave shoaling and, hence, to the continental shelf. It follows that, in locations with a narrow shelf, warning times using a direct inversion method will be small. To detect tsunamis in deeper water, beyond the continental shelf, we proposed a new algorithm that does not require directly inverting currents, but instead is based on observing changes in patterns of spatial correlations of the raw radar signal between two radar cells located along the same wave ray, after time is shifted by the tsunami propagation time along the ray. A pattern change will indicate the presence of a tsunami. We validated this new algorithm for idealized tsunami wave trains propagating over a simple seafloor geometry in a direction normally incident to shore. Here, we further develop, extend, and validate the algorithm for realistic case studies of seismic tsunami sources impacting Vancouver Island, BC. Tsunami currents, computed with a state-of-the-art long wave model, are spatially averaged over cells aligned along individual wave rays, located within the radar sweep area, obtained by solving the wave geometric optics equation; for long waves, such rays and tsunami propagation times along those are only a function of the seafloor bathymetry, and hence can be precalculated for different incident tsunami

  15. The Development of a Hybrid EnKF-3DVAR Algorithm for Storm-Scale Data Assimilation

    Directory of Open Access Journals (Sweden)

    Jidong Gao

    2013-01-01

    Full Text Available A hybrid 3DVAR-EnKF data assimilation algorithm is developed based on 3DVAR and ensemble Kalman filter (EnKF) programs within the Advanced Regional Prediction System (ARPS). The hybrid algorithm uses the extended alpha control variable approach to combine the static and ensemble-derived flow-dependent forecast error covariances. The hybrid variational analysis is performed using an equal weighting of static and flow-dependent error covariance as derived from ensemble forecasts. The method is first applied to the assimilation of simulated radar data for a supercell storm. Results obtained using 3DVAR (with static covariance entirely), hybrid 3DVAR-EnKF, and the EnKF are compared. When data from a single radar are used, the EnKF method provides the best results for the model dynamic variables, while the hybrid method provides the best results for hydrometeor-related variables in terms of rms errors. Although storm structures can be established reasonably well using 3DVAR, the rms errors are generally worse than those from the other two methods. With two radars, the results from 3DVAR are closer to those from EnKF. Our tests indicate that the hybrid scheme can reduce the storm spin-up time because it fits the observations, especially the reflectivity observations, better than the EnKF and the 3DVAR at the beginning of the assimilation cycles.
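    As a rough illustration of the hybrid idea, the sketch below blends a static background-error covariance with an ensemble-derived one using the equal weighting described above; the covariance localization and the extended alpha control-variable machinery of the actual ARPS hybrid are omitted, and all names are illustrative.

```python
import numpy as np

def hybrid_covariance(b_static, ensemble, w_static=0.5):
    """Blend a static background-error covariance with a flow-dependent
    covariance estimated from ensemble forecast perturbations.
    ensemble: array of shape (n_members, n_state).
    """
    perturbations = ensemble - ensemble.mean(axis=0)
    b_ensemble = perturbations.T @ perturbations / (ensemble.shape[0] - 1)
    return w_static * b_static + (1.0 - w_static) * b_ensemble
```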

  16. Development of a signal-analysis algorithm for the ZEUS transition-radiation detector under application of a neural network

    International Nuclear Information System (INIS)

    Wollschlaeger, U.

    1992-07-01

    The aim of this thesis was to develop a procedure for analysing the data of the transition-radiation detector at ZEUS. For this, a neural network was applied, and it was first studied which separation power between electrons and pions can be reached by this procedure. It was shown that, within the error limits, neural nets yield results as good as standard algorithms (total charge, cluster analysis). At an electron efficiency of 90%, pion contaminations in the range of 1%-2% were reached. Furthermore, it could be confirmed that neural networks can be considered robust and relatively insensitive to external perturbations for the present application field. For application in the experiment, the time behaviour is important in addition to the separation power. The requirement to keep dead times small did not allow the application of standard methods. The time available for the signal analysis was estimated by a simulation. To test the processing time of a neural network, the corresponding algorithm was subsequently implemented in assembler code for the digital signal processor DSP56001. (orig./HSI) [de

  17. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

    Science.gov (United States)

    Won, Jihye; Park, Kwan-Dong

    2015-04-01

    Real-time PPP-RTK positioning algorithms were developed for the purpose of obtaining precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products, while the ionospheric delay was removed through an ionosphere-free combination and the tropospheric delay was either handled using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error dropped below 10 cm within 5 minutes, and the error stayed below 10 cm even after the vehicle started moving. When the IGS-RTS product and the GPT model were used instead of the IGS precise product, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved at around 6 minutes.

  18. Disaggregating reserve-to-production ratios: An algorithm for United States oil and gas reserve development

    Science.gov (United States)

    Williams, Charles William

    Reserve-to-production ratios for oil and gas development are utilized by oil and gas producing states to monitor oil and gas reserve and production dynamics. These ratios are used to determine production levels for the manipulation of oil and gas prices while maintaining adequate reserves for future development. These aggregate reserve-to-production ratios do not provide information concerning development cost and the best time necessary to develop newly discovered reserves. Oil and gas reserves are a semi-finished inventory because development of the reserves must take place in order to implement production. These reserves are considered semi-finished in that they are not counted unless it is economically profitable to produce them. The development of these reserves is encouraged by profit maximization economic variables which must consider the legal, political, and geological aspects of a project. This development is comprised of a myriad of incremental operational decisions, each of which influences profit maximization. The primary purpose of this study was to provide a model for characterizing a single product multi-period inventory/production optimization problem from an unconstrained quantity of raw material which was produced and stored as inventory reserve. This optimization was determined by evaluating dynamic changes in new additions to reserves and the subsequent depletion of these reserves with the maximization of production. A secondary purpose was to determine an equation for exponential depletion of proved reserves which presented a more comprehensive representation of reserve-to-production ratio values than an inadequate and frequently used aggregate historical method. The final purpose of this study was to determine the most accurate delay time for a proved reserve to achieve maximum production. This calculated time provided a measure of the discounted cost and calculation of net present value for developing new reserves. This study concluded that

  19. Development of an algorithm for quantifying extremity biological tissue; Desenvolvimento de um algoritmo quantificador de tecido biologico de extremidade

    Energy Technology Data Exchange (ETDEWEB)

    Pavan, Ana L.M.; Miranda, Jose R.A., E-mail: analuiza@ibb.unesp.br, E-mail: jmiranda@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (IBB/UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Dept. de Fisica e Biofisica; Pina, Diana R. de, E-mail: drpina@frnb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (FMB/UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Dept. de Doencas Tropicas e Diagnostico por Imagem

    2013-07-01

    Computed radiography (CR) has become the most widely used system for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis obtained via CR are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used for optimization of these images are based on international protocols. It is therefore necessary to compose radiographic techniques for the CR system that provide a reliable medical diagnosis with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was developed. Using Matlab® software, a computational algorithm was developed to quantify the average thickness of soft tissue and bone present in the anatomical region under study, as well as the corresponding thickness in simulator materials (aluminium and lucite). This was achieved through the application of masks and a Gaussian removal technique applied to the histograms. As a result, an average soft-tissue thickness of 18.97 mm and bone thickness of 6.15 mm were obtained, with equivalents in simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agreed with the mean thickness of biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom.
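    The abstract does not detail the segmentation; the following minimal sketch, assuming simple Hounsfield-unit thresholds and a hypothetical pixel spacing, only illustrates how mean soft-tissue and bone thicknesses could be extracted from a CT volume with masks.

```python
import numpy as np

def mean_thicknesses(ct_volume, pixel_spacing_mm, bone_hu=300, tissue_hu=-200):
    """Estimate mean bone and soft-tissue thickness (mm) along image rows.
    ct_volume: array (slices, rows, cols) in Hounsfield units.
    The thresholds and the ray geometry are illustrative assumptions only.
    """
    bone_mask = ct_volume >= bone_hu
    soft_mask = (ct_volume >= tissue_hu) & (ct_volume < bone_hu)
    # Count segmented pixels along each ray (column direction) and convert to mm.
    bone_mm = bone_mask.sum(axis=2) * pixel_spacing_mm
    soft_mm = soft_mask.sum(axis=2) * pixel_spacing_mm
    return bone_mm.mean(), soft_mm.mean()
```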

  20. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting the optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm was developed employing the two ANN models. • The algorithm showed improved thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm to demonstrate improved thermal comfort and building energy efficiency of accommodation buildings in the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures and the other predicted the time required for restoring the current indoor temperature to the normal set-point temperature. Using numerical simulation methods, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. The test results showed that the two ANN models achieved acceptable prediction accuracy when applied in the control algorithm. In addition, the algorithm based on the two ANN models provided a more comfortable and energy-efficient indoor thermal environment than the two conventional control methods, which respectively employed a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period. The operating range was 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with two predictive and adaptive ANN models can be used to design a more comfortable and energy-efficient indoor thermal environment for accommodation buildings in a comprehensive manner.
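    A schematic of how two such predictive models could drive the setback decision is sketched below; predict_energy and predict_restore_time are hypothetical stand-ins for the paper's ANN models, and the decision rule itself is an illustrative assumption.

```python
def choose_setback(setback_candidates_c, occupancy_start_h, now_h,
                   predict_energy, predict_restore_time):
    """Pick the setback temperature that minimizes predicted cooling energy
    while still allowing the room to return to the normal set-point before
    occupants arrive. The two predictors stand in for the paper's ANN models.
    """
    feasible = []
    for t_setback in setback_candidates_c:
        energy = predict_energy(t_setback)            # kWh over unoccupied period
        restore_h = predict_restore_time(t_setback)   # hours needed to recover
        if now_h + restore_h <= occupancy_start_h:    # comfort restored in time
            feasible.append((energy, t_setback))
    # Fall back to the mildest setback if none restores comfort in time.
    return min(feasible)[1] if feasible else min(setback_candidates_c)
```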

  1. Development Algorithm of the Technological Process of Manufacturing Gas Turbine Parts by Selective Laser Melting

    Science.gov (United States)

    Sotov, A. V.; Agapovichev, A. V.; Smelov, V. G.; Kyarimov, R. R.

    2018-01-01

    Selective laser melting (SLM) technology allows products to be made from powders of aluminum, titanium, heat-resistant alloys and stainless steels. Today, SLM is increasingly used for the manufacture of functional parts. This in turn requires the development of a methodology for designing technological processes (TP) for manufacturing parts, including databases of standard TP. Using such a methodology makes product quality less dependent on the qualification of the individual technologist and reduces the labor and energy spent on developing TP, thanks to the databases of standard TP integrated into the methodology. As a validation of the developed methodology, a study of the influence of the laser emission modes on the surface roughness of the synthesized material is presented. It was established that the best roughness values of the specimens in the longitudinal and transversal directions are 1.98 μm and 3.59 μm, respectively. These roughness values were obtained at a specific energy density of 6.25 J/mm², corresponding to a laser power of 200 W, a scanning speed of 400 mm/s, and a hatch distance of 0.08 mm.
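    The quoted figures are mutually consistent under the areal energy-density relation commonly used in SLM parameter studies (whether the paper uses exactly this definition is an assumption):

```latex
% Areal energy density with laser power P, scan speed v, hatch distance h:
\[
  E \;=\; \frac{P}{v\,h}
  \;=\; \frac{200\ \mathrm{W}}{400\ \mathrm{mm/s}\times 0.08\ \mathrm{mm}}
  \;=\; 6.25\ \mathrm{J/mm^2}
\]
```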

  2. Development of a Water Treatment Plant Operation Manual Using an Algorithmic Approach.

    Science.gov (United States)

    Counts, Cary A.

    This document describes the steps to be followed in the development of a prescription manual for training of water treatment plant operators. Suggestions on how to prepare both flow and narrative prescriptions are provided for a variety of water treatment systems, including: raw water, flocculation, rapid sand filter, caustic soda feed, alum feed,…

  3. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  4. Development of a Raman chemical image detection algorithm for authenticating dry milk

    Science.gov (United States)

    Qin, Jianwei; Chao, Kuanglin; Kim, Moon S.

    2013-05-01

    This research developed a Raman chemical imaging method for detecting multiple adulterants in skim milk powder. Ammonium sulfate, dicyandiamide, melamine, and urea were mixed into the milk powder as chemical adulterants in the concentration range of 0.1-5.0%. A Raman imaging system using a 785-nm laser acquired hyperspectral images in the wavenumber range of 102-2538 cm-1 for a 25×25 mm2 area of each mixture. A polynomial curve-fitting method was used to correct the fluorescence background in the Raman images. An image classification method was developed based on single-band fluorescence-free images at unique Raman peaks of the adulterants. Raman chemical images were created to visualize the identification and distribution of the multiple adulterant particles in the milk powder. A linear relationship was found between the number of adulterant pixels and the adulterant concentration, demonstrating the potential of Raman chemical imaging for quantitative analysis of adulterants in milk powder.
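    The fluorescence-correction step can be illustrated with a minimal sketch, assuming a simple iteratively re-fitted polynomial baseline rather than the authors' exact implementation:

```python
import numpy as np

def remove_fluorescence_baseline(wavenumbers, spectrum, degree=5, n_iter=20):
    """Estimate a smooth fluorescence background with an iteratively
    re-fitted polynomial and subtract it from the Raman spectrum.
    The polynomial degree and iteration count are illustrative choices.
    """
    working = spectrum.astype(float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(wavenumbers, working, degree)
        fit = np.polyval(coeffs, wavenumbers)
        # Clip the working signal to the fit so Raman peaks are progressively ignored.
        working = np.minimum(working, fit)
    return spectrum - fit
```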

  5. Development of a new prior knowledge based image reconstruction algorithm for the cone-beam-CT in radiation therapy

    International Nuclear Information System (INIS)

    Vaegler, Sven

    2016-01-01

    the follow-up reconstructed images are not appropriately considered so far. These deviations may result from changes in anatomy, including tumour shrinkage and loss of weight, and may degrade the quality of the reconstructed images. Deformable registration methods that adequately adapt the prior images can compensate for this shortcoming of PICCS. Such registration techniques, however, suffer from limited accuracy and a much higher computation time for the overall reconstruction process. Therefore, the aim of this thesis was to develop a new knowledge-based reconstruction algorithm that additionally incorporates locally dependent reliability information about the prior images into the reconstruction. The basic idea of the new algorithm is the assumption that the prior images are composed of areas with large and areas with small deviations. Accordingly, areas of the prior image where substantial deformations due to motion or structural change over the time series were expected were marked as variable; these regions no longer provide valuable structural information for the anticipated result. In contrast, "a priori" information was assigned to structurally stationary areas where no changes were expected. Based on this composition, a weighting matrix was generated that accounts for the strength of these variations during reconstruction. The new algorithm was tested in different feasibility studies against common dose-reduction strategies. These strategies include reducing the number of projections, acquiring projections with strong noise, and reducing the acquisition space. The main aim of this work was to demonstrate the gain in image quality when prior images with major variations are used, compared to standard reconstruction techniques. The studies were performed with a computer phantom, and in particular with experimental data that have been acquired with the clinical CBCT
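    One way such a locally weighted prior could enter a PICCS-style objective is sketched below; this particular form is an illustration consistent with the description above, not the thesis' exact formulation:

```latex
% x: image to reconstruct, x_prior: prior image, A: system matrix, y: projections,
% Psi: sparsifying transform, W: diagonal weighting matrix encoding the local
% reliability of the prior (small weights in "variable" regions, large weights
% in structurally stationary regions).
\[
  \hat{x} \;=\; \arg\min_{x}\;
  \alpha \,\bigl\| W\,\Psi\,(x - x_{\mathrm{prior}}) \bigr\|_{1}
  \;+\; (1-\alpha)\,\bigl\| \Psi\, x \bigr\|_{1}
  \quad \text{subject to} \quad A x \approx y .
\]
```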

  6. Development of a multi-objective PBIL evolutionary algorithm applied to a nuclear reactor core reload optimization problem

    International Nuclear Information System (INIS)

    Machado, Marcelo D.; Dchirru, Roberto

    2005-01-01

    The nuclear reactor core reload optimization problem consists in finding a pattern of partially burned-up and fresh fuels that optimizes the plant's next operation cycle. This optimization problem has been traditionally solved using an expert's knowledge, but recently artificial intelligence techniques have also been applied successfully. The artificial intelligence optimization techniques generally have a single objective. However, most real-world engineering problems, including nuclear core reload optimization, have more than one objective (multi-objective) and these objectives are usually conflicting. The aim of this work is to develop a tool to solve multi-objective problems based on the Population-Based Incremental Learning (PBIL) algorithm. The new tool is applied to solve the Angra 1 PWR core reload optimization problem with the purpose of creating a Pareto surface, so that a pattern selected from this surface can be applied for the plant's next operation cycle. (author)
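    For background, a minimal single-objective PBIL sketch over binary strings is given below; the multi-objective extension and the reload-pattern encoding of the paper are not reproduced, and the fitness function is a placeholder.

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, generations=200, seed=0):
    """Population-Based Incremental Learning over binary strings: each
    generation, sample a population from a probability vector and nudge the
    vector toward the best individual found."""
    rng = np.random.default_rng(seed)
    prob = np.full(n_bits, 0.5)
    best, best_fit = None, -np.inf
    for _ in range(generations):
        population = (rng.random((pop_size, n_bits)) < prob).astype(int)
        scores = np.array([fitness(ind) for ind in population])
        gen_best = population[scores.argmax()]
        if scores.max() > best_fit:
            best, best_fit = gen_best.copy(), scores.max()
        prob = (1 - lr) * prob + lr * gen_best  # learning step
    return best, best_fit
```

    A call such as pbil(lambda b: -abs(b.sum() - 10), n_bits=20) illustrates the interface with a toy fitness function.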

  7. Graph 500 on OpenSHMEM: Using a Practical Survey of Past Work to Motivate Novel Algorithmic Developments

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, Max [Rice Univ., Houston, TX (United States); Pritchard Jr., Howard Porter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budimlic, Zoran [Rice Univ., Houston, TX (United States); Sarkar, Vivek [Rice Univ., Houston, TX (United States)

    2016-12-22

    Graph500 [14] is an effort to offer a standardized benchmark across large-scale distributed platforms that captures the behavior of common communication-bound graph algorithms. Graph500 differs from other large-scale benchmarking efforts (such as HPL [6] or HPGMG [7]) primarily in the irregularity of its computation and data access patterns. The core computational kernel of Graph500 is a breadth-first search (BFS) implemented on an undirected graph. The output of Graph500 is a spanning tree of the input graph, usually represented by a predecessor mapping for every node in the graph. The Graph500 benchmark defines several pre-defined input sizes for implementers to test against. This report summarizes an investigation into implementing the Graph500 benchmark on OpenSHMEM, and focuses on first building a strong and practical understanding of the strengths and limitations of past work before proposing and developing novel extensions.
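    For orientation, a minimal serial sketch of the Graph500 kernel (a level-synchronous BFS returning a predecessor map) is shown below; the distributed OpenSHMEM aspects investigated in the report are not represented.

```python
from collections import deque

def bfs_predecessor_map(adjacency, root):
    """Breadth-first search over an undirected graph given as
    {vertex: iterable of neighbours}; returns parent[v] for reached vertices."""
    parent = {root: root}
    frontier = deque([root])
    while frontier:
        v = frontier.popleft()
        for w in adjacency.get(v, ()):
            if w not in parent:
                parent[w] = v
                frontier.append(w)
    return parent
```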

  8. Development of a Two-Phase Flow Analysis Code based on a Unstructured-Mesh SIMPLE Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Tae; Park, Ik Kyu; Cho, Heong Kyu; Yoon, Han Young; Kim, Kyung Doo; Jeong, Jae Jun

    2008-09-15

    For analyses of multi-phase flows in a water-cooled nuclear power plant, a three-dimensional SIMPLE-algorithm based hydrodynamic solver, CUPID-S, has been developed. As governing equations, it adopts a two-fluid three-field model for two-phase flows. The three fields represent a continuous liquid, dispersed droplets, and vapour. The governing equations are discretized by a finite volume method on an unstructured grid to handle the geometrical complexity of nuclear reactors. The phasic momentum equations are coupled and solved with a sparse block Gauss-Seidel matrix solver to increase numerical stability. The pressure correction equation, derived by summing the phasic volume fraction equations, is applied on the unstructured mesh in the context of a cell-centered co-located scheme. This paper presents the numerical method and the preliminary results of the calculations.
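    A generic outline of the SIMPLE outer iteration underlying such a solver is sketched below; the two-fluid three-field coupling and the unstructured-mesh discretization of CUPID-S are far beyond this illustration, and all callables are placeholders supplied by the caller.

```python
def simple_iteration(fields, solve_momentum, solve_pressure_correction,
                     correct_fields, tol=1e-6, max_outer=200):
    """Generic SIMPLE outer loop: predict velocities with the current pressure,
    solve a pressure-correction equation derived from continuity, then correct
    pressure and velocities until the mass residual falls below tolerance."""
    for _ in range(max_outer):
        u_star = solve_momentum(fields)                   # momentum predictor
        p_prime, residual = solve_pressure_correction(fields, u_star)
        fields = correct_fields(fields, u_star, p_prime)  # under-relaxed update
        if residual < tol:
            break
    return fields
```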

  9. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  10. Development and evaluation of an algorithm-based tool for Medication Management in nursing homes: the AMBER study protocol.

    Science.gov (United States)

    Erzkamp, Susanne; Rose, Olaf

    2018-04-20

    Residents of nursing homes are susceptible to risks from medication. Medication Reviews (MR) can increase clinical outcomes and the quality of medication therapy. Limited resources and barriers between healthcare practitioners are potential obstructions to performing MR in nursing homes. Focusing on frequent and relevant problems can support pharmacists in the provision of pharmaceutical care services. This study aims to develop and evaluate an algorithm-based tool that facilitates the provision of Medication Management in clinical practice. This study is subdivided into three phases. In phase I, semistructured interviews with healthcare practitioners and patients will be performed, and a mixed methods approach will be chosen. Qualitative content analysis and the rating of the aspects concerning the frequency and relevance of problems in the medication process in nursing homes will be performed. In phase II, a systematic review of the current literature on problems and interventions will be conducted. The findings will be narratively presented. The results of both phases will be combined to develop an algorithm for MRs. For further refinement of the aspects detected, a Delphi survey will be conducted. In conclusion, a tool for clinical practice will be created. In phase III, the tool will be tested on MRs in nursing homes. In addition, effectiveness, acceptance, feasibility and reproducibility will be assessed. The primary outcome of phase III will be the reduction of drug-related problems (DRPs), which will be detected using the tool. The secondary outcomes will be the proportion of DRPs, the acceptance of pharmaceutical recommendations and the expenditure of time using the tool and inter-rater reliability. This study intervention is approved by the local Ethics Committee. The findings of the study will be presented at national and international scientific conferences and will be published in peer-reviewed journals. DRKS00010995. © Article author(s) (or their

  11. Detection of Patients at High Risk of Medication Errors: Development and Validation of an Algorithm.

    Science.gov (United States)

    Saedder, Eva Aggerholm; Lisby, Marianne; Nielsen, Lars Peter; Rungby, Jørgen; Andersen, Ljubica Vukelic; Bonnerup, Dorthe Krogsgaard; Brock, Birgitte

    2016-02-01

    Medication errors (MEs) are preventable and can result in patient harm and increased expenses in the healthcare system in terms of hospitalization, prolonged hospitalizations and even death. We aimed to develop a screening tool, comprising items found by a literature search and weighted theoretically, to classify acutely admitted patients as being at low or high risk of MEs. Predictive variables used for the development of the risk score were found by the literature search. Three retrospective patient populations and one prospective pilot population were used for modelling. The final risk score was evaluated for precision by the use of sensitivity, specificity and area under the ROC (receiver operating characteristic) curves. The variables used in the final risk score were reduced renal function, the total number of drugs, and the risk of individual drugs to cause harm and drug-drug interactions. In the prospective population the risk score had an area under the ROC curve of 0.76. The final risk score was found to be quite robust, as it showed an area under the ROC curve of 0.87 in a recent patient population, 0.74 in an internal medicine population and 0.66 in an orthopaedic population. We developed a simple and robust score, MERIS, with the ability to detect patients and divide them according to low and high risk of MEs in a general population admitted at an acute admissions unit. The accuracy of the risk score was at least as good as other models reported using multiple regression analysis. © 2015 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
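    The abstract lists the score's components but not its weights; the toy sketch below, with entirely invented weights and cut-off, only illustrates the general shape of such an additive screening score.

```python
def medication_error_risk(reduced_renal_function, n_drugs, n_high_risk_drugs,
                          n_interactions):
    """Toy additive risk score over MERIS-style components.
    All weights and the cut-off are invented for illustration only."""
    score = 0.0
    score += 2.0 if reduced_renal_function else 0.0
    score += 0.5 * n_drugs             # total number of drugs
    score += 1.5 * n_high_risk_drugs   # drugs with potential to cause harm
    score += 1.0 * n_interactions      # relevant drug-drug interactions
    return "high risk" if score >= 6.0 else "low risk"
```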

  12. Development of an Outdoor Temperature Based Control Algorithm for Residential Mechanical Ventilation Control

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tang, Yihuan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-08-01

    The Incremental Ventilation Energy (IVE) model developed in this study combines the output of simple air exchange models with a limited set of housing characteristics to estimate the associated change in energy demand of homes. The IVE model was designed specifically to enable modellers to use existing databases of housing characteristics to determine the impact of ventilation policy change on a population scale. The IVE model's estimates of energy change, when applied to US homes with limited parameterisation, are shown to be comparable to the estimates of a well-validated, complex residential energy model.

  13. Development of signal processing algorithms for ultrasonic detection of coal seam interfaces

    Science.gov (United States)

    Purcell, D. D.; Ben-Bassat, M.

    1976-01-01

    A pattern recognition system is presented for determining the thickness of coal remaining on the roof and floor of a coal seam. The system was developed to recognize reflected pulse-echo signals that are generated by an acoustical transducer and reflected from the coal seam interface. The flexibility of the system, however, should enable it to identify pulse-echo signals generated by radar or other techniques, the main difference being the specific features extracted from the recorded data as a basis for pattern recognition.

  14. Development and validation of QRISK3 risk prediction algorithms to estimate future risk of cardiovascular disease: prospective cohort study.

    Science.gov (United States)

    Hippisley-Cox, Julia; Coupland, Carol; Brindle, Peter

    2017-05-23

    Objectives  To develop and validate updated QRISK3 prediction algorithms to estimate the 10 year risk of cardiovascular disease in women and men accounting for potential new risk factors. Design  Prospective open cohort study. Setting  General practices in England providing data for the QResearch database. Participants  1309 QResearch general practices in England: 981 practices were used to develop the scores and a separate set of 328 practices were used to validate the scores. 7.89 million patients aged 25-84 years were in the derivation cohort and 2.67 million patients in the validation cohort. Patients were free of cardiovascular disease and not prescribed statins at baseline. Methods  Cox proportional hazards models in the derivation cohort to derive separate risk equations in men and women for evaluation at 10 years. Risk factors considered included those already in QRISK2 (age, ethnicity, deprivation, systolic blood pressure, body mass index, total cholesterol: high density lipoprotein cholesterol ratio, smoking, family history of coronary heart disease in a first degree relative aged less than 60 years, type 1 diabetes, type 2 diabetes, treated hypertension, rheumatoid arthritis, atrial fibrillation, chronic kidney disease (stage 4 or 5)) and new risk factors (chronic kidney disease (stage 3, 4, or 5), a measure of systolic blood pressure variability (standard deviation of repeated measures), migraine, corticosteroids, systemic lupus erythematosus (SLE), atypical antipsychotics, severe mental illness, and HIV/AIDs). We also considered erectile dysfunction diagnosis or treatment in men. Measures of calibration and discrimination were determined in the validation cohort for men and women separately and for individual subgroups by age group, ethnicity, and baseline disease status. Main outcome measures  Incident cardiovascular disease recorded on any of the following three linked data sources: general practice, mortality, or hospital admission records

  15. Weak and Strong Convergence of an Algorithm for the Split Common Fixed-Point of Asymptotically Quasi-Nonexpansive Operators

    Directory of Open Access Journals (Sweden)

    Yazheng Dang

    2013-01-01

    Full Text Available Inspired by Moudafi (2010), we propose an algorithm for solving the split common fixed-point problem for a wide class of asymptotically quasi-nonexpansive operators, and the weak and strong convergence of the algorithm is shown under suitable conditions in Hilbert spaces. The algorithm and its convergence results improve and develop previous results for split feasibility problems.

  16. Methods to Develop an Electronic Medical Record Phenotype Algorithm to Compare the Risk of Coronary Artery Disease across 3 Chronic Disease Cohorts.

    Directory of Open Access Journals (Sweden)

    Katherine P Liao

    Full Text Available Typically, algorithms to classify phenotypes using electronic medical record (EMR) data were developed to perform well in a specific patient population. There is increasing interest in analyses which allow the study of a specific outcome across different diseases. Such a study in the EMR would require an algorithm that can be applied across different patient populations. Our objectives were: (1) to develop an algorithm that would enable the study of coronary artery disease (CAD) across diverse patient populations; (2) to study the impact of adding narrative data extracted using natural language processing (NLP) to the algorithm. Additionally, we demonstrate how to implement the CAD algorithm to compare risk across 3 chronic diseases in a preliminary study. We studied 3 established EMR-based patient cohorts: diabetes mellitus (DM, n = 65,099), inflammatory bowel disease (IBD, n = 10,974), and rheumatoid arthritis (RA, n = 4,453) from two large academic centers. We developed a CAD algorithm using NLP in addition to structured data (e.g. ICD9 codes) in the RA cohort and validated it in the DM and IBD cohorts. The CAD algorithm using NLP in addition to structured data achieved specificity >95% with a positive predictive value (PPV) of 90% in the training (RA) and validation sets (IBD and DM). The addition of NLP data improved the sensitivity for all cohorts, classifying an additional 17% of CAD subjects in IBD and 10% in DM while maintaining a PPV of 90%. The algorithm classified 16,488 DM (26.1%), 457 IBD (4.2%), and 245 RA (5.0%) subjects with CAD. In a cross-sectional analysis, CAD risk was 63% lower in RA and 68% lower in IBD compared to DM (p<0.0001) after adjusting for traditional cardiovascular risk factors. We developed and validated a CAD algorithm that performed well across diverse patient populations. The addition of NLP into the CAD algorithm improved the sensitivity of the algorithm, particularly in cohorts where the prevalence of CAD was low. Preliminary data suggest

  17. A fuzzy hill-climbing algorithm for the development of a compact associative classifier

    Science.gov (United States)

    Mitra, Soumyaroop; Lam, Sarah S.

    2012-02-01

    Classification, a data mining technique, has widespread applications including medical diagnosis, targeted marketing, and others. Knowledge discovery from databases in the form of association rules is one of the important data mining tasks. An integrated approach, classification based on association rules, has drawn the attention of the data mining community over the last decade. While attention has been mainly focused on increasing classifier accuracies, not much effort has been devoted to building interpretable and less complex models. This paper discusses the development of a compact associative classification model using a hill-climbing approach and fuzzy sets. The proposed methodology builds the rule base by selecting rules which contribute towards increasing training accuracy, thus balancing classification accuracy with the number of classification association rules. The results indicated that the proposed associative classification model can achieve competitive accuracies on benchmark datasets with continuous attributes and lend better interpretability when compared with other rule-based systems.
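    A minimal sketch of the greedy rule-selection idea is given below; the fuzzy-set machinery and the rule-mining step are omitted, and training_accuracy is assumed to be provided by the caller.

```python
def hill_climb_rule_selection(candidate_rules, training_accuracy):
    """Greedily add candidate association rules to the rule base as long as
    each addition improves training accuracy, yielding a compact classifier.
    training_accuracy(rule_base) -> float is assumed to be supplied."""
    rule_base = []
    best_acc = training_accuracy(rule_base)
    improved = True
    while improved:
        improved = False
        for rule in candidate_rules:
            if rule in rule_base:
                continue
            acc = training_accuracy(rule_base + [rule])
            if acc > best_acc:
                rule_base.append(rule)
                best_acc = acc
                improved = True
    return rule_base, best_acc
```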

  18. An approach to the development and analysis of wind turbine control algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Wu, K.C.

    1998-03-01

    The objective of this project is to develop the capability of symbolically generating an analytical model of a wind turbine for studies of control systems. This report focuses on a theoretical formulation of the symbolic equations of motion (EOMs) modeler for horizontal axis wind turbines. In addition to the power train dynamics, a generic 7-axis rotor assembly is used as the base model from which the EOMs of various turbine configurations can be derived. A systematic approach to generate the EOMs is presented using d'Alembert's principle and Lagrangian dynamics. A Matlab M file was implemented to generate the EOMs of a two-bladed, free yaw wind turbine. The EOMs will be compared in the future to those of a similar wind turbine modeled with the YawDyn code for verification. This project was sponsored by Sandia National Laboratories as part of the Adaptive Structures and Control Task. This is the final report of Sandia Contract AS-0985.

  19. Design of a test system for the development of advanced video chips and software algorithms.

    Science.gov (United States)

    Falkinger, Marita; Kranzfelder, Michael; Wilhelm, Dirk; Stemp, Verena; Koepf, Susanne; Jakob, Judith; Hille, Andreas; Endress, Wolfgang; Feussner, Hubertus; Schneider, Armin

    2015-04-01

    Visual deterioration is a crucial issue in minimally invasive surgery, impeding surgical performance. Modern image processing technologies appear to be promising approaches for further image optimization by digital elimination of disturbing particles. To make them mature for clinical application, an experimental test environment for evaluation of possible image interferences would be most helpful. After a comprehensive review of the literature (MEDLINE, IEEE, Google Scholar), a test bed for the generation of artificial surgical smoke and mist was developed. Smoke was generated by a fog machine and mist was produced by a nebulizer. The size of the resulting droplets was measured microscopically and compared with biological smoke (electrocautery) and mist (ultrasound dissection) emerging during minimally invasive surgical procedures. The particles resulting from artificial generation are in the range of the size of biological droplets. For surgical smoke, the droplet dimension produced by the fog machine was 4.19 µm compared with 4.65 µm generated by electrocautery during a surgical procedure. The size of artificial mist produced by the nebulizer ranged between 45.38 and 48.04 µm compared with the range between 30.80 and 56.27 µm that was generated during minimally invasive ultrasonic dissection. A suitable test bed for artificial smoke and mist generation was developed, revealing almost identical droplet characteristics as produced during minimally invasive surgical procedures. The possibility to generate image interferences comparable to those occurring during laparoscopy (electrocautery and ultrasound dissection) provides a basis for the future development of image processing technologies for clinical applications. © The Author(s) 2014.

  20. Development of a Quasi-3D Multiscale Modeling Framework: Motivation, basic algorithm and preliminary results

    Directory of Open Access Journals (Sweden)

    Joon-Hee Jung

    2010-11-01

    Full Text Available A new framework for modeling the atmosphere, which we call the quasi-3D (Q3D multi-scale modeling framework (MMF, is developed with the objective of including cloud-scale three-dimensional effects in a GCM without necessarily using a global cloud-resolving model (CRM. It combines a GCM with a Q3D CRM that has the horizontal domain consisting of two perpendicular sets of channels, each of which contains a locally 3D grid-point array. For computing efficiency, the widths of the channels are chosen to be narrow. Thus, it is crucial to select a proper lateral boundary condition to realistically simulate the statistics of cloud and cloud-associated processes. Among the various possibilities, a periodic lateral boundary condition is chosen for the deviations from background fields that are obtained by interpolations from the GCM grid points. Since the deviations tend to vanish as the GCM grid size approaches that of the CRM, the whole system of the Q3D MMF can converge to a fully 3D global CRM. Consequently, the horizontal resolution of the GCM can be freely chosen depending on the objective of application, without changing the formulation of model physics. To evaluate the newly developed Q3D CRM in an efficient way, idealized experiments have been performed using a small horizontal domain. In these tests, the Q3D CRM uses only one pair of perpendicular channels with only two grid points across each channel. Comparing the simulation results with those of a fully 3D CRM, it is concluded that the Q3D CRM can reproduce most of the important statistics of the 3D solutions, including the vertical distributions of cloud water and precipitants, vertical transports of potential temperature and water vapor, and the variances and covariances of dynamical variables. The main improvement from a corresponding 2D simulation appears in the surface fluxes and the vorticity transports that cause the mean wind to change. A comparison with a simulation using a coarse

  1. Development of a Data Reduction algorithm for Optical Wide Field Patrol

    Directory of Open Access Journals (Sweden)

    Sun-youp Park

    2013-09-01

    Full Text Available The detector subsystem of the Optical Wide-field Patrol (OWL) network efficiently acquires the position and time information of moving objects such as artificial satellites through its chopper system, which consists of 4 blades in front of the CCD camera. Using this system, it is possible to get more position data with the same exposure time by changing the streaks of the moving objects into many pieces with the fast rotating blades during sidereal tracking. At the same time, the time data from the rotating chopper can be acquired by the time tagger connected to the photo diode. To analyze the orbits of the targets detected in the image data of such a system, a sequential procedure was developed: determining the positions of the separated streak lines, calculating the World Coordinate System (WCS) solution to transform the positions into equatorial coordinates, and finally combining the time log records from the time tagger with the transformed position data. We introduce this procedure and the preliminary results of its application to the test observation images.

  2. Methane emissions from tropical wetlands in LPX: Algorithm development and validation using atmospheric measurements

    Science.gov (United States)

    Houweling, S.; Ringeval, B.; Basu, A.; Van Beek, L. P.; Van Bodegom, P.; Spahni, R.; Gatti, L.; Gloor, M.; Roeckmann, T.

    2013-12-01

    Tropical wetlands are an important and highly uncertain term in the global budget of methane. Unlike wetlands in higher latitudes, which are dominated by water logged peatlands, tropical wetlands consist primarily of inundated river floodplains responding seasonally to variations in river discharge. Despite the fact that the hydrology of these systems is obviously very different, process models used for estimating methane emissions from wetlands commonly lack a dedicated parameterization for the tropics. This study is a first attempt to develop such a parameterization for use in the global dynamical vegetation model LPX. The required floodplain extents and water depth are calculated offline using the global hydrological model PCR-GLOBWB, which includes a sophisticated river routing scheme. LPX itself has been extended with a dedicated floodplain land unit and flood tolerant PFTs. The simulated species competition and productivity have been verified using GLC2000 and MODIS, pointing to directions for further model improvement regarding vegetation dynamics and hydrology. LPX simulated methane fluxes have been compared with available in situ measurements from tropical America. Finally, estimates for the Amazon basin have been implemented in the TM5 atmospheric transport model and compared with aircraft measured vertical profiles. The first results that will be presented demonstrate that, despite the limited availability of measurements, useful constraints on the magnitude and seasonality of Amazonian methane emissions can be derived.

  3. Development and validation of case-finding algorithms for the identification of patients with anti-neutrophil cytoplasmic antibody-associated vasculitis in large healthcare administrative databases.

    Science.gov (United States)

    Sreih, Antoine G; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A

    2016-12-01

    The aim of this study was to develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener's, GPA), microscopic polyangiitis (MPA), and eosinophilic GPA (Churg-Strauss, EGPA). Two hundred fifty patients per disease were randomly selected from two large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). Sixteen case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the diagnoses (alveolar hemorrhage, interstitial lung disease, glomerulonephritis, and acute or chronic kidney disease), encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the anti-neutrophil cytoplasmic antibody type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA, respectively. Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. Copyright © 2016 John Wiley & Sons, Ltd.
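    A schematic of how such a case-finding rule might be expressed over an administrative claims table is sketched below; the column names, specialties and exclusions are invented for illustration and are not the authors' code.

```python
import pandas as pd

def find_gpa_candidates(encounters: pd.DataFrame) -> pd.Series:
    """Flag patients matching a GPA-style case-finding rule: ICD9 446.4,
    seen by a relevant specialist, excluding eosinophilia/asthma.
    All column names and specialty labels are hypothetical."""
    specialist = encounters["specialty"].isin(
        ["rheumatology", "nephrology", "pulmonology"])
    candidate = (
        (encounters["icd9"] == "446.4")
        & encounters["encounter_type"].isin(["inpatient", "outpatient"])
        & specialist
        & ~encounters["has_eosinophilia"]
        & ~encounters["has_asthma"]
    )
    return encounters.loc[candidate, "patient_id"].drop_duplicates()
```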

  4. Development and Validation of Case-Finding Algorithms for the Identification of Patients with ANCA-Associated Vasculitis in Large Healthcare Administrative Databases

    Science.gov (United States)

    Sreih, Antoine G.; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D.; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A.

    2016-01-01

    Purpose To develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener’s, GPA), microscopic polyangiitis (MPA), and eosinophilic granulomatosis with polyangiitis (Churg-Strauss, EGPA). Methods 250 patients per disease were randomly selected from 2 large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). 16 case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody (ANCA) type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. Results An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the following diagnoses: alveolar hemorrhage, interstitial lung disease, glomerulonephritis, acute or chronic kidney disease, the encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the ANCA type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA respectively. Conclusion Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. PMID:27804171

  5. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    Science.gov (United States)

    Keerthi Sravan, R; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

    To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all the three implementations were fast (a few seconds). The software is capable of user-interface based development and/or command line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Ratio and Peripheral Nerve Stimulation computations. Copyright © 2017. Published by Elsevier Inc.

  6. Development of a screening algorithm for Alzheimer's disease using categorical verbal fluency.

    Directory of Open Access Journals (Sweden)

    Yeon Kyung Chi

    Full Text Available We developed a weighted composite score of the categorical verbal fluency test (CVFT) that can more easily and widely screen for Alzheimer's disease (AD) than the mini-mental status examination (MMSE). We administered the CVFT using the animal category and the MMSE to 423 community-dwelling mild probable AD patients and their age- and gender-matched cognitively normal controls. To enhance the diagnostic accuracy of the CVFT for AD, we obtained a weighted composite score from the subindex scores of the CVFT using a logistic regression model: logit(case) = 1.160 + 0.474 × gender + 0.003 × age + 0.226 × education level - 0.089 × first-half score - 0.516 × switching score - 0.303 × clustering score + 0.534 × perseveration score. The area under the receiver operating characteristic curve (AUC) for AD of this composite score was 0.903 (95% CI = 0.883-0.923), and was larger than that of the age-, gender- and education-adjusted total score of the CVFT (p<0.001). In 100 bootstrapped re-samples, the composite score consistently showed better diagnostic accuracy, sensitivity and specificity for AD than the total score. Although the AUC for AD of the CVFT composite score was slightly smaller than that of the MMSE (0.930, p = 0.006), the CVFT composite score may be a good alternative to the MMSE for screening for AD since it is much briefer, cheaper, and more easily applicable over the phone or internet than the MMSE.
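    The reported regression translates directly into a scoring function; in the sketch below the coefficients are those quoted above, while the coding of gender and education level is an assumption since it is not given in the abstract.

```python
import math

def cvft_composite_logit(gender, age, education_level, first_half,
                         switching, clustering, perseveration):
    """Weighted composite score from the published logistic model.
    How gender and education level are coded is an assumption here."""
    return (1.160 + 0.474 * gender + 0.003 * age + 0.226 * education_level
            - 0.089 * first_half - 0.516 * switching
            - 0.303 * clustering + 0.534 * perseveration)

def cvft_probability(*args, **kwargs):
    """Convert the logit into a probability of screening positive for AD."""
    return 1.0 / (1.0 + math.exp(-cvft_composite_logit(*args, **kwargs)))
```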

  7. Developing and evaluating an automated appendicitis risk stratification algorithm for pediatric patients in the emergency department.

    Science.gov (United States)

    Deleger, Louise; Brodzinski, Holly; Zhai, Haijun; Li, Qi; Lingren, Todd; Kirkendall, Eric S; Alessandrini, Evaline; Solti, Imre

    2013-12-01

    To evaluate a proposed natural language processing (NLP) and machine-learning based automated method to risk stratify abdominal pain patients by analyzing the content of the electronic health record (EHR). We analyzed the EHRs of a random sample of 2100 pediatric emergency department (ED) patients with abdominal pain, including all with a final diagnosis of appendicitis. We developed an automated system to extract relevant elements from ED physician notes and lab values and to automatically assign a risk category for acute appendicitis (high, equivocal, or low), based on the Pediatric Appendicitis Score. We evaluated the performance of the system against a manually created gold standard (chart reviews by ED physicians) for recall, specificity, and precision. The system achieved an average F-measure of 0.867 (0.869 recall and 0.863 precision) for risk classification, which was comparable to physician experts. Recall/precision were 0.897/0.952 in the low-risk category, 0.855/0.886 in the high-risk category, and 0.854/0.766 in the equivocal-risk category. The information that the system required as input to achieve high F-measure was available within the first 4 h of the ED visit. Automated appendicitis risk categorization based on EHR content, including information from clinical notes, shows comparable performance to physician chart reviewers as measured by their inter-annotator agreement and represents a promising new approach for computerized decision support to promote application of evidence-based medicine at the point of care.

  8. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  9. Development of Internet algorithms and some calculations of power plant COP

    Science.gov (United States)

    Ustjuzhanin, E. E.; Ochkov, V. F.; Znamensky, V. E.

    2017-11-01

    The authors have analyzed Internet resources containing information on some thermodynamic properties of technically important substances (water, air, etc.). Databases that hold such resources and are hosted by organizations such as the Joint Institute for High Temperatures (Russian Academy of Sciences), Standartinform (Russia), the National Institute of Standards and Technology (USA), and the Institute for Thermal Physics (Siberian Branch of the Russian Academy of Sciences) are considered. Currently, a typical form is an Internet resource that includes a text file, for example a file containing tabulated properties R = (ρ, s, h…), where ρ is the density, s the entropy, and h the enthalpy of a substance. Only a small number of Internet resources have the following characteristic: the resource allows a user to carry out a number of operations, for example, i) to enter the input data Y = (p, T), where p is the pressure and T the temperature, ii) to calculate the property R using an "exe-file" program, iii) to copy the result X = (p, T, ρ, h, s, …). Recently, some researchers (including the authors of this report) have requested software (SW) designed for R property calculations that takes the form of an open interactive (OI) Internet resource (a "client function" or "template"). The computing part of an OI resource is linked 1) to a formula applied to calculate the property R and 2) to a Mathcad program, Code_1(R,Y). The interactive part of an OI resource is based on informatics and Internet technologies. We have proposed methods and tools related to this part that allow us a) to post an OI resource on a remote server, b) to link a client PC with the remote server, and c) to offer a number of options to clients. Among these options are: i) calculating the property R for given arguments Y, ii) copying mathematical formulas, and iii) copying Code_1(R,Y) as a whole. We have developed some OI resources that are

  10. Algorithm developing of gross primary production from its capacity and a canopy conductance index using flux and global observing satellite data

    Science.gov (United States)

    Muramatsu, Kanako; Furumi, Shinobu; Daigo, Motomasa

    2015-10-01

    We plan to estimate gross primary production (GPP) using the SGLI sensor on board the GCOM-C1 satellite after it is launched in 2017 by the Japan Aerospace Exploration Agency, and we have developed a GPP estimation algorithm that uses SGLI sensor data. The estimation method is designed to reflect the characteristics of photosynthesis: the rate of plant photosynthesis depends on the plant's photosynthetic capacity and on the degree to which photosynthesis is suppressed. The photosynthetic capacity depends on the chlorophyll content of leaves, which is a plant physiological parameter, and the degree of suppression depends on weather conditions. The framework of the estimation method to determine the light-response curve parameters was developed using flux and satellite data in a previous study [1]. We estimated one of the light-response curve parameters based on the linear relationship between GPP capacity at a photosynthetically active radiation (PAR) of 2000 μmol m-2 s-1 and a chlorophyll index (CIgreen) [2,3]. The relationship was determined for seven plant functional types. Decreases in the photosynthetic rate are controlled by stomatal opening and closing: leaf stomatal conductance is maximal during the morning and decreases in the afternoon. We focused on daily changes in leaf stomatal conductance. We used open-shrub flux data and MODIS reflectance data to develop an algorithm for a canopy. We first evaluated the daily changes in GPP capacity, estimated from CIgreen and PAR using light-response curves, against the GPP observed during a flux experiment. Next, we estimated the canopy conductance using flux data and a big-leaf model based on the Penman-Monteith equation [4]. We estimated GPP by multiplying GPP capacity by the normalized canopy conductance at 10:30, the time of satellite observations. The results showed that the estimated daily change in GPP was almost the same as the observed GPP. From this result, we defined a normalized canopy
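    A compact sketch of the two-step estimation described above is given below; the slope and intercept of the CIgreen relation differ by plant functional type and are treated as given, and the function names and inputs are illustrative assumptions.

```python
def gpp_capacity_at_2000(ci_green, slope, intercept):
    """Linear relation between CIgreen and GPP capacity at a PAR of
    2000 umol m-2 s-1; slope/intercept are per plant functional type and
    are assumed to be provided."""
    return slope * ci_green + intercept

def gpp_estimate(gpp_capacity, gc_1030, gc_daily_max):
    """Scale GPP capacity by the canopy conductance at the 10:30 satellite
    overpass, normalized by its daily maximum, to represent stomatal
    suppression of photosynthesis."""
    return gpp_capacity * (gc_1030 / gc_daily_max)
```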

  11. Development of an algorithm simulator of the planar radioactive source for dosimetric evaluations in accidents with radiopharmaceuticals used in nuclear medicine

    International Nuclear Information System (INIS)

    Claudino, Gutemberg L. Sales; Vieira, Jose Wilson; Leal Neto, Viriato; Lima, Fernando R. Andrade

    2013-01-01

    The objective of this work is to develop an algorithm simulator for dosimetric evaluation of accidents that may happen in Nuclear Medicine, using PDF NT (Probability Density Functions). Software was developed using C# and WPF technology in the integrated environment of Microsoft Visual Studio to organize and present the dosimetric results.

  12. Development of an algorithm simulator of the planar radioactive source for dosimetric evaluations in accidents with radiopharmaceuticals used in nuclear medicine

    Energy Technology Data Exchange (ETDEWEB)

    Claudino, Gutemberg L. Sales; Vieira, Jose Wilson; Leal Neto, Viriato, E-mail: berg2020@hotmail.com [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil); Lima, Fernando R. Andrade, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-07-01

    The objective of this work is to develop an algorithm simulator for dosimetric evaluation of accidents that may happen in Nuclear Medicine, using PDF NT (Probability Density Functions). Software was developed using C# and WPF technology in the integrated environment of Microsoft Visual Studio to organize and present the dosimetric results.

  13. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for an evolutionary technique designed around a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions along with the energies and geometric structures of Lennard-Jones clusters are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, a quantum algorithm providing quadratic speedup over its classical counterpart.
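
    A minimal sketch of the group-leader idea (not the authors' exact operators) is shown below: the population is split into groups, each group keeps its best member as leader, and new candidates are pulled toward that leader; the objective, weights and group sizes are illustrative.

```python
# Hedged sketch of a group-leader-style optimizer: each group's best member
# acts as its leader and new candidates mix the member, the leader and a
# random point.  This is an illustration of the idea, not the published GLOA.
import random

def sphere(x):                                # simple benchmark objective (minimize)
    return sum(xi * xi for xi in x)

def gloa(dim=5, n_groups=5, group_size=10, iters=300, bounds=(-5.0, 5.0)):
    rand_point = lambda: [random.uniform(*bounds) for _ in range(dim)]
    groups = [[rand_point() for _ in range(group_size)] for _ in range(n_groups)]
    for _ in range(iters):
        for group in groups:
            leader = min(group, key=sphere)                 # best member acts as group leader
            for i, member in enumerate(group):
                w = [random.random() for _ in range(3)]
                r1, r2, r3 = (wi / sum(w) for wi in w)      # random weights summing to one
                candidate = [r1 * m + r2 * l + r3 * random.uniform(*bounds)
                             for m, l in zip(member, leader)]
                if sphere(candidate) < sphere(member):      # greedy replacement
                    group[i] = candidate
    best = min((m for g in groups for m in g), key=sphere)
    return best, sphere(best)

best_x, best_f = gloa()
print(best_f)
```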

  14. Derivation of a regional active-optical reflectance sensor corn algorithm

    Science.gov (United States)

    Active-optical reflectance sensor (AORS) algorithms developed for in-season corn (Zea mays L.) N management have traditionally been derived using sub-regional scale information. However, studies have shown these previously developed AORS algorithms are not consistently accurate when used on a region...

  15. Genetic algorithm guided population pharmacokinetic model development for simvastatin, concurrently or non-concurrently co-administered with amlodipine.

    Science.gov (United States)

    Chaturvedula, Ayyappa; Sale, Mark E; Lee, Howard

    2014-02-01

    An automated model development was performed for simvastatin, co-administered with amlodipine concurrently or non-concurrently (i.e., 4 hours later) in 17 patients with coexisting hyperlipidemia and hypertension. The single objective hybrid genetic algorithm (SOHGA) was implemented in the NONMEM software by defining the search space for structural, statistical and covariate models. Candidate models obtained from the SOHGA runs were further assessed for biological plausibility and the precision of parameter estimates, followed by traditional backward elimination process for model refinement. The final population pharmacokinetic model shows that the elimination rate constant for simvastatin acid, the active form by hydrolysis of its lactone prodrug (i.e., simvastatin), is only 44% in the concurrent amlodipine administration group compared with the non-concurrent group. The application of SOHGA for automated model selection, combined with traditional model selection strategies, appears to save time for model development, which also can generate new hypotheses that are biologically more plausible. © 2013, The American College of Clinical Pharmacology.

  16. Control Algorithms and Simulated Environment Developed and Tested for Multiagent Robotics for Autonomous Inspection of Propulsion Systems

    Science.gov (United States)

    Wong, Edmond

    2005-01-01

    The NASA Glenn Research Center and academic partners are developing advanced multiagent robotic control algorithms that will enable the autonomous inspection and repair of future propulsion systems. In this application, on-wing engine inspections will be performed autonomously by large groups of cooperative miniature robots that will traverse the surfaces of engine components to search for damage. The eventual goal is to replace manual engine inspections that require expensive and time-consuming full engine teardowns and allow the early detection of problems that would otherwise result in catastrophic component failures. As a preliminary step toward the long-term realization of a practical working system, researchers are developing the technology to implement a proof-of-concept testbed demonstration. In a multiagent system, the individual agents are generally programmed with relatively simple controllers that define a limited set of behaviors. However, these behaviors are designed in such a way that, through the localized interaction among individual agents and between the agents and the environment, they result in self-organized, emergent group behavior that can solve a given complex problem, such as cooperative inspection. One advantage to the multiagent approach is that it allows for robustness and fault tolerance through redundancy in task handling. In addition, the relatively simple agent controllers demand minimal computational capability, which in turn allows for greater miniaturization of the robotic agents.

  17. Development of a control algorithm for teleoperation of DFDF(IMEF/M6 hot cell) maintenance equipment

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Chae Youn; Kwon, Hyuk Jo; Kim, Hak Duck; Jun, Ji Myung; Oh, Hee Geun [Chonbuk National University, Chonju (Korea)

    2002-03-01

    Teleoperation has been used to separate operators from the working environment. It is typically used to perform work in inaccessible places such as space or the deep sea, or in accessible but very hazardous environments such as explosive, poison-gas or radioactive areas. It is one of the advanced, technology-intensive research areas and has potentially large economic and industrial value. There is a tendency to avoid working in difficult, dirty or dangerous places, particularly highly radioactive areas, since there is always a possibility of ending up in a very dangerous situation. Developing and utilizing a teleoperation system will therefore minimize the possibility of being directly exposed to such extreme situations. Recently, there has been much research on reflecting the force information arising during teleoperation back to the operator in addition to visual information. The reflected force information is used to control the teleoperation system bilaterally, which contributes greatly to improving the safety and working efficiency of teleoperation. This study developed a bilateral force-reflecting control algorithm. It may be used as a key technology of a teleoperation system for maintaining, repairing and dismantling facilities exposed to high radioactivity. 42 refs., 71 figs., 12 tabs. (Author)

  18. Development of a stereolithography (STL) input and computer numerical control (CNC) output algorithm for an entry-level 3-D printer

    Directory of Open Access Journals (Sweden)

    Brown, Andrew

    2014-08-01

    Full Text Available This paper presents a prototype Stereolithography (STL) file format slicing and tool-path generation algorithm, which serves as a data front-end for a Rapid Prototyping (RP) entry-level three-dimensional (3-D) printer. Used mainly in Additive Manufacturing (AM), 3-D printers are devices that apply plastic, ceramic, and metal, layer by layer, in all three dimensions on a flat surface (X, Y, and Z axes). 3-D printers, unfortunately, cannot print an object without a special algorithm that is required to create the Computer Numerical Control (CNC) instructions for printing. An STL algorithm therefore forms a critical component for Layered Manufacturing (LM), also referred to as RP. The purpose of this study was to develop an algorithm that is capable of processing and slicing an STL file or multiple files, resulting in a tool-path, and finally compiling a CNC file for an entry-level 3-D printer. The prototype algorithm was implemented for an entry-level 3-D printer that utilises the Fused Deposition Modelling (FDM) process or Solid Freeform Fabrication (SFF) process, an AM technology. Following an experimental method, the full data flow path for the prototype algorithm was developed, starting with STL data files, and then processing the STL data file into a G-code file format by slicing the model and creating a tool-path. This layering method is used by most 3-D printers to turn a 2-D object into a 3-D object. The STL algorithm developed in this study presents innovative opportunities for LM, since it allows engineers and architects to transform their ideas easily into a solid model in a fast, simple, and cheap way. This is accomplished by allowing STL models to be sliced rapidly, effectively, and without error, and finally to be processed and prepared into a G-code print file.
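
    The core slicing step can be sketched as follows, under the assumption of a simple plane-triangle intersection; chaining segments into contours and emitting the G-code moves, as the full pipeline does, is omitted.

```python
# Hedged sketch of the core slicing step: intersect each STL triangle with a
# horizontal plane z = z_layer to obtain a 2-D line segment.  Vertices lying
# exactly on the plane are ignored in this sketch.
def slice_triangle(tri, z):
    """tri: three (x, y, z) vertices; returns a ((x1, y1), (x2, y2)) segment or None."""
    pts = []
    for (x1, y1, z1), (x2, y2, z2) in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        if (z1 - z) * (z2 - z) < 0:                 # edge crosses the slicing plane
            t = (z - z1) / (z2 - z1)                # linear interpolation parameter
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

def slice_mesh(triangles, z):
    """Return all intersection segments of the mesh with the plane at height z."""
    return [s for s in (slice_triangle(t, z) for t in triangles) if s is not None]

# Example: a single triangle spanning z = 0..10 sliced at z = 5.
tri = ((0.0, 0.0, 0.0), (10.0, 0.0, 10.0), (0.0, 10.0, 10.0))
print(slice_mesh([tri], 5.0))
```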

  19. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  20. Continuous measurements of water surface height and width along a 6.5km river reach for discharge algorithm development

    Science.gov (United States)

    Tuozzolo, S.; Durand, M. T.; Pavelsky, T.; Pentecost, J.

    2015-12-01

    The upcoming Surface Water and Ocean Topography (SWOT) satellite will provide measurements of river width and water surface elevation and slope along continuous swaths of world rivers. Understanding water surface slope and width dynamics in river reaches is important for both developing and validating discharge algorithms to be used on future SWOT data. We collected water surface elevation and river width data along a 6.5 km stretch of the Olentangy River in Columbus, Ohio from October to December 2014. Continuous measurements of water surface height were supplemented with periodic river width measurements at twenty sites along the study reach. The water surface slope of the entire reach ranged from 41.58 cm/km at baseflow to 45.31 cm/km after a storm event. The study reach was also broken into sub-reaches roughly 1 km in length to study smaller-scale slope dynamics. The furthest upstream sub-reaches are characterized by free-flowing riffle-pool sequences, while the furthest downstream sub-reaches were directly affected by two low-head dams. In the sub-reaches immediately upstream of each dam, baseflow slope is as low as 2 cm/km, while the furthest upstream free-flowing sub-reach has a baseflow slope of 100 cm/km. During high flow events the backwater effect of the dams was observed to propagate upstream: sub-reaches impounded by the dams had increased water surface slopes, while free-flowing sub-reaches had decreased water surface slopes. During the largest observed flow event, a stage change of 0.40 m changed sub-reach slopes by as much as 30 cm/km. Further analysis will examine height-width relationships within the study reach and relate cross-sectional flow area to river stage. These relationships can be used in conjunction with slope data to estimate discharge using a modified Manning's equation, and are a core component of discharge algorithms being developed for the SWOT mission.
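
    A hedged sketch of the modified-Manning discharge estimate mentioned above is given below, assuming a wide rectangular channel (hydraulic radius approximately A/W) and an illustrative roughness coefficient; the SWOT discharge algorithms estimate such parameters rather than fix them.

```python
# Hedged sketch of a Manning-type discharge estimate from SWOT-like
# observables (width W, slope S and cross-sectional area A).  The wide-channel
# approximation and the roughness value n are illustrative assumptions.
def manning_discharge(area_m2: float, width_m: float, slope_m_per_m: float, n: float = 0.035) -> float:
    """Discharge Q [m^3/s] from a modified Manning's equation."""
    hydraulic_radius = area_m2 / width_m            # wide-channel approximation, R ~ A / W
    return (1.0 / n) * area_m2 * hydraulic_radius ** (2.0 / 3.0) * slope_m_per_m ** 0.5

# Example: A = 60 m^2, W = 30 m, S = 45 cm/km = 4.5e-4 m/m.
print(manning_discharge(60.0, 30.0, 4.5e-4))
```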

  1. Novel and efficient tag SNPs selection algorithms.

    Science.gov (United States)

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant forms of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection was presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated using the Hadoop MapReduce framework, were also developed using the proposed algorithm as the computation kernel.

  2. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  3. Management of temporary urinary retention after arthroscopic knee surgery in low-dose spinal anesthesia: development of a simple algorithm.

    Science.gov (United States)

    Luger, Thomas J; Garoscio, Ivo; Rehder, Peter; Oberladstätter, Jürgen; Voelckel, Wolfgang

    2008-06-01

    In practice, trauma and orthopedic surgery during spinal anesthesia are often performed with routine urethral catheterization of the bladder to prevent overdistention of the bladder. However, use of a catheter has inherent risks. Ultrasound examination of the bladder (Bladderscan) can precisely determine the bladder volume. Thus, the aim of this study was to identify parameters indicative of urinary retention after low-dose spinal anesthesia and to develop a simple algorithm for patient care. This prospective pilot study, approved by the Ethics Committee, enrolled 45 patients after obtaining their written informed consent. Patients who underwent arthroscopic knee surgery received low-dose spinal anesthesia with 1.4 ml 0.5% bupivacaine at level L3/L4. Bladder volume was measured by urinary bladder scanning at baseline, at the end of surgery and up to 4 h later. The incidence of spontaneous urination versus catheterization was assessed and the relative risk for catheterization was calculated. The Mann-Whitney test, chi-squared test with Fisher's exact test and the relative odds ratio were applied as appropriate. Patients with a bladder volume >300 ml postoperatively had a 6.5-fold greater likelihood of urinary retention. In the management of patients with short-lasting spinal anesthesia for arthroscopic knee surgery we recommend monitoring bladder volume by Bladderscan instead of routine catheterization. Anesthesiologists or nurses under protocol should assess bladder volume preoperatively and at the end of surgery. If bladder volume is >300 ml, catheterization should be performed in the OR. Patients with a bladder volume of 500 ml.
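
    The recommended decision rule can be sketched in a few lines of code; the 300 ml threshold is taken from the abstract, while the function name and return messages are illustrative.

```python
# Minimal sketch of the decision rule suggested above: measure bladder volume
# by ultrasound at the end of surgery and catheterize in the OR only when the
# volume exceeds 300 ml.  Threshold from the abstract; wording is illustrative.
def bladder_management(volume_ml: float, threshold_ml: float = 300.0) -> str:
    if volume_ml > threshold_ml:
        return "catheterize in the OR"
    return "no routine catheterization; re-scan postoperatively"

print(bladder_management(420.0))   # -> catheterize in the OR
print(bladder_management(180.0))   # -> no routine catheterization; re-scan postoperatively
```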

  4. PRGPred: A platform for prediction of domains of resistance gene analogue (RGA) in Arecaceae developed using machine learning algorithms

    Directory of Open Access Journals (Sweden)

    MATHODIYIL S. MANJULA

    2015-12-01

    Full Text Available Plant disease resistance genes (R-genes) are responsible for initiation of the defense mechanism against various phytopathogens. The majority of plant R-genes are members of very large multi-gene families, which encode structurally related proteins containing nucleotide binding site (NBS) domains and C-terminal leucine-rich repeats (LRR). Other classes possess an extracellular LRR domain, a transmembrane domain and, sometimes, an intracellular serine/threonine kinase domain. R-proteins work in pathogen perception and/or the activation of conserved defense signaling networks. In the present study, sequences representing resistance gene analogues (RGAs) of coconut, arecanut, oil palm and date palm were collected from NCBI, sorted based on domains and assembled into a database. The sequences were analyzed against the PRINTS database to find the conserved domains and their motifs present in the RGAs. Based on these domains, we have also developed a tool to predict the domains of palm R-genes using various machine learning algorithms. The model files were selected based on the performance of the best classifier in training and testing. All this information is stored and made available in the online 'PRGpred' database and prediction tool.

  5. A Distributed Multi-hop Low Cost Time Synchronization Algorithm in Wireless Sensor Network developed for Bridge Diagnosis System

    Science.gov (United States)

    Xiao, Haitao; Ogai, Harutoshi; Ding, Zhehan

    Due to the oceanic climate and frequent earthquakes in Japan, bridge health diagnosis is a problem of considerable complexity. For a bridge diagnosis system, we develop a wireless sensor network to sample and gather the vibration data of a bridge. Time synchronization is a crucial component of the wireless sensor network (WSN), because large populations of sensor nodes must collaborate to take measurements at the same time and to perform data gathering, data fusion and localization. In a wireless sensor network with a large number of energy-limited nodes, multi-hop time synchronization must be applied. To solve the above-mentioned problem, protocols such as RBS, TPSN and FTSP have been proposed. However, most of these algorithms mainly focus on the precision of synchronization; in fact, energy efficiency is also a challenge in a resource-limited WSN. In this paper an improved time synchronization scheme is proposed with the purpose of reducing energy consumption and lengthening the lifetime of the whole WSN. Performance analyses, simulations and a realization in WSN node hardware are also presented.
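
    For reference, the classic two-way message exchange used by pairwise protocols such as TPSN (not necessarily the scheme proposed in this paper) yields the clock offset and propagation delay directly, as sketched below.

```python
# Hedged sketch of a two-way timestamp exchange: node A records send (t1) and
# receive (t4) times, node B records receive (t2) and reply (t3) times; the
# relative clock offset and the one-way delay follow from one exchange.
def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Clock offset of B relative to A, and one-way delay, from one exchange [s]."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: B's clock runs 5 ms ahead of A's, true one-way delay 2 ms.
print(offset_and_delay(t1=0.000, t2=0.007, t3=0.010, t4=0.007))   # -> (0.005, 0.002)
```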

  6. Developing and Multi-Objective Optimization of a Combined Energy Absorber Structure Using Polynomial Neural Networks and Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Amir Najibi

    Full Text Available In this study a newly developed thin-walled structure with a combination of circular and square sections is investigated in terms of crashworthiness. The results of experimental tests are utilized to validate the Abaqus/Explicit finite element simulations and the analysis of the crush phenomenon. Three polynomial meta-models based on the evolved group method of data handling (GMDH) neural networks are employed to represent the specific energy absorption (SEA), the initial peak crushing load (P1) and the secondary peak crushing load (P2) with respect to the geometrical variables. The training and testing data are extracted from the finite element analysis. The modified genetic algorithm NSGA-II is used in multi-objective optimisation of the specific energy absorption and the primary and secondary peak crushing loads with respect to the geometrical variables. Finally, in each optimisation process, the optimal section energy absorptions are compared with the results of the finite element analysis. The nearest-to-ideal-point and TOPSIS optimisation methods are applied to choose the optimal points.

  7. Algorithm development and simulation outcomes for hypoxic head and neck cancer radiotherapy using a Monte Carlo cell division model

    International Nuclear Information System (INIS)

    Harriss, W.M.; Bezak, E.; Yeoh, E.

    2010-01-01

    Full text: A temporal Monte Carlo tumour model, 'Hyp-RT', simulating hypoxic head and neck cancer has been updated and extended to model radiotherapy. The aim is to provide a convenient radiobiological tool for clinicians to evaluate radiotherapy treatment schedules based on many individual tumour properties including oxygenation. FORTRAN95 and JAVA have been utilised to develop the efficient algorithm, which can propagate 10^8 cells. Epithelial cell kill is affected by dose, oxygenation and proliferative status. Accelerated repopulation (AR) has been modelled by increasing the symmetrical stem cell division probability, and reoxygenation (ROx) has been modelled using random incremental boosts of oxygen to the cell population throughout therapy. Results: The stem cell percentage and the degree of hypoxia dominate tumour growth rate. For conventional radiotherapy, 15-25% more dose was required for hypoxic versus oxic tumours, depending on the time of AR onset (0-3 weeks after the start of treatment). ROx of hypoxic tumours resulted in tumour sensitisation and therefore a dose reduction, of up to 35%, varying with the time of onset. Fig. 1 shows results for all combinations of AR and ROx onset times for the moderate hypoxia case. Conclusions: In hypoxic tumours, accelerated repopulation and reoxygenation affect cell kill in the same manner as when the effects are modelled individually; however, the degree of the effect is altered and therefore the combined result is difficult to predict, providing evidence for the usefulness of computer models. Simulations have quantitatively

  8. Development of Smart Ventilation Control Algorithms for Humidity Control in High-Performance Homes in Humid U.S. Climates

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ticci, Sara [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-11

    Past field research and simulation studies have shown that high performance homes experience elevated indoor humidity levels for substantial portions of the year in humid climates. This is largely the result of lower sensible cooling loads, which reduces the moisture removed by the cooling system. These elevated humidity levels lead to concerns about occupant comfort, health and building durability. Use of mechanical ventilation at rates specified in ASHRAE Standard 62.2-2013 are often cited as an additional contributor to humidity problems in these homes. Past research has explored solutions, including supplemental dehumidification, cooling system operational enhancements and ventilation system design (e.g., ERV, supply, exhaust, etc.). This project’s goal is to develop and demonstrate (through simulations) smart ventilation strategies that can contribute to humidity control in high performance homes. These strategies must maintain IAQ via equivalence with ASHRAE Standard 62.2-2013. To be acceptable they must not result in excessive energy use. Smart controls will be compared with dehumidifier energy and moisture performance. This work explores the development and performance of smart algorithms for control of mechanical ventilation systems, with the objective of reducing high humidity in modern high performance residences. Simulations of DOE Zero-Energy Ready homes were performed using the REGCAP simulation tool. Control strategies were developed and tested using the Residential Integrated Ventilation (RIVEC) controller, which tracks pollutant exposure in real-time and controls ventilation to provide an equivalent exposure on an annual basis to homes meeting ASHRAE 62.2-2013. RIVEC is used to increase or decrease the real-time ventilation rate to reduce moisture transport into the home or increase moisture removal. This approach was implemented for no-, one- and two-sensor strategies, paired with a variety of control approaches in six humid climates (Miami

  9. Development of algorithms for tsunami detection by High Frequency Radar based on modeling tsunami case studies in the Mediterranean Sea

    Science.gov (United States)

    Grilli, Stéphan; Guérin, Charles-Antoine; Grosdidier, Samuel

    2015-04-01

    Where coastal tsunami hazard is governed by near-field sources, Submarine Mass Failures (SMFs) or earthquakes, tsunami propagation times may be too small for a detection based on deep or shallow water buoys. To offer sufficient warning time, it has been proposed by others to implement early warning systems relying on High Frequency Surface Wave Radar (HFSWR) remote sensing, that has a dense spatial coverage far offshore. A new HFSWR, referred to as STRADIVARIUS, has been recently deployed by Diginext Inc. to cover the "Golfe du Lion" (GDL) in the Western Mediterranean Sea. This radar, which operates at 4.5 MHz, uses a proprietary phase coding technology that allows detection up to 300 km in a bistatic configuration (with a baseline of about 100 km). Although the primary purpose of the radar is vessel detection in relation to homeland security, it can also be used for ocean current monitoring. The current caused by an arriving tsunami will shift the Bragg frequency by a value proportional to a component of its velocity, which can be easily obtained from the Doppler spectrum of the HFSWR signal. Using state of the art tsunami generation and propagation models, we modeled tsunami case studies in the western Mediterranean basin (both seismic and SMFs) and simulated the HFSWR backscattered signal that would be detected for the entire GDL and beyond. Based on simulated HFSWR signal, we developed two types of tsunami detection algorithms: (i) one based on standard Doppler spectra, for which we found that to be detectable within the environmental and background current noises, the Doppler shift requires tsunami currents to be at least 10-15 cm/s, which typically only occurs on the continental shelf in fairly shallow water; (ii) to allow earlier detection, a second algorithm computes correlations of the HFSWR signals at two distant locations, shifted in time by the tsunami propagation time between these locations (easily computed based on bathymetry). We found that this
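
    A hedged back-of-the-envelope sketch (monostatic approximation) illustrates why weak tsunami currents are difficult to see in standard Doppler spectra: the first-order Bragg line and the current-induced shift can be estimated from the radar frequency alone.

```python
# Hedged sketch (monostatic deep-water approximation, not the bistatic
# STRADIVARIUS geometry): the first-order Bragg line sits near
# sqrt(g / (pi * lambda_radar)) and a radial surface current v shifts it by
# roughly 2 * v / lambda_radar.
import math

C = 3.0e8      # speed of light, m/s
G = 9.81       # gravitational acceleration, m/s^2

def bragg_frequency(f_radar_hz: float) -> float:
    lam = C / f_radar_hz
    return math.sqrt(G / (math.pi * lam))

def current_doppler_shift(f_radar_hz: float, v_radial_ms: float) -> float:
    lam = C / f_radar_hz
    return 2.0 * v_radial_ms / lam

f = 4.5e6                                   # STRADIVARIUS operating frequency
print(bragg_frequency(f))                   # ~0.22 Hz
print(current_doppler_shift(f, 0.10))       # ~0.003 Hz for a 10 cm/s current
```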

  10. Development and validation of a new algorithm for the reclassification of genetic variants identified in the BRCA1 and BRCA2 genes.

    Science.gov (United States)

    Pruss, Dmitry; Morris, Brian; Hughes, Elisha; Eggington, Julie M; Esterling, Lisa; Robinson, Brandon S; van Kan, Aric; Fernandes, Priscilla H; Roa, Benjamin B; Gutin, Alexander; Wenstrup, Richard J; Bowles, Karla R

    2014-08-01

    BRCA1 and BRCA2 sequencing analysis detects variants of uncertain clinical significance in approximately 2 % of patients undergoing clinical diagnostic testing in our laboratory. The reclassification of these variants into either a pathogenic or benign clinical interpretation is critical for improved patient management. We developed a statistical variant reclassification tool based on the premise that probands with disease-causing mutations are expected to have more severe personal and family histories than those having benign variants. The algorithm was validated using simulated variants based on approximately 145,000 probands, as well as 286 BRCA1 and 303 BRCA2 true variants. Positive and negative predictive values of ≥99 % were obtained for each gene. Although the history weighting algorithm was not designed to detect alleles of lower penetrance, analysis of the hypomorphic mutations c.5096G>A (p.Arg1699Gln; BRCA1) and c.7878G>C (p.Trp2626Cys; BRCA2) indicated that the history weighting algorithm is able to identify some lower penetrance alleles. The history weighting algorithm is a powerful tool that accurately assigns actionable clinical classifications to variants of uncertain clinical significance. While being developed for reclassification of BRCA1 and BRCA2 variants, the history weighting algorithm is expected to be applicable to other cancer- and non-cancer-related genes.

  11. Changes in the Glycosylation of Kininogen and the Development of a Kininogen-Based Algorithm for the Early Detection of HCC.

    Science.gov (United States)

    Wang, Mengjun; Sanda, Miloslav; Comunale, Mary Ann; Herrera, Harmin; Swindell, Charles; Kono, Yuko; Singal, Amit G; Marrero, Jorge; Block, Timothy; Goldman, Radoslav; Mehta, Anand

    2017-05-01

    Background: Hepatocellular carcinoma (HCC) has the greatest increase in mortality among all solid tumors in the United States, related to low rates of early tumor detection. Development of noninvasive biomarkers for the early detection of HCC may reduce HCC-related mortality. Methods: We have developed an algorithm that combines routinely observed clinical values into a single equation that, in a study of >3,000 patients from 5 independent sites, improved detection of HCC as compared with the currently used biomarker, alpha-fetoprotein (AFP), by 4% to 20%. However, this algorithm had limited benefit in those with AFP HCC, especially in those with AFP HCC increased from 0% (AFP alone) to 89% (for the new algorithm). Glycan analysis revealed that kininogen has several glycan modifications that have been associated with HCC, but often not with specific proteins, including increased levels of core and outer-arm fucosylation and increased branching. Conclusions: An algorithm combining fucosylated kininogen, AFP, and clinical characteristics is highly accurate for early HCC detection. Impact: Our biomarker algorithm could significantly improve early HCC detection and curative treatment eligibility in patients with cirrhosis. Cancer Epidemiol Biomarkers Prev; 26(5); 795-803. ©2017 AACR. ©2017 American Association for Cancer Research.

  12. A fast meteor detection algorithm

    Science.gov (United States)

    Gural, P.

    2016-01-01

    A low latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets both the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
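
    A minimal sketch of the Maximum Temporal Pixel (MTP) compression idea mentioned above follows: each pixel keeps its maximum value over a block of frames, so a moving meteor survives as a bright track that can be thresholded cheaply; the synthetic data are illustrative.

```python
# Hedged sketch of MTP-style compression: collapse a block of video frames
# into one image holding each pixel's maximum value over time.
import numpy as np

def mtp_compress(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (n_frames, height, width); returns (height, width)."""
    return frames.max(axis=0)

# Example: 64 noisy frames with a synthetic streak drifting one pixel per frame.
rng = np.random.default_rng(0)
frames = rng.normal(10.0, 2.0, size=(64, 64, 64))
for i in range(64):
    frames[i, 32, i] += 50.0          # bright moving "meteor" pixel
maxpixel = mtp_compress(frames)
print(maxpixel.shape, (maxpixel[32] > 30).sum())   # the whole streak lights up in the MTP image
```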

  13. Rapid fish stock depletion in previously unexploited seamounts: the ...

    African Journals Online (AJOL)

    Rapid fish stock depletion in previously unexploited seamounts: the case of Beryx splendens from the Sierra Leone Rise (Gulf of Guinea) ... A spectral analysis and red-noise spectra procedure (REDFIT) algorithm was used to identify the red-noise spectrum from the gaps in the observed time-series of catch per unit effort by ...

  14. Development and investigation of an inverse problem solution algorithm for determination of Ap stars magnetic field geometry

    International Nuclear Information System (INIS)

    Piskunov, N.E.

    1985-01-01

    The mathematical formulation of the inverse problem of determining magnetic field geometry from the polarization profiles of spectral lines is given, and a solution algorithm is proposed. A set of model calculations has shown the effectiveness of the algorithm and the high precision of the magnetic star model parameters obtained, as well as the advantages of the inverse problem method over the commonly used method of interpreting effective field curves.

  15. The Development of Geo-KOMPSAT-2A (GK-2A) Convective Initiation Algorithm over the Korea peninsular

    Science.gov (United States)

    Kim, H. S.; Chung, S. R.; Lee, B. I.; Baek, S.; Jeon, E.

    2016-12-01

    The rapid development of convection can bring heavy rainfall that causes a great deal of damage to society as well as threatening human life. A highly accurate forecast of strong convection is essential to prevent such disasters caused by severe weather. Since a geostationary satellite is the most suitable instrument for monitoring a single cloud's life cycle from formation to extinction, attempts have been made to capture the precursor signals of convective clouds by satellite. Keeping pace with the launch of Geo-KOMPSAT-2A (GK-2A) in 2018, we plan to produce a convective initiation (CI) product, defined as an indicator of cloud objects with the potential to bring heavy precipitation within two hours. The CI algorithm for GK-2A is composed of four stages. The first stage subtracts mature cloud pixels (a form of convective cloud mask) using visible (VIS) albedo and infrared (IR) brightness temperature thresholds. The remaining immature cloud pixels are then clustered into cloud objects by watershed techniques. Each clustered object undergoes 'interest field' tests on IR data that reflect current cloud microphysical properties and their temporal changes: cloud depth, updraft strength and the production of glaciation. All interest-field thresholds were optimized for Korean-type convective clouds. Based on the scores from these tests, it is decided whether the cloud object will develop into a convective cell or not. Here we show the results of a case study from this summer over the Korean peninsula using Himawari-8 VIS and IR data; radar echo data were used for validation. This study suggests that the CI products of GK-2A will contribute to enhancing the accuracy of very-short-range forecasts over the Korean peninsula.
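
    The first two stages (thresholding and clustering) can be sketched as follows; plain connected-component labelling stands in for the watershed step, and the thresholds are illustrative rather than the optimized values mentioned in the abstract.

```python
# Hedged sketch of the first two CI stages: mask out mature (already deep)
# cloud pixels with simple VIS/IR thresholds, then cluster the remaining
# immature cloud pixels into candidate objects.  Thresholds are illustrative.
import numpy as np
from scipy import ndimage

def candidate_objects(albedo, tb_ir, albedo_cloud=0.3, tb_mature=235.0, tb_cloud=273.0):
    """Return labelled immature-cloud objects from VIS albedo and IR brightness temperature [K]."""
    cloudy = (albedo > albedo_cloud) & (tb_ir < tb_cloud)   # any cloud
    mature = tb_ir < tb_mature                              # already deep convection, excluded
    immature = cloudy & ~mature
    labels, n_objects = ndimage.label(immature)             # connected-component clustering
    return labels, n_objects

albedo = np.random.default_rng(1).uniform(0.0, 0.6, (50, 50))
tb_ir = np.random.default_rng(2).uniform(230.0, 290.0, (50, 50))
labels, n = candidate_objects(albedo, tb_ir)
print(n, "candidate objects")
```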

  16. Developing a semi-analytical algorithm to estimate particulate organic carbon (POC) levels in inland eutrophic turbid water based on MERIS images: A case study of Lake Taihu

    Science.gov (United States)

    Lyu, Heng; Wang, Yannan; Jin, Qi; Shi, Lei; Li, Yunmei; Wang, Qiao

    2017-10-01

    Particulate organic carbon (POC) plays an important role in the carbon cycle in water due to its biological pump process. In the open ocean, algorithms can accurately estimate the surface POC concentration. However, no suitable POC-estimation algorithm based on MERIS bands is available for inland turbid eutrophic water. A total of 228 field samples were collected from Lake Taihu in different seasons between 2013 and 2015. At each site, the optical parameters and water quality were analyzed. Using in situ data, it was found that POC-estimation algorithms developed for the open ocean and coastal waters using remote sensing reflectance were not suitable for inland turbid eutrophic water. The organic suspended matter (OSM) concentration was found to be the best indicator of the POC concentration, and POC has an exponential relationship with the OSM concentration. Through an analysis of the POC concentration and optical parameters, it was found that the absorption peak of total suspended matter (TSM) at 665 nm was the optimum parameter to estimate POC. As a result, MERIS band 7, MERIS band 10 and MERIS band 12 were used to derive the absorption coefficient of TSM at 665 nm, and then, a semi-analytical algorithm was used to estimate the POC concentration for inland turbid eutrophic water. An accuracy assessment showed that the developed semi-analytical algorithm could be successfully applied with a MAPE of 31.82% and RMSE of 2.68 mg/L. The developed algorithm was successfully applied to a MERIS image, and two full-resolution MERIS images, acquired on August 13, 2010, and December 7, 2010, were used to map the POC spatial distribution in Lake Taihu in summer and winter.

  17. Comparison of switching control algorithms effective in restricting the switching in the neighborhood of the origin

    International Nuclear Information System (INIS)

    Joung, JinWook; Chung, Lan; Smyth, Andrew W

    2010-01-01

    The active interaction control (AIC) system, consisting of a primary structure, an auxiliary structure and an interaction element, was proposed to protect the primary structure against earthquakes and winds. The objective of the AIC system in reducing the responses of the primary structure is fulfilled by activating or deactivating the switching between the engagement and the disengagement of the primary and auxiliary structures through the interaction element. The status of the interaction element is controlled by switching control algorithms. The previously developed switching control algorithms require an excessive amount of switching, which is inefficient. In this paper, the excessive amount of switching is restricted by imposing an appropriately designed switching boundary region, where switching is prohibited, on pre-designed engagement–disengagement conditions. Two different approaches are used in designing the newly proposed AID-off and AID-off 2 algorithms. The AID-off 2 algorithm is designed to affect deactivated switching regions explicitly, unlike the AID-off algorithm, which follows the same procedure for designing the engagement–disengagement conditions as the previously developed algorithms, by using the current status of the AIC system. Both algorithms are shown to be effective in reducing the number of switching events relative to the previously developed AID algorithm under an appropriately selected control sampling period for different earthquakes, but the AID-off 2 algorithm outperforms the AID-off algorithm in reducing the number of switching events.

  18. Development and Validation of an International Risk Prediction Algorithm for Episodes of Major Depression in General Practice Attendees The PredictD Study

    NARCIS (Netherlands)

    King, Michael; Walker, Carl; Levy, Gus; Bottomley, Christian; Royston, Patrick; Weich, Scott; Bellon-Saameno, Juan Angel; Moreno, Berta; Svab, Igor; Rotar, Danica; Rifel, J.; Maaroos, Heidi-Ingrid; Aluoja, Anu; Kalda, Ruth; Neeleman, Jan; Geerlings, Mirjam I.; Xavier, Miguel; Carraca, Idalmiro; Goncalves-Pereira, Manuel; Vicente, Benjamin; Saldivia, Sandra; Melipillan, Roberto; Torres-Gonzalez, Francisco; Nazareth, Irwin

    2008-01-01

    Context: Strategies for prevention of depression are hindered by lack of evidence about the combined predictive effect of known risk factors. Objectives: To develop a risk algorithm for onset of major depression. Design: Cohort of adult general practice attendees followed up at 6 and 12 months. We

  19. Developing and evaluating a machine learning based algorithm to predict the need of pediatric intensive care unit transfer for newly hospitalized children.

    Science.gov (United States)

    Zhai, Haijun; Brady, Patrick; Li, Qi; Lingren, Todd; Ni, Yizhao; Wheeler, Derek S; Solti, Imre

    2014-08-01

    Early warning scores (EWS) are designed to identify early clinical deterioration by combining physiologic and/or laboratory measures to generate a quantified score. Current EWS leverage only a small fraction of Electronic Health Record (EHR) content. The planned widespread implementation of EHRs brings the promise of abundant data resources for prediction purposes. The three specific aims of our research are: (1) to develop an EHR-based automated algorithm to predict the need for Pediatric Intensive Care Unit (PICU) transfer in the first 24 h of admission; (2) to evaluate the performance of the new algorithm on a held-out test data set; and (3) to compare the new algorithm's effectiveness with that of two published Pediatric Early Warning Scores (PEWS). The cases comprised 526 encounters with PICU transfer within 24 h of admission. In addition to the cases, we randomly selected 6772 control encounters from 62516 inpatient admissions that were never transferred to the PICU. We used 29 variables in a logistic regression and compared our algorithm against two published PEWS on a held-out test data set. The logistic regression algorithm achieved 0.849 (95% CI 0.753-0.945) sensitivity, 0.859 (95% CI 0.850-0.868) specificity and 0.912 (95% CI 0.905-0.919) area under the curve (AUC) in the test set. Our algorithm's AUC was significantly higher, by 11.8% and 22.6% in the test set, than those of the two published PEWS. The novel algorithm achieved higher sensitivity, specificity, and AUC than the two PEWS reported in the literature. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
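
    The modelling step can be sketched with standard tooling as below; the synthetic data and feature count merely stand in for the study's 29 EHR-derived variables and real encounters.

```python
# Hedged sketch of the modelling step described above: a logistic regression
# on EHR-derived features evaluated by AUC on a held-out test set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 29))                          # 29 predictor variables (synthetic)
logit = X[:, 0] * 1.5 - X[:, 1] + 0.5 * X[:, 2] - 2.0    # synthetic risk signal
y = rng.uniform(size=2000) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```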

  20. Maize sugary enhancer1 (se1) is a presence-absence variant of a previously uncharacterized gene and development of educational videos to raise the profile of plant breeding and improve curricula

    Science.gov (United States)

    Haro von Mogel, Karl J.

    Carbohydrate metabolism is a biologically, economically, and culturally important process in crop plants. Humans have selected many crop species such as maize (Zea mays L.) in ways that have resulted in changes to carbohydrate metabolic pathways, and understanding the underlying genetics of this pathway is therefore exceedingly important. A previously uncharacterized starch metabolic pathway mutant, sugary enhancer1 (se1), is a recessive modifier of sugary1 (su1) sweet corn that increases the sugar content while maintaining an appealing creamy texture. This allele has been incorporated into many sweet corn varieties since its discovery in the 1970s, however, testing for the presence of this allele has been difficult. A genetic stock was developed that allowed the presence of se1 to be visually scored in segregating ears, which were used to genetically map se1 to the deletion of a single gene model located on the distal end of the long arm of chromosome 2. An analysis of homology found that this gene is specific to monocots, and the gene is expressed in the endosperm and developing leaf. The se1 allele increased water soluble polysaccharide (WSP) and decreased amylopectin in maize endosperm, but there was no overall effect on starch content in mature leaves due to se1. This discovery will lead to a greater understanding of starch metabolism, and the marker developed will assist in breeding. There is a present need for increased training for plant breeders to meet the growing needs of the human population. To raise the profile of plant breeding among young students, a series of videos called Fields of Study was developed. These feature interviews with plant breeders who talk about what they do as plant breeders and what they enjoy about their chosen profession. To help broaden the education of students in college biology courses, and assist with the training of plant breeders, a second video series, Pollination Methods was developed. Each video focuses on one or two

  1. Development and Assessment of a Computer Algorithm for Stroke Vascular Localization Using Components of the National Institutes of Health Stroke Scale.

    Science.gov (United States)

    Lerner, David P; Tseng, Bertrand P; Goldstein, Larry B

    2016-02-01

    The National Institutes of Health Stroke Scale (NIHSS) was not intended to be used to determine the stroke's vascular distribution. The aim of this study was to develop, assess the reliability of, and validate a computer algorithm based on the NIHSS for this purpose. Two cohorts of patients with ischemic stroke having similar distributions of Oxfordshire localizations (total anterior, partial anterior, lacunar, and posterior circulation) based on neuroimaging were identified. The first cohort (n = 40) was used to develop a computer algorithm for vascular localization using a modified version of the NIHSS (NIHSS-Localization [NIHSS-Loc]) that included the laterality of selected deficits; the second (n = 20) was used to assess the reliability of algorithm-based localizations compared to those of 2 vascular neurologists. The validity of the algorithm-based localizations was assessed in comparison to neuroimaging. Agreement was assessed using the unweighted kappa (κ) statistic. Agreement between the 2 raters using the standard NIHSS was slight to moderate (κ = .36, 95% confidence interval [CI] .10-.61). Inter-rater agreement significantly improved to the substantial to almost perfect range using the NIHSS-Loc (κ = .88, 95% CI .73-1.00). Agreement was perfect when the 2 raters entered the data into the NIHSS-Loc computer algorithm (κ = 1.00, 95% CI 1.00-1.00). Agreement between the algorithm localization and neuroimaging results was fair to moderate (κ = .59, 95% CI .35-.84) and not significantly different from the localizations of either rater using the NIHSS-Loc. A computerized, modified version of the standard NIHSS can be used to reliably and validly assign the vascular distribution of an acute ischemic stroke. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.
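
    The agreement statistic used above is the unweighted Cohen's kappa; a minimal sketch with illustrative labels (not the study data) follows.

```python
# Hedged sketch of the agreement statistic used above: unweighted Cohen's
# kappa between two raters' Oxfordshire localizations.  Labels are made up.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["TACI", "PACI", "LACI", "POCI", "PACI", "LACI", "TACI", "POCI"]
rater_2 = ["TACI", "PACI", "PACI", "POCI", "PACI", "LACI", "PACI", "POCI"]
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"unweighted kappa = {kappa:.2f}")
```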

  2. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    Science.gov (United States)

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm for modern Graphics Processing Units (GPUs). The presented algorithm improves on our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points need to be found for each interpolated point to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve our previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
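
    A hedged CPU sketch of the adaptive IDW idea follows; a KD-tree stands in for the GPU grid-based kNN search, and the density-to-power mapping is a simple illustrative rule rather than the paper's formulation.

```python
# Hedged CPU sketch of adaptive IDW: find the k nearest data points for each
# query, adapt the IDW power to the local sampling density, then take the
# distance-weighted average.  The power rule is illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def adaptive_idw(xy_data, values, xy_query, k=8):
    tree = cKDTree(xy_data)
    dists, idx = tree.query(xy_query, k=k)             # fast kNN search
    mean_d = dists.mean(axis=1, keepdims=True)
    power = np.clip(1.0 + 2.0 * mean_d / (mean_d.mean() + 1e-12), 1.0, 5.0)  # sparser -> larger power (illustrative)
    weights = 1.0 / np.maximum(dists, 1e-12) ** power
    return (weights * values[idx]).sum(axis=1) / weights.sum(axis=1)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(500, 2))
vals = np.sin(pts[:, 0] / 10.0) + np.cos(pts[:, 1] / 10.0)
query = rng.uniform(0, 100, size=(10, 2))
print(adaptive_idw(pts, vals, query, k=8))
```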

  3. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    International Nuclear Information System (INIS)

    Woodruff, S.B.

    1994-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, a fixed, uniform assignment of nodes to parallel processors will result in degraded computational efficiency due to the poor load balancing. A standard method for treating data-dependent models on vector architectures has been to use gather operations (or indirect addressing) to sort the nodes into subsets that (temporarily) share a common computational model. However, this method is not effective on distributed-memory data-parallel architectures, where indirect addressing involves expensive communication overhead. Another serious problem with this method involves software engineering challenges in the areas of maintainability and extensibility. For example, an implementation that was hand-tuned to achieve good computational efficiency would have to be rewritten whenever the decision tree governing the sorting was modified. Using an example based on the calculation of the wall-to-liquid and wall-to-vapor heat-transfer coefficients for three nonboiling flow regimes, we describe how the use of the Fortran 90 WHERE construct and automatic inlining of functions can be used to ameliorate this problem while improving both efficiency and software engineering. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. We discuss why developers should either wait for such solutions or consider alternative numerical algorithms, such as a neural network
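
    A NumPy analogue of the masked-selection idea (the original work uses the Fortran 90 WHERE construct inside TRAC) is sketched below; the regime codes and correlations are purely illustrative placeholders.

```python
# Hedged NumPy analogue of masked selection: evaluate a different
# heat-transfer-coefficient correlation per flow regime without gather/scatter
# or explicit sorting of nodes.  Regime codes and correlations are made up.
import numpy as np

def wall_htc(regime: np.ndarray, velocity: np.ndarray) -> np.ndarray:
    """Pick a per-node correlation by regime code (0, 1, 2) using boolean masks."""
    h = np.empty_like(velocity)
    h[regime == 0] = 100.0 + 50.0 * velocity[regime == 0]        # e.g. single-phase liquid
    h[regime == 1] = 500.0 * np.sqrt(velocity[regime == 1])      # e.g. bubbly flow
    h[regime == 2] = 50.0 + 10.0 * velocity[regime == 2]         # e.g. single-phase vapor
    return h

regime = np.array([0, 1, 2, 1, 0])
velocity = np.array([1.0, 2.0, 3.0, 0.5, 4.0])
print(wall_htc(regime, velocity))
```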

  4. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  5. Evaluation the Quality of Cloud Dataset from the Goddard Multi-Scale Modeling Framework for Supporting GPM Algorithm Development

    Science.gov (United States)

    Chern, J.; Tao, W.; Mohr, K. I.; Matsui, T.; Lang, S. E.

    2013-12-01

    With the recent rapid advancement in computational technology, the multi-scale modeling framework (MMF), which replaces conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM, has been developed and improved at NASA Goddard. The Goddard MMF is based on the coupling of the Goddard Cumulus Ensemble (GCE), a CRM, and the Goddard GEOS global model. In recent years, a few new and improved microphysical schemes have been developed based on observations from field campaigns and implemented in the GCE. These schemes have been incorporated into the MMF. The MMF has global coverage and can provide detailed cloud properties such as cloud amount, hydrometeor types, and vertical profiles of water content at the high spatial and temporal resolution of a cloud-resolving model. When coupled with the Goddard Satellite Data Simulation Unit (GSDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators, the MMF system can provide radiances and backscattering similar to what satellites directly observe. In this study, a one-year (2007) MMF simulation has been performed with the new 4-ice (cloud ice, snow, graupel and hail) microphysical scheme. The GEOS global model is run at 2° x 2.5° resolution and each of the embedded two-dimensional GCEs has 64 columns at 4 km horizontal resolution. The large-scale forcing from the GCM is nudged to the EC-Interim analysis to reduce the influence of MMF model biases on the cloud-resolving model results. The simulation provides more than 300 million vertical profiles of cloud data in different seasons, geographic locations, and climate regimes. This cloud dataset is used to supplement observations over data-sparse areas to support GPM algorithm development. The model-simulated mean and variability of surface rainfall and snowfall, cloud and precipitation types, cloud properties, radiances and backscattering are evaluated against satellite observations. We will assess the strengths

  6. SEBAL-A: A Remote Sensing ET Algorithm that Accounts for Advection with Limited Data. Part I: Development and Validation

    Directory of Open Access Journals (Sweden)

    Mcebisi Mkhwanazi

    2015-11-01

    Full Text Available The Surface Energy Balance Algorithm for Land (SEBAL) is one of the remote sensing (RS) models increasingly being used to determine evapotranspiration (ET). SEBAL is a widely used model, mainly because it requires minimal weather data and no prior knowledge of surface characteristics. However, it has been observed to underestimate ET under advective conditions, due to its disregard of advection as another source of energy available for evaporation. A modified SEBAL model was therefore developed in this study. An advection component, which is absent in the original SEBAL, was introduced such that the energy available for evapotranspiration is the sum of net radiation and advected heat energy. The improved SEBAL model was termed SEBAL-Advection, or SEBAL-A. An important aspect of the improved model is the estimation of advected energy using minimal weather data. While other RS models require hourly weather data to account for advection (e.g., METRIC), SEBAL-A only requires daily averages of limited weather data, making it appropriate even in areas where weather data at short time steps may not be available. In this study, the original SEBAL model was first evaluated under advective and non-advective conditions near Rocky Ford in southeastern Colorado, a semi-arid area where afternoon advection is a common occurrence. The SEBAL model was found to incur large errors when there was advection (indicated by higher wind speed and warm, dry air). SEBAL-A was then developed and validated in the same area under standard surface conditions, described as healthy alfalfa 40–60 cm in height and without water stress. ET values estimated using the original and modified SEBAL were compared to ET values measured with a large weighing lysimeter. When the SEBAL ET was compared to SEBAL-A ET values, the latter showed improved performance, with the ET Mean Bias Error (MBE) reduced from −17
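
    The energy-balance bookkeeping can be sketched as follows: SEBAL takes the latent heat flux as the residual Rn − G − H, and SEBAL-A adds an advected-energy term; the conversion to an hourly ET depth and the example numbers are illustrative, and the estimation of the advection term from daily weather data (the paper's contribution) is not reproduced here.

```python
# Hedged sketch of the energy-balance bookkeeping: in SEBAL the latent heat
# flux is the residual LE = Rn - G - H; SEBAL-A adds an advected-energy term A
# so that LE = Rn - G - H + A.  Example values are illustrative only.
LAMBDA_V = 2.45e6      # latent heat of vaporization, J/kg (approx. at 20 C)
RHO_W = 1000.0         # density of water, kg/m^3

def et_mm_per_hour(rn: float, g: float, h: float, a_adv: float = 0.0) -> float:
    """ET [mm/h] from net radiation, soil heat flux, sensible heat flux and advection [W/m^2]."""
    le = rn - g - h + a_adv                              # latent heat flux, W/m^2
    return le / (LAMBDA_V * RHO_W) * 1000.0 * 3600.0     # W/m^2 -> m/s of water -> mm/h

print(et_mm_per_hour(rn=600.0, g=60.0, h=150.0))               # original SEBAL (no advection)
print(et_mm_per_hour(rn=600.0, g=60.0, h=150.0, a_adv=100.0))  # SEBAL-A with advected energy
```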

  7. An efficient algorithm for function optimization: modified stem cells algorithm

    Science.gov (United States)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and Artificial Bee Colony (ABC) algorithms can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes to the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  8. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, the bees algorithm, the bat algorithm, the firefly algorithm etc...

  9. Genetic Algorithms and Local Search

    Science.gov (United States)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.

  10. 3D protein structure prediction with genetic tabu search algorithm.

    Science.gov (United States)

    Zhang, Xiaolong; Wang, Ting; Luo, Huiping; Yang, Jack Y; Deng, Youping; Tang, Jinshan; Yang, Mary Qu

    2010-05-28

    Protein structure prediction (PSP) has important applications in different fields, such as drug design and disease prediction. In protein structure prediction there are two important issues: the design of the structure model and the design of the optimization method. Because of the complexity of realistic protein structures, the structure model adopted in this paper is a simplified one, the off-lattice AB model. Once the structure model is assumed, an optimization method is needed to search for the best conformation of a protein sequence under that model. However, PSP is an NP-hard problem even if the simplest model is assumed. Thus, many algorithms have been developed to solve this global optimization problem. In this paper, a hybrid algorithm combining a genetic algorithm (GA) and a tabu search (TS) algorithm is developed to complete this task. In order to obtain an efficient optimization algorithm, several improved strategies are developed for the proposed genetic tabu search algorithm, and their combined use improves the efficiency of the algorithm. In these strategies, tabu search introduced into the crossover and mutation operators improves the local search capability, the adoption of a variable population size strategy maintains the diversity of the population, and the ranking selection strategy improves the chance of an individual with a low energy value entering the next generation. Experiments are performed with Fibonacci sequences and real protein sequences. Experimental results show that the lowest energy obtained by the proposed GATS algorithm is lower than that obtained by previous methods. The hybrid algorithm has the advantages of both the genetic algorithm and tabu search: it makes use of the multiple search points of the genetic algorithm, and can overcome the poor hill-climbing capability of the conventional genetic algorithm.
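
    As background on the simplified model, the Python sketch below evaluates one commonly cited form of the two-dimensional off-lattice AB energy: a bending term plus a species-dependent Lennard-Jones-like term with coefficients +1 for A-A pairs, +0.5 for B-B pairs and -0.5 for mixed pairs. The exact variant and dimensionality used in the paper may differ, and the genetic tabu search itself is not shown; this is illustration only.

        # Hedged sketch: one common form of the 2D off-lattice AB model energy.
        # Bond lengths are assumed to be 1; the paper's exact variant may differ.
        import math

        def ab_energy(coords, sequence):
            """coords: list of (x, y) monomer positions; sequence: string of 'A'/'B'."""
            def coupling(a, b):
                if a == b:
                    return 1.0 if a == 'A' else 0.5
                return -0.5  # mixed A-B pairs are energetically unfavorable here

            n = len(coords)
            bend = 0.0
            for i in range(1, n - 1):
                ux, uy = coords[i][0] - coords[i - 1][0], coords[i][1] - coords[i - 1][1]
                vx, vy = coords[i + 1][0] - coords[i][0], coords[i + 1][1] - coords[i][1]
                bend += 0.25 * (1.0 - (ux * vx + uy * vy))  # unit bond vectors assumed

            nonbonded = 0.0
            for i in range(n - 2):
                for j in range(i + 2, n):
                    r = math.dist(coords[i], coords[j])
                    c = coupling(sequence[i], sequence[j])
                    nonbonded += 4.0 * (r ** -12 - c * r ** -6)
            return bend + nonbonded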

  11. [Placental complications after a previous cesarean section].

    Science.gov (United States)

    Milosević, Jelena; Lilić, Vekoslav; Tasić, Marija; Radović-Janosević, Dragana; Stefanović, Milan; Antić, Vladimir

    2009-01-01

    The incidence of cesarean section has been rising in the past 50 years. With the increased number of cesarean sections, the number of pregnancies with a previous cesarean section rises as well. The aim of this study was to establish the influence of a previous cesarean section on the development of placental complications: placenta previa, placental abruption and placenta accreta, as well as to determine the influence of the number of previous cesarean sections on the development of these complications. The research was conducted at the Clinic of Gynecology and Obstetrics in Nis, covering a 10-year period (from 1995 to 2005) with 32358 deliveries, 1280 deliveries after a previous cesarean section, 131 cases of placenta previa and 118 cases of placental abruption. The experimental group comprised the cases of placenta previa or placental abruption with a prior cesarean section in the obstetric history, as opposed to the control group with the same conditions but without a cesarean section in the medical history. The incidence of placenta previa in the control group was 0.33%, compared with a 1.86% incidence after one cesarean section, rising with the number of previous cesarean sections to as high as 14.28% after three cesarean sections in the obstetric history. Placental abruption was recorded as a placental complication in 0.33% of pregnancies in the control group, while its incidence was 1.02% after one cesarean section and increased further with repeated cesarean sections. The difference in the incidence of intrapartal hysterectomy between the group with a prior cesarean section (0.86%) and the group without it (0.006%) was highly statistically significant. A previous cesarean section is an important risk factor for the development of placental complications.

  12. Genetic Algorithms in Noisy Environments

    OpenAIRE

    THEN, T. W.; CHONG, EDWIN K. P.

    1993-01-01

    Genetic Algorithms (GA) have been widely used in the areas of searching, function optimization, and machine learning. In many of these applications, the effect of noise is a critical factor in the performance of the genetic algorithms. While it has been shown in previous studies that genetic algorithms are still able to perform effectively in the presence of noise, the problem of locating the global optimal solution at the end of the search has never been effectively addressed. Furthermore,...

  13. Development and analysis of an economizer control strategy algorithm to promote an opportunity for energy savings in air conditioning installations

    Energy Technology Data Exchange (ETDEWEB)

    Neto, Jose H.M.; Azevedo, Walter L. [Centro Federal de Educacao Tecnologica de Minas Gerais (CEFET), Belo Horizonte, MG (Brazil). Dept. de Engenharia Mecanica]. E-mail: henrique@daem.des.cefetmg.br

    2000-07-01

    This work presents a control strategy algorithm termed an enthalpy economizer. The objective of this strategy is to determine the appropriate fractions of outside and return air flow rates entering a cooling coil based on the analysis of the outside, return and supply air enthalpies, rather than on the dry bulb temperatures alone. The proposed algorithm predicts the opening positions of the outside and return air dampers that provide the lowest mixed-air enthalpy. First, the psychrometric properties of the outside and return air are calculated from actual measurements of the dry and wet bulb temperatures. Then, three distinct cases are analyzed: the enthalpy of the outside air is lower than the enthalpy of the supply air (free cooling); the enthalpy of the outside air is higher than the enthalpy of the return air; and the enthalpy of the outside air is lower than the enthalpy of the return air but higher than the enthalpy of the supply air. Different outside air conditions were selected to represent typical weather data of Brazilian cities, as well as typical return air conditions. It was found that the enthalpy control strategy could provide an opportunity for energy savings mainly during mild nights and wintertime periods as well as during warm afternoons and summertime periods, depending on the outside air relative humidity. The proposed algorithm works well and can be integrated into commercial automation software to reduce energy consumption and electricity demand. (author)
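
    A minimal Python sketch of the three-case decision is given below, working directly with enthalpies. The 20% ventilation minimum, the free-cooling modulation rule and the full-outside-air choices in the other cases are illustrative assumptions; the paper's damper-position calculation from measured dry and wet bulb temperatures is more detailed than this.

        # Hedged sketch: choose the outside-air fraction from the three enthalpy cases.
        def outside_air_fraction(h_out, h_return, h_supply, min_oa=0.2):
            """Return the outside-air fraction of the mix (enthalpies in kJ/kg)."""
            if h_out <= h_supply:
                # Case 1 (free cooling): outside air is at or below the supply
                # enthalpy, so modulate the dampers to land on the supply condition.
                if h_return <= h_supply:
                    return 1.0  # no mechanical cooling needed at all
                f = (h_return - h_supply) / (h_return - h_out)
                return min(1.0, max(min_oa, f))
            if h_out >= h_return:
                # Case 2: outside air carries more energy than return air, so hold
                # the dampers at the ventilation minimum.
                return min_oa
            # Case 3: supply < outside < return -- full outside air still gives the
            # lowest mixed-air enthalpy the dampers can reach.
            return 1.0

        def mixed_air_enthalpy(h_out, h_return, oa_fraction):
            return oa_fraction * h_out + (1.0 - oa_fraction) * h_return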

  14. Development of a brief assessment and algorithm for ascertaining dementia in low-income and middle-income countries: the 10/66 short dementia diagnostic schedule.

    Science.gov (United States)

    Stewart, Robert; Guerchet, Maëlenn; Prince, Martin

    2016-05-25

    To develop and evaluate a short version of the 10/66 dementia diagnostic schedule for use in low-income and middle-income countries. Split-half analysis for algorithm development and testing; cross-evaluation of short-schedule and standard-schedule algorithms in 12 community surveys. (1) The 10/66 pilot sample data set of people aged 60 years and over in 25 international centres each recruiting the following samples: (a) dementia; (b) depression, no dementia; (c) no dementia, high education and (d) no dementia, low education. (2) Cross-sectional surveys of people aged 65 years or more from 12 urban and rural sites in 8 countries (Cuba, Dominican Republic, Peru, Mexico, Venezuela, India, China and Puerto Rico). In the 10/66 pilot samples, the algorithm for the short schedule was developed in 1218 participants and tested in 1211 randomly selected participants; it was evaluated against the algorithm for the standard 10/66 schedule in 16 536 survey participants. The short diagnostic schedule was derived from the Community Screening Instrument for Dementia, the CERAD 10-word list recall task and the Euro-D depression screen; it was evaluated against clinically assigned groups in the pilot data and against the standard schedule (using the Geriatric Mental State (GMS) rather than Euro-D) in the surveys. In the pilot test sample, the short-schedule algorithm ascertained dementia with 94.2% sensitivity. Specificities were 80.2% in depression, 96.6% in the high-education group and 92.7% in the low-education group. In survey samples, it coincided with standard algorithm dementia classifications with over 95% accuracy in most sites. Estimated dementia prevalences in the survey samples were not consistently higher or lower using the short compared to standard schedule. For epidemiological studies of dementia in low-income and middle-income settings where the GMS interview (and/or interviewer training required) is not feasible, the short 10/66 schedule and algorithm provide

  15. Combined Simulated Annealing Algorithm for the Discrete Facility Location Problem

    Directory of Open Access Journals (Sweden)

    Jin Qin

    2012-01-01

    Full Text Available The combined simulated annealing (CSA) algorithm was developed in this paper for the discrete facility location problem (DFLP). The method is a two-layer algorithm, in which the external subalgorithm optimizes the facility location decision while the internal subalgorithm optimizes the allocation of customers' demand under the given location decision. The performance of the CSA is tested on 30 instances of different sizes. The computational results show that CSA works much better than the previous algorithm on the DFLP and offers a reasonable new alternative solution method for it.
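
    The two-layer structure can be pictured with the Python sketch below: an outer simulated-annealing loop searches over which facilities to open, while the inner allocation step is solved here by simply sending each customer to its cheapest open facility. In the paper the inner layer is itself a simulated-annealing subalgorithm; the greedy inner step, the cost structure and the cooling schedule here are illustrative assumptions.

        # Hedged sketch of a two-layer approach to the discrete facility location
        # problem: outer annealing over open facilities, greedy inner allocation.
        import math
        import random

        def allocate(open_set, assign_cost):
            # Inner layer (simplified): each customer uses its cheapest open facility.
            return sum(min(costs[f] for f in open_set) for costs in assign_cost)

        def outer_sa(fixed_cost, assign_cost, t0=100.0, cooling=0.95, iters=2000):
            """fixed_cost[f]: cost of opening facility f;
            assign_cost[c][f]: cost of serving customer c from facility f."""
            n = len(fixed_cost)

            def cost(sol):
                return sum(fixed_cost[f] for f in sol) + allocate(sol, assign_cost)

            current = {random.randrange(n)}          # start with one open facility
            cur_cost = cost(current)
            best, best_cost, t = set(current), cur_cost, t0
            for _ in range(iters):
                neighbor = current ^ {random.randrange(n)}   # toggle one facility
                if not neighbor:
                    continue
                delta = cost(neighbor) - cur_cost
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current, cur_cost = neighbor, cur_cost + delta
                    if cur_cost < best_cost:
                        best, best_cost = set(current), cur_cost
                t *= cooling
            return best, best_cost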

  16. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, A; Matrosic, C; Zagzebski, J; Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allow for the acquisition of ultrasound motion sequences that are highly customizable, allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy-specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. This work is partially funded by NIH
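
    The tracking step itself can be illustrated with the short Python/NumPy sketch below: the feature block from a reference frame is slid over a search window in the next frame, and the offset with the highest normalized cross-correlation wins. The block size, search range and exhaustive search are illustrative choices rather than the implementation used in this work.

        # Hedged sketch: normalized cross-correlation (NCC) block matching.
        import numpy as np

        def ncc(a, b):
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        def track_block(ref_frame, new_frame, top_left, block=(32, 32), search=16):
            """Return the block's new (row, col) corner in new_frame and the NCC score."""
            y0, x0 = top_left
            h, w = block
            template = ref_frame[y0:y0 + h, x0:x0 + w]
            best_score, best_pos = -1.0, top_left
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + h > new_frame.shape[0] or x + w > new_frame.shape[1]:
                        continue
                    score = ncc(template, new_frame[y:y + h, x:x + w])
                    if score > best_score:
                        best_score, best_pos = score, (y, x)
            return best_pos, best_score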

  17. Development of a same-side kaon tagging algorithm of B^0_s decays for measuring delta m_s at CDF II

    Energy Technology Data Exchange (ETDEWEB)

    Menzemer, Stephanie; /Heidelberg U.

    2006-06-01

    The authors developed a Same-Side Kaon Tagging algorithm to determine the production flavor of B{sub s}{sup 0} mesons. Until the B{sub s}{sup 0} mixing frequency is clearly observed, the performance of the Same-Side Kaon Tagging algorithm cannot be measured on data but has to be determined from Monte Carlo simulation. Data-Monte Carlo agreement has been evaluated for both the B{sub s}{sup 0} mode and the high-statistics B{sup +} and B{sup 0} modes. Extensive systematic studies were performed to quantify potential discrepancies between data and Monte Carlo. The final optimized tagging algorithm exploits the particle identification capability of the CDF II detector. It achieves a tagging performance of {epsilon}D{sup 2} = 4.0{sub -1.2}{sup +0.9} on the B{sub s}{sup 0} {yields} D{sub s}{sup -} {pi}{sup +} sample. The Same-Side Kaon Tagging algorithm presented here has been applied to the ongoing B{sub s}{sup 0} mixing analysis, and has provided a factor of 3-4 increase in the effective statistical size of the sample. This improvement results in the first direct measurement of the B{sub s}{sup 0} mixing frequency.
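
    For context, {epsilon}D{sup 2} is the conventional effective tagging power: it combines the tagging efficiency with the mistag fraction through the dilution, and the effective statistical size of a tagged sample scales with it, which is why gains in tagging power translate directly into effective sample-size gains. The relations below state the standard flavour-tagging convention (general usage, not something derived in this abstract):

        % Standard flavour-tagging conventions (assumed here): efficiency epsilon,
        % mistag fraction w, dilution D, effective sample size N_eff.
        \[
          D = 1 - 2w, \qquad
          \epsilon D^{2} = \epsilon\,(1 - 2w)^{2}, \qquad
          N_{\mathrm{eff}} \approx N\,\epsilon D^{2}.
        \]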

  18. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high-quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast-growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special-purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  19. Long-term maturation of congenital diaphragmatic hernia treatment results: toward development of a severity-specific treatment algorithm.

    Science.gov (United States)

    Kays, David W; Islam, Saleem; Larson, Shawn D; Perkins, Joy; Talbert, James L

    2013-10-01

    To assess the impact of varying approaches to congenital diaphragmatic hernia (CDH) repair timing on survival and need for ECMO when controlled for anatomic and physiologic disease severity in a large consecutive series of patients with CDH. Our publication of 60 consecutive patients with CDH in 1999 showed that survival was significantly improved by limiting lung inflation pressures and eliminating hyperventilation. We retrospectively reviewed 268 consecutive patients with CDH, combining 208 new patients with the 60 previously reported. Management and ventilator strategy were highly consistent throughout. Varying approaches to surgical timing were applied as the series matured. Patients with anatomically less severe left liver-down CDH had significantly increased need for ECMO if repaired in the first 48 hours, whereas patients with more severe left liver-up CDH survived at a higher rate when repair was performed before ECMO. Overall survival of 268 patients was 78%. Survival was 88% for those without lethal associated anomalies. Of these, 99% of left liver-down CDH survived, 91% of right CDH survived, and 76% of left liver-up CDH survived. This study shows that patients with anatomically less severe CDH benefit from delayed surgery whereas patients with anatomically more severe CDH may benefit from a more aggressive surgical approach. These findings show that patients respond differently across the CDH anatomic severity spectrum and lay the foundation for the development of risk-specific treatment protocols for patients with CDH.

  20. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; achieving maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples include inertial confinement fusion experiments, severe breaks in a reactor coolant system, weapons physics, and shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors.
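
    The stated sensitivity to the communication-to-computation ratio can be made explicit with a simple cost model (a generic illustration, not the authors' analysis): if each step performs W units of computation per processor plus C units of communication overhead, the attainable speedup and efficiency on p processors are roughly

        % Generic cost model (illustrative assumption): W = per-processor computation,
        % C = per-processor communication overhead, p = number of processors.
        \[
          S(p) \approx \frac{pW}{W + C} = \frac{p}{1 + C/W}, \qquad
          E(p) = \frac{S(p)}{p} = \frac{1}{1 + C/W},
        \]
        % so efficiency falls directly with C/W, and global communication, whose cost
        % grows with p, is the most damaging pattern on a hypercube such as the iPSC.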
