WorldWideScience

Sample records for previously developed algorithm

  1. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
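The wavelet fusion scheme summarized above can be illustrated with a minimal single-level Haar sketch (the report does not specify the wavelet family or decomposition depth, so both are assumptions here): the panchromatic image's detail coefficients are injected into the multispectral band's approximation.

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar transform of an even-sized array.
    Returns (LL, LH, HL, HH) subbands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row averages
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row differences
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    lo = np.zeros((ll.shape[0] * 2, ll.shape[1]))
    hi = np.zeros_like(lo)
    lo[0::2, :] = ll + lh
    lo[1::2, :] = ll - lh
    hi[0::2, :] = hl + hh
    hi[1::2, :] = hl - hh
    x = np.zeros((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2] = lo + hi
    x[:, 1::2] = lo - hi
    return x

def wavelet_fuse(ms_band, pan):
    """Keep the multispectral approximation (spectral content),
    inject the panchromatic details (spatial content)."""
    ll_ms, _, _, _ = haar2d(ms_band)
    _, lh_p, hl_p, hh_p = haar2d(pan)
    return ihaar2d(ll_ms, lh_p, hl_p, hh_p)
```

In practice this would be applied per multispectral band against a co-registered, histogram-matched panchromatic image; multi-level decompositions follow by recursing on the LL subband.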

  2. Technical Note: A novel leaf sequencing optimization algorithm which considers previous underdose and overdose events for MLC tracking radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, Eric, E-mail: eric.wisotzky@charite.de, E-mail: eric.wisotzky@ipk.fraunhofer.de; O’Brien, Ricky; Keall, Paul J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006 (Australia)

    2016-01-15

    Purpose: Multileaf collimator (MLC) tracking radiotherapy is complex, as the beam pattern needs to be modified due to the planned intensity modulation as well as the real-time target motion. The target motion cannot be planned; therefore, the modified beam pattern differs from the original plan and the MLC sequence needs to be recomputed online. Current MLC tracking algorithms use a greedy heuristic in that they optimize for a given time but ignore past errors. To overcome this problem, the authors have developed and improved an algorithm that minimizes large underdose and overdose regions. Additionally, previous underdose and overdose events are taken into account to avoid regions with a high incidence of dose events. Methods: The authors improved the existing MLC motion control algorithm by introducing a cumulative underdose/overdose map. This map represents the actual projection of the planned tumor shape and logs occurring dose events in each specific region. These events affect the dose cost calculation and reduce the recurrence of dose events in each region. The authors studied the improvement of the new temporal optimization algorithm in terms of the L1-norm minimization of the sum of overdose and underdose, compared to not accounting for previous dose events. For evaluation, the authors simulated the delivery of 5 conformal and 14 intensity-modulated radiotherapy (IMRT) plans with 7 3D patient-measured tumor motion traces. Results: Simulations with conformal shapes showed an improvement in L1-norm of up to 8.5% after 100 MLC modification steps. Experiments showed comparable improvements with the same type of treatment plans. Conclusions: A novel leaf sequencing optimization algorithm which considers previous dose events for MLC tracking radiotherapy has been developed and investigated. Reductions in underdose/overdose are observed for both conformal and IMRT delivery.
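The cumulative underdose/overdose map idea can be sketched for a single leaf-pair row. Everything here is illustrative: `leaf_cost`, `choose_leaf_pair`, and the history weighting are hypothetical names and a hypothetical cost form, not the authors' actual optimization.

```python
import numpy as np

def leaf_cost(open_mask, target_mask, history, w=0.5):
    """L1 underdose/overdose of a candidate aperture row, with pixels that
    already suffered dose events (history > 0) weighted more heavily."""
    under = target_mask & ~open_mask   # target not covered by the aperture
    over = open_mask & ~target_mask    # aperture spilling outside the target
    err = under.astype(float) + over.astype(float)
    return float(np.sum(err * (1.0 + w * history)))

def choose_leaf_pair(target_row, history_row, w=0.5):
    """Exhaustively pick the (left, right) leaf opening that minimizes the
    history-weighted cost for one row of the aperture."""
    n = len(target_row)
    best = None
    for left in range(n + 1):
        for right in range(left, n + 1):
            mask = np.zeros(n, dtype=bool)
            mask[left:right] = True
            c = leaf_cost(mask, target_row, history_row, w)
            if best is None or c < best[0]:
                best = (c, left, right)
    return best[1], best[2]
```

After each delivery step, the map would be updated by adding the realized under/over errors, so regions that were already mistreated get priority at the next recomputation.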

  3. Developing Scoring Algorithms

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  4. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  5. To develop a universal gamut mapping algorithm

    International Nuclear Information System (INIS)

    Morovic, J.

    1998-10-01

    When a colour image from one colour reproduction medium (e.g. nature, a monitor) needs to be reproduced on another (e.g. on a monitor or in print) and these media have different colour ranges (gamuts), it is necessary to have a method for mapping between them. If such a gamut mapping algorithm can be used under a wide range of conditions, it can also be incorporated in an automated colour reproduction system and considered to be in some sense universal. In terms of preliminary work, a colour reproduction system was implemented, for which a new printer characterisation model (including grey-scale correction) was developed. Methods were also developed for calculating gamut boundary descriptors and for calculating gamut boundaries along given lines from them. The gamut mapping solution proposed in this thesis is a gamut compression algorithm developed with the aim of being accurate and universally applicable. It was arrived at by way of an evolutionary gamut mapping development strategy for the purposes of which five test images were reproduced between a CRT and printed media obtained using an inkjet printer. Initially, a number of previously published algorithms were chosen and psychophysically evaluated whereby an important characteristic of this evaluation was that it also considered the performance of algorithms for individual colour regions within the test images used. New algorithms were then developed on their basis, subsequently evaluated and this process was repeated once more. In this series of experiments the new GCUSP algorithm, which consists of a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp on the lightness axis, gave the most accurate and stable performance overall. The results of these experiments were also useful for improving the understanding of some gamut mapping factors - in particular gamut difference. 
In addition to looking at accuracy, the pleasantness of reproductions obtained

  6. Developing an Enhanced Lightning Jump Algorithm for Operational Use

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Overall Goals: 1. Build on the lightning jump framework set through previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). 4. Lightning jump algorithm configurations were developed (2(sigma), 3(sigma), Threshold 10 and Threshold 8). 5. Algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2(sigma) algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59% and an HSS of 75%.
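The 2(sigma) configuration can be illustrated with a short sketch: flag a jump when the flash-rate time derivative exceeds the recent mean by two standard deviations. The window length and the exact trigger rule below are assumptions for illustration, not the study's operational settings.

```python
import numpy as np

def lightning_jumps(flash_rates, nsigma=2.0, window=6):
    """Return indices into flash_rates where a 'lightning jump' is flagged:
    the rate-of-change exceeds the recent mean by nsigma standard deviations."""
    dfrdt = np.diff(flash_rates)          # flash-rate time derivative
    jumps = []
    for i in range(window, len(dfrdt)):
        hist = dfrdt[i - window:i]        # recent history of the derivative
        if dfrdt[i] > hist.mean() + nsigma * hist.std():
            jumps.append(i + 1)
    return jumps
```

A threshold-style configuration would instead compare `dfrdt[i]` against a fixed value (e.g. 8 or 10 flashes per minute per minute).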

  7. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    Science.gov (United States)

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-04-01

    Breath alcohol screening is important for traffic safety, access control, and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for determining the breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting, and personalization to reduce estimation errors. Evaluation was performed using data from a previously conducted human study. It is concluded that these features in combination significantly reduce the random error compared to the signal-averaging algorithm alone.
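A minimal sketch of CO2-based dilution compensation with weighted averaging: each sample's alcohol reading is rescaled by the ratio of an alveolar CO2 reference to the measured CO2. The reference value and the weighting rule here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def estimate_brac(alcohol, co2, co2_alveolar=4.8):
    """Dilution-compensated breath alcohol estimate.
    alcohol, co2: per-sample sensor readings; co2_alveolar: assumed
    end-expiratory CO2 concentration (percent) used as the reference."""
    alcohol = np.asarray(alcohol, dtype=float)
    co2 = np.asarray(co2, dtype=float)
    per_sample = alcohol * (co2_alveolar / co2)   # undo the dilution
    weights = co2 / co2.sum()                     # trust less-diluted samples more
    return float(np.sum(weights * per_sample))
```

Personalization could then adjust `co2_alveolar` per subject, which is one way to interpret the paper's personalization feature.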

  8. Developing Scoring Algorithms (Earlier Methods)

    Science.gov (United States)

    We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.

  9. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed

  10. Critical function monitoring system algorithm development

    International Nuclear Information System (INIS)

    Harmon, D.L.

    1984-01-01

    Accurate critical function status information is a key to operator decision-making during events threatening nuclear power plant safety. The Critical Function Monitoring System provides continuous critical function status monitoring by use of algorithms which mathematically represent the processes by which an operating staff would determine critical function status. This paper discusses in detail the systematic design methodology employed to develop adequate Critical Function Monitoring System algorithms

  11. Predictive factors for the development of diabetes in women with previous gestational diabetes mellitus

    DEFF Research Database (Denmark)

    Damm, P.; Kühl, C.; Bertelsen, Aksel

    1992-01-01

    OBJECTIVES: The purpose of this study was to determine the incidence of diabetes in women with previous dietary-treated gestational diabetes mellitus and to identify predictive factors for development of diabetes. STUDY DESIGN: Two to 11 years post partum, glucose tolerance was investigated in 241...... women with previous dietary-treated gestational diabetes mellitus and 57 women without previous gestational diabetes mellitus (control group). RESULTS: Diabetes developed in 42 (17.4%) women with previous gestational diabetes mellitus (3.7% insulin-dependent diabetes mellitus and 13.7% non...... of previous patients with gestational diabetes mellitus in whom plasma insulin was measured during an oral glucose tolerance test in late pregnancy a low insulin response at diagnosis was found to be an independent predictive factor for diabetes development. CONCLUSIONS: Women with previous dietary...

  12. Development of a Thermal Equilibrium Prediction Algorithm

    International Nuclear Information System (INIS)

    Aviles-Ramos, Cuauhtemoc

    2002-01-01

    A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)
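The idea of predicting the equilibrium value from early transient data can be illustrated with a simple three-point exponential extrapolation; the paper's multi-parameter estimation over the exact conduction solution is considerably more elaborate than this sketch.

```python
def predict_equilibrium(v0, v1, v2):
    """Given three equally spaced readings of an exponential approach
    V_k = V_inf + A * q**k (with |q| < 1), solve for the equilibrium V_inf.

    From the differences: (v1 - v0) = A(q - 1) and (v2 - v1) = A q (q - 1),
    so q is their ratio, and V_inf = v0 - A."""
    q = (v2 - v1) / (v1 - v0)
    A = (v1 - v0) / (q - 1.0)
    return v0 - A
```

Averaging this estimate over many overlapping triples (or a least-squares fit of the same model) would make it robust to sensor noise, in the spirit of cutting 80% of the measurement time.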

  13. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  14. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available Blowfish is a strong, simple block cipher algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the new generation process requires comparatively little time and reasonably low memory, which enhances the algorithm and gives it the possibility of wider usage.
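As a rough illustration of filling Blowfish-style S-boxes from a keyed pseudorandom stream: here SHA-256 in counter mode stands in for the SSS stream cipher (an assumption, since the paper's generator is the modified SSS, not a hash).

```python
import hashlib

def generate_sboxes(key, n_boxes=4, box_len=256):
    """Fill n_boxes S-boxes of box_len 32-bit words from a keyed
    pseudorandom stream, replacing Blowfish's slow pi-digit key schedule.
    SHA-256 in counter mode is a stand-in for the SSS keystream."""
    needed = n_boxes * box_len * 4          # bytes of keystream required
    stream, counter = b"", 0
    while len(stream) < needed:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    words = [int.from_bytes(stream[i:i + 4], "big") for i in range(0, needed, 4)]
    return [words[b * box_len:(b + 1) * box_len] for b in range(n_boxes)]
```

Because the boxes are derived directly from the key, rekeying is cheap, which is the property the paper targets for smart-card use.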

  15. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme

  16. Development of Speckle Interferometry Algorithm and System

    International Nuclear Information System (INIS)

    Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.

    2011-01-01

    Electronic speckle pattern interferometry (ESPI) is a whole-field, nondestructive measurement method widely used in industry, for example to detect defects on metal bodies, to detect defects in integrated circuits in digital electronic components, and in the preservation of priceless artwork. In this research field, the method is widely used to develop algorithms and new laboratory setups for implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor to register the change of the speckle pattern due to the applied force. In this project, an experimental ESPI setup is proposed to analyze a stainless steel plate using 632.8 nm (red) laser light.

  17. Development of a Framework for Genetic Algorithms

    OpenAIRE

    Wååg, Håkan

    2009-01-01

    Genetic algorithms are a method of optimization that can be used to solve many different kinds of problems. This thesis focuses on developing a framework for genetic algorithms that is capable of solving at least the two problems explored in the work. Other problems are supported by allowing user-made extensions. The purpose of this thesis is to explore the possibilities of genetic algorithms for optimization problems and artificial intelligence applications. To test the framework two applications are...

  18. A Developed ESPRIT Algorithm for DOA Estimation

    Science.gov (United States)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for estimating the direction of arrival (DOA) of a target has been developed, which aspires to increase estimation accuracy and decrease calculation cost. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity effect. The efficacy of the proposed algorithm is verified by Monte Carlo simulation, and the DOA estimation accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the normal ESPRIT methods, leading to enhanced estimator performance.
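For reference, standard ESPRIT for a uniform linear array (the baseline, not the TS-ESPRIT variant proposed in the paper) can be sketched as:

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """Standard ESPRIT DOA estimation for a uniform linear array.
    X: (n_sensors, n_snapshots) complex data matrix;
    d: element spacing in wavelengths. Returns sorted angles in degrees."""
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    w, v = np.linalg.eigh(R)
    Es = v[:, np.argsort(w)[::-1][:n_sources]]   # signal subspace
    # Rotation between the two overlapping subarrays (rows 0..n-2 and 1..n-1)
    phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
    eig = np.linalg.eigvals(phi)                 # e^{j 2 pi d sin(theta)}
    return np.sort(np.degrees(np.arcsin(np.angle(eig) / (2 * np.pi * d))))
```

The steering model assumed here is `a_n(theta) = exp(1j * 2 * pi * d * n * sin(theta))`; the TS-ESPRIT idea would apply this estimator across multiple time/space resolutions and combine the results.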

  19. Mentoring to develop research self-efficacy, with particular reference to previously disadvantaged individuals

    OpenAIRE

    S. Schulze

    2010-01-01

    The development of inexperienced researchers is crucial. In response to the lack of research self-efficacy of many previously disadvantaged individuals, the article examines how mentoring can enhance the research self-efficacy of mentees. The study is grounded in the self-efficacy theory (SET) – an aspect of the social cognitive theory (SCT). Insights were gained from an in-depth study of SCT, SET and mentoring, and from a completed mentoring project. This led to the formulation of three basi...

  20. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  1. Development of Educational Support System for Algorithm using Flowchart

    Science.gov (United States)

    Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta

    Recently, information technology has become indispensable for business and industrial development. However, it has become a social problem that the number of software developers is insufficient. To solve this problem, it is necessary to develop and implement an environment for learning algorithms and programming languages. In this paper, we describe an algorithm-study support system for programmers that uses flowcharts. Since the proposed system uses a Graphical User Interface (GUI), it becomes easy for a programmer to understand the algorithm in a program.

  2. Developed algorithm for the application of British method of concrete

    African Journals Online (AJOL)

    t-iyke

    Most of the methods of concrete mix design developed over the years were geared towards manual approach. ... Key words: Concrete mix design; British method; Manual Approach; Algorithm. ..... Statistics for Science and Engineering.

  3. Mentoring to develop research self-efficacy, with particular reference to previously disadvantaged individuals

    Directory of Open Access Journals (Sweden)

    S. Schulze

    2010-07-01

    Full Text Available The development of inexperienced researchers is crucial. In response to the lack of research self-efficacy of many previously disadvantaged individuals, the article examines how mentoring can enhance the research self-efficacy of mentees. The study is grounded in the self-efficacy theory (SET), an aspect of the social cognitive theory (SCT). Insights were gained from an in-depth study of SCT, SET and mentoring, and from a completed mentoring project. This led to the formulation of three basic principles. Firstly, institutions need to provide supportive environmental conditions that facilitate research self-efficacy. This implies a supportive and efficient collective system. The possible effects of performance ratings and reward systems at the institution also need to be considered. Secondly, mentoring needs to create opportunities for young researchers to experience successful learning as a result of appropriate action. To this end, mentees need to be involved in actual research projects in small groups. At the same time the mentor needs to facilitate skills development by coaching and encouragement. Thirdly, mentors need to encourage mentees to believe in their ability to successfully complete research projects. This implies encouraging positive emotional states, stimulating self-reflection and self-comparison with others in the group, giving positive evaluative feedback and being an intentional role model.

  4. Development of morphing algorithms for HistFactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1σ variations of a given uncertainty. Using that information, HistFactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses the information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (<1σ) and large (>2σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.
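The common piecewise-linear "vertical" morphing applied per histogram bin can be sketched as follows; the information-geometric algorithm presented in the talk generalizes beyond this baseline.

```python
import numpy as np

def vertical_morph(nominal, up, down, alpha):
    """Piecewise-linear vertical morphing of a histogram template.
    nominal, up, down: bin contents at 0, +1 sigma, -1 sigma;
    alpha: nuisance-parameter value in units of sigma."""
    nominal, up, down = map(np.asarray, (nominal, up, down))
    if alpha >= 0:
        return nominal + alpha * (up - nominal)
    return nominal + alpha * (nominal - down)
```

At alpha = ±1 this reproduces the input variation templates exactly; smoother interpolation schemes (polynomial, exponential) differ only in how they join the two linear pieces near alpha = 0.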

  5. Development of a Novel Locomotion Algorithm for Snake Robot

    International Nuclear Information System (INIS)

    Khan, Raisuddin; Billah, Md Masum; Watanabe, Mitsuru; Shafie, A A

    2013-01-01

    A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine locomotion is renowned for snake robots in disaster-recovery missions, where narrow-space navigation must be overcome. Several locomotion patterns for snake navigation, such as concertina or rectilinear, may be suitable for narrow spaces but are highly inefficient if the same locomotion is used in open spaces, where the resulting friction reduction makes snake movement difficult. A novel locomotion algorithm has been proposed based on a modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. Using this developed locomotion algorithm, the snake robot is able to navigate in narrow spaces, surmounting the limitations of other locomotion patterns in narrow-space navigation

  6. Hepatic steatosis development with four weeks of physical inactivity in previously active, hyperphagic OLETF rats.

    Science.gov (United States)

    Linden, Melissa A; Meers, Grace M; Ruebel, Meghan L; Jenkins, Nathan T; Booth, Frank W; Laughlin, M Harold; Ibdah, Jamal A; Thyfault, John P; Rector, R Scott

    2013-05-01

    Physical activity-induced prevention of hepatic steatosis is maintained during short-term (7-day) transitions to an inactive state; however, whether these protective effects are present under a longer duration of physical inactivity is largely unknown. Here, we sought to determine whether previous physical activity had protective effects on hepatic steatosis and metabolic health following 4 wk of physical inactivity. Four-week old, hyperphagic, male Otsuka Long-Evans Tokushima fatty (OLETF) rats were randomly assigned to either a sedentary group for 16 wk (OLETF-SED), given access to running wheels for 16 wk with wheels locked 5 h (OLETF-WL5hr) or given access to running wheels for 12 wk with wheels locked 4 wk (OLETF-WL4wk) prior to death. Four weeks of physical inactivity caused hepatic steatosis development, but liver triglycerides remained 60% lower than OLETF-SED (P inactivity, whereas markers of fatty acid uptake and lipogenesis remained relatively suppressed following 4 wk of inactivity. In addition, 4 wk of inactivity caused a complete loss of activity-induced increases in serum IL-6 and reductions in regulated upon activation, normal T-cell expressed, and secreted (RANTES), and a partial loss in reductions in leptin, monocyte chemoattractant protein-1, and TNF-α. In conclusion, 4 wk of physical inactivity does not result in a complete loss in physical activity-induced benefits but does cause deterioration in the liver phenotype and overall metabolic health in hyperphagic OLETF rats.

  7. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  8. B&W PWR advanced control system algorithm development

    International Nuclear Information System (INIS)

    Winks, R.W.; Wilson, T.L.; Amick, M.

    1992-01-01

    This paper discusses algorithm development of an Advanced Control System for the B&W Pressurized Water Reactor (PWR) nuclear power plant. The paper summarizes the history of the project, describes the operation of the algorithm, and presents transient results from a simulation of the plant and control system. The history discusses the steps in the development process and the roles played by the utility owners, B&W Nuclear Service Company (BWNS), Oak Ridge National Laboratory (ORNL), and the Foxboro Company. The algorithm description is a brief overview of the features of the control system. The transient results demonstrate operation of the algorithm in a normal power-maneuvering mode and in a moderately large upset following a feedwater pump trip

  9. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.

  10. An ATR architecture for algorithm development and testing

    Science.gov (United States)

    Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym

    2013-05-01

    A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.

  11. A Developed Artificial Bee Colony Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Ye Jin

    2018-04-01

    Full Text Available The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is a kind of uncertainty conversion model between a qualitative concept T̃, presented in natural language, and its quantitative expression, integrating probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima by introducing a new selection mechanism, replacing the onlooker bees' search formula and changing the scout bees' updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.
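A minimal sketch of the basic ABC algorithm (employed, onlooker, and scout phases) that the paper builds on; the parameter choices and the onlooker probability form here are illustrative, not the paper's settings.

```python
import numpy as np

def abc_minimize(f, bounds, n_bees=20, limit=20, iters=200, seed=0):
    """Basic Artificial Bee Colony minimization of f over box bounds.
    bounds: list of (low, high) per dimension. Returns (best_x, best_f)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    foods = rng.uniform(lo, hi, (n_bees, dim))       # food sources
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_bees, dtype=int)             # stagnation counters
    for _ in range(iters):
        for phase in range(2):
            if phase == 0:                           # employed bees: one move each
                picks = np.arange(n_bees)
            else:                                    # onlookers: favor better sources
                p = 1.0 / (1.0 + fit - fit.min())
                picks = rng.choice(n_bees, n_bees, p=p / p.sum())
            for i in picks:
                k = rng.integers(n_bees)
                while k == i:
                    k = rng.integers(n_bees)
                j = rng.integers(dim)
                cand = foods[i].copy()
                cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
                cand = np.clip(cand, lo, hi)
                fc = f(cand)
                if fc < fit[i]:                      # greedy replacement
                    foods[i], fit[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1
        worn = trials > limit                        # scouts abandon stale sources
        foods[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
        fit[worn] = [f(x) for x in foods[worn]]
        trials[worn] = 0
    b = int(fit.argmin())
    return foods[b], fit[b]
```

The paper's modifications would slot into three places here: the onlooker selection probability `p`, the neighbour-move formula, and the scout re-initialization, each replaced by a cloud-model-driven rule.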

  12. Effect of previous exhaustive exercise on metabolism and fatigue development during intense exercise in humans

    DEFF Research Database (Denmark)

    Iaia, F. M.; Perez-Gomez, J.; Nordsborg, Nikolai

    2010-01-01

    The present study examined how metabolic response and work capacity are affected by previous exhaustive exercise. Seven subjects performed an exhaustive cycle exercise ( approximately 130%-max; EX2) after warm-up (CON) and 2 min after an exhaustive bout at a very high (VH; approximately 30 s), high...

  13. Applied economic model development algorithm for electronics company

    Directory of Open Access Journals (Sweden)

    Mikhailov I.

    2017-01-01

    Full Text Available The purpose of this paper is to report on experience gained in creating methods and algorithms that simplify the development of applied decision support systems. It describes an algorithm that is the result of two years of research and more than one year of practical verification. In the business of testing electronic components, the moment of contract conclusion is the point at which the greatest managerial mistakes can be made: at this stage, it is difficult to achieve a realistic assessment of the time limit and the wage fund for the future work. Creating an estimating model is one way to solve this problem. The article presents an algorithm for the creation of such models, based on the example of developing an analytical model for estimating the amount of work. The paper lists the algorithm's stages and explains their meaning together with the participants' goals. Implementing the algorithm has made possible a twofold acceleration of the development of these models and the fulfilment of management's requirements. The resulting models have had a significant economic effect. A new set of tasks was identified for further theoretical study.

  14. Search for 'Little Higgs' and reconstruction algorithms developments in Atlas

    International Nuclear Information System (INIS)

    Rousseau, D.

    2007-05-01

    This document summarizes developments of the framework and reconstruction algorithms for the ATLAS detector at the LHC. A library of reconstruction algorithms has been developed in an increasingly complex environment. The reconstruction software, originally designed against an optimistic Monte-Carlo simulation, has been confronted with a more detailed 'as-built' simulation. The 'Little Higgs' is an effective theory which can be taken for granted, or as an opportunity to study heavy resonances. In several cases, these resonances can be detected in original channels like tZ, ZH or WH. (author)

  15. DIDACTIC TOOLS FOR THE STUDENTS’ ALGORITHMIC THINKING DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. P. Pushkaryeva

    2017-01-01

    Full Text Available Introduction. Modern engineers must possess a high potential of cognitive abilities, in particular algorithmic thinking (AT). In this regard, the training of future experts (university graduates of technical specialities) has to provide knowledge of the principles and ways of designing various algorithms, and the ability to analyze them and to choose the most optimal variants for engineering activity. For the full formation of AT skills it is necessary to consider all channels of psychological perception and cogitative processing of educational information: visual, auditory, and kinesthetic. The aim of the present research is the theoretical basis for the design, development and use of resources for the successful development of AT during training in programming. Methodology and research methods. The methodology of the research involves the basic theses of cognitive psychology and the information approach to organizing the educational process. The research used the following methods: analysis; modeling of cognitive processes; design of training tools that take into account the mentality and peculiarities of information perception; and diagnostics of the efficiency of the didactic tools. Results. A three-level model for training future engineers in programming, aimed at the development of AT skills, was developed. The model includes three components: aesthetic, simulative, and conceptual. Stages of mastering a new discipline are allocated. It is proved that for the development of AT skills when training in programming it is necessary to use kinesthetic tools at the stage of mental algorithmic map formation, and algorithmic animation and algorithmic mental maps at the stage of algorithmic model and conceptual image formation. Kinesthetic tools for the development of students' AT skills when training in algorithmization and programming were designed.
    The use of kinesthetic training simulators in the educational process provides for the effective development of an algorithmic style of

  16. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results of the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results by qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  17. Development and Application of a Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.
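
The sensor data qualification step described above can be illustrated with a toy check (thresholds and rules are hypothetical; the abstract does not publish PHALT's actual algorithms): a reading is flagged as suspect when it leaves a plausible range or sticks at a constant value for several consecutive samples.

```python
def qualify(samples, lo, hi, stuck_n=5):
    """Flag each sample as suspect (True) if it is out of range [lo, hi]
    or if the last stuck_n samples are all identical (a 'stuck' sensor)."""
    flags = []
    for i, s in enumerate(samples):
        out_of_range = not (lo <= s <= hi)
        window = samples[max(0, i - stuck_n + 1): i + 1]
        stuck = len(window) == stuck_n and len(set(window)) == 1
        flags.append(out_of_range or stuck)
    return flags
```

A diagnosis layer would then isolate the failed sensor from the pattern of flags, as in the Power Distribution Unit demonstration.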

  18. Development of real-time plasma analysis and control algorithms for the TCV tokamak using SIMULINK

    International Nuclear Information System (INIS)

    Felici, F.; Le, H.B.; Paley, J.I.; Duval, B.P.; Coda, S.; Moret, J.-M.; Bortolon, A.; Federspiel, L.; Goodman, T.P.; Hommen, G.; Karpushov, A.; Piras, F.; Pitzschke, A.; Romero, J.; Sevillano, G.; Sauter, O.; Vijvers, W.

    2014-01-01

    Highlights: • A new digital control system for the TCV tokamak has been commissioned. • The system is entirely programmable by SIMULINK, allowing rapid algorithm development. • Different control system nodes can run different algorithms at varying sampling times. • The previous control system functions have been emulated and improved. • New capabilities include MHD control, profile control, equilibrium reconstruction. - Abstract: One of the key features of the new digital plasma control system installed on the TCV tokamak is the possibility to rapidly design, test and deploy real-time algorithms. With this flexibility the new control system has been used for a large number of new experiments which exploit TCV's powerful actuators consisting of 16 individually controllable poloidal field coils and 7 real-time steerable electron cyclotron (EC) launchers. The system has been used for various applications, ranging from event-based real-time MHD control to real-time current diffusion simulations. These advances have propelled real-time control to one of the cornerstones of the TCV experimental program. Use of the SIMULINK graphical programming language to directly program the control system has greatly facilitated algorithm development and allowed a multitude of different algorithms to be deployed in a short time. This paper will give an overview of the developed algorithms and their application in physics experiments

  19. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed, the output of this site optimization gives the selected number of base station sites ...

  20. Development of information preserving data compression algorithm for CT images

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio

    1989-01-01

    Although digital imaging techniques in radiology develop rapidly, problems arise in archival storage and communication of image data. This paper reports on a new information preserving data compression algorithm for computed tomographic (CT) images. This algorithm consists of the following five processes: 1. Pixels surrounding the human body showing CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. Image data, obtained with the above process, are decomposed into bit planes. 5. The bit state transitions in each bit plane are encoded by run length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34, and 4.40, respectively. (author)
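
Steps 2 and 5 of the algorithm, neighbour differencing and run-length coding of bit-plane transitions, can be sketched as follows (simplified; the paper's custom variable-length code of step 3 is omitted):

```python
def delta_encode(row):
    """Step 2: encode each pixel as the difference from its left neighbour."""
    out, prev = [], 0
    for v in row:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(deltas):
    """Inverse of delta_encode: cumulative sum restores the original row."""
    out, prev = [], 0
    for d in deltas:
        prev += d
        out.append(prev)
    return out

def run_length_encode(bits):
    """Step 5: record the first bit state and the lengths of constant runs."""
    runs, count = [], 1
    for a, b in zip(bits, bits[1:]):
        if a == b:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return bits[0], runs
```

Because CT values vary slowly along a matrix line, the deltas cluster near zero, which is what makes the bit planes highly compressible.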

  1. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields which require finding the most appropriate value, optimization has become a vital approach for producing effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to form an idea of the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus, 11th Edition, and the obtained results have been compared with classical optimization solutions.

  2. Sustainable development, tourism and territory. Previous elements towards a systemic approach

    Directory of Open Access Journals (Sweden)

    Pierre TORRENTE

    2009-01-01

    Full Text Available Today, tourism is one of the major challenges for many countries and territories. The balance of payments, an ever-increasing number of visitors and the significant development of the tourism offer clearly illustrate the booming trend in this sector. This macro-economic approach is often used by the organizations in charge of tourism, WTO for instance. Quantitative assessments which consider the satisfaction of customers’ needs as an end in itself have prevailed both in tourism development schemes and in prospective approaches since the sixties.

  3. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    Science.gov (United States)

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis, and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60-0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement, and the previously published HALT-C model was outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high risk for developing HCC. PMID:24169273
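
The c-statistic used to assess discrimination above is the probability that a randomly chosen patient who developed HCC is scored higher than one who did not. A minimal computation, equivalent to the area under the ROC curve:

```python
def c_statistic(scores, labels):
    """Concordance statistic: fraction of (positive, negative) pairs where the
    positive case gets the higher score; ties count as half-concordant."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))
```

A c-statistic of 0.5 is chance-level ranking; the 0.61 vs 0.64 values reported above are therefore modest but measurably different discriminative abilities.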

  4. Developing and Implementing the Data Mining Algorithms in RAVEN

    International Nuclear Information System (INIS)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-01-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  5. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  6. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  7. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  8. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph-based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph-based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use, and to reduce the risk of spreading antibiotic-resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram-positive and Gram-negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimens. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
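
A graph-based evolutionary algorithm controls information flow by restricting mating to graph neighbours, which slows the spread of any single genotype and preserves diversity. One steady-state step might look like this (a generic sketch with placeholder operators, not the paper's exact algorithm):

```python
import random

def graph_ea_step(population, adjacency, fitness, crossover, mutate):
    """One steady-state step of a graph-based EA: a random vertex mates with one
    of its graph neighbours, and the child replaces the parent if no worse."""
    v = random.randrange(len(population))
    u = random.choice(adjacency[v])
    child = mutate(crossover(population[v], population[u]))
    if fitness(child) >= fitness(population[v]):
        population[v] = child
    return population
```

Sparser graphs (rings, grids) slow takeover and preserve diversity longer than a complete graph, which is the diversity-control knob the study varies.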

  9. Development of target-tracking algorithms using neural network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whaoan; Yoon, Sook; Baek, Seong Hyun; Lee, Myung Jae [Chonbuk National University, Chonjoo (Korea)

    1998-04-01

    The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly, to protect workers from high-radiation environments. Such applications require complete stability of the robot system, so precise tracking of the robot is essential for the whole system. This research aims to accomplish that goal by developing appropriate algorithms for remote-control robot systems. A neural network tracking system is designed and tested to trace a robot endpoint. The model is intended to utilize the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks, for position detection and prediction. Tracking algorithms are developed and tested for the two models. Results of the experiments show that both models are promising as real-time target-tracking systems for remote-control robot systems. (author). 10 refs., 47 figs.

  10. Development of hybrid artificial intelligent based handover decision algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Aibinu

    2017-04-01

    Full Text Available The possibility of seamless handover remains a mirage despite the plethora of existing handover algorithms. The underlying factor responsible for this has been traced to the handover decision module in the handover process. Hence, in this paper, a novel hybrid artificial-intelligence-based handover decision algorithm has been developed. The developed model is a hybrid of an Artificial Neural Network (ANN)-based prediction model and fuzzy logic. On accessing the network, the Received Signal Strength (RSS) was acquired over a period of time to form time-series data. The data were then fed to the newly proposed k-step-ahead ANN-based RSS prediction system to estimate the prediction model coefficients. The synaptic weights and adaptive coefficients of the trained ANN were then used to compute the k-step-ahead ANN-based RSS prediction model coefficients. The predicted RSS value was later codified as fuzzy sets and, in conjunction with other measured network parameters, fed into the fuzzy logic controller in order to finalize the handover decision process. The performance of the newly developed k-step-ahead ANN-based RSS prediction algorithm was evaluated using simulated and real data acquired from available mobile communication networks. Results obtained in both cases show that the proposed algorithm is capable of predicting the RSS value ahead to about ±0.0002 dB. The cascaded effect of the complete handover decision module was also evaluated, and the results show that the newly proposed hybrid approach was able to reduce the ping-pong effect associated with other handover techniques.
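
The k-step-ahead prediction idea can be illustrated with a linear autoregressive stand-in for the ANN (function names are hypothetical, and the paper trains a neural network rather than solving least squares): a window of past RSS samples is mapped directly to the value k steps in the future.

```python
import numpy as np

def fit_ar(series, order, k):
    """Fit a k-step-ahead linear predictor by least squares: the last
    `order` samples are regressed onto the value k steps later."""
    m = len(series) - order - k + 1
    X = np.array([series[i:i + order] for i in range(m)])
    y = np.array([series[i + order + k - 1] for i in range(m)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict(coeffs, recent):
    """Predict the value k steps ahead of the most recent window."""
    return float(np.dot(coeffs, recent))
```

The predicted value would then be fuzzified and combined with other network parameters in the decision stage, as the abstract describes.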

  11. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  12. Development of computational algorithms for quantification of pulmonary structures

    International Nuclear Information System (INIS)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A.; Pina, Diana R.

    2012-01-01

    High-resolution computed tomography (HRCT) has become the imaging diagnostic exam most commonly used for the evaluation of the sequelae of paracoccidioidomycosis. Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists of selecting a region of interest (ROI) and, by the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)
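
As a toy illustration of the density-mask idea (the thresholds here are illustrative conventions, not the paper's values), the emphysematous fraction of a lung ROI can be estimated by counting pixels below a Hounsfield-unit cutoff:

```python
def quantify_emphysema(roi, threshold=-950):
    """Fraction of lung pixels in the ROI below an emphysema HU cutoff.
    A crude density mask (< -200 HU) stands in for the paper's
    mask/morphology pipeline to exclude soft tissue and bone."""
    lung = [px for row in roi for px in row if px < -200]
    emph = [px for px in lung if px < threshold]
    return len(emph) / len(lung) if lung else 0.0
```

A real pipeline would add morphological opening/closing to clean the mask before the ratio is taken.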

  13. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line-array detector. The recent development of X-ray flat panel detectors has made fast CT imaging feasible and practical. This paper therefore explains the arrangement of a new detection system using the existing high-resolution (127 μm pixel size) flat panel detector at MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat-panel-detector-based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. Hence the project is divided into two major tasks: first, to develop the image reconstruction algorithm, and second, to integrate the X-ray imaging components into one CT system. An image reconstruction algorithm using the filtered back-projection method was developed and compared to other techniques. MATLAB was the tool used for the simulations and computations in this project. (Author)
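
Filtered back-projection has two steps: ramp-filter each projection in the frequency domain, then smear it back across the image along its acquisition angle. A compact parallel-beam sketch (simplified; no interpolation or spectral windowing):

```python
import numpy as np

def ramp_filter(projection):
    """Multiply the projection's spectrum by |f| (Ram-Lak ramp filter)."""
    freqs = np.fft.fftfreq(len(projection))
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

def filtered_backprojection(sinogram, thetas, size):
    """Accumulate each filtered projection along its angle (nearest-neighbour)."""
    recon = np.zeros((size, size))
    ys, xs = np.mgrid[:size, :size] - (size - 1) / 2.0
    for proj, theta in zip(sinogram, thetas):
        t = xs * np.cos(theta) + ys * np.sin(theta)   # detector coordinate per pixel
        idx = np.clip(np.round(t + (size - 1) / 2.0).astype(int), 0, size - 1)
        recon += ramp_filter(proj)[idx]
    return recon * np.pi / len(thetas)
```

Without the ramp filter the reconstruction would be badly blurred, which is why filtered back-projection rather than plain back-projection is the standard comparison baseline.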

  14. Genetic Algorithms for Development of New Financial Products

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2007-06-01

    Full Text Available New Product Development (NPD) is recognized as a fundamental activity that has a relevant impact on the performance of companies. Despite the relevance of the financial market, there is a lack of work on new financial product development. The aim of this research is to propose the use of Genetic Algorithms (GA) as an alternative procedure for evaluating the most favorable combination of variables for a product launch. The paper focuses on: (i) determining the essential variables of the financial product studied (an investment fund); (ii) determining how to evaluate the success of a new investment fund launch; and (iii) how GA can be applied to the financial product development problem. The proposed framework was tested using 4 years of real data from the Brazilian financial market, and the results suggest that this is an innovative development methodology, useful for designing complex financial products with many attributes.

  15. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon appears in the form of a process subject to the strict rules of an algorithm describing the operation of the facility. An algorithmic model is understood as a formalized description of a subject specialist's scenario for the simulated process, whose structure is comparable with the structure of the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. Normally they are defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are variables bound by the operators. The language of algorithmic networks is highly expressive in the class of algorithms it can represent. Existing modeling automation systems based on algorithmic nets mainly use operators working with real numbers. Although this reduces their expressive power, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, monitoring in them is based on the analysis of gaps and deadlines in the graphs, with no predictive analysis of schedule execution. The library described here is designed to build such predictive models: by specifying source data, one obtains a set of projections from which to choose one and adopt it as a new plan.

  16. Development of Image Reconstruction Algorithms in electrical Capacitance Tomography

    International Nuclear Information System (INIS)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-01-01

    Electrical Capacitance Tomography (ECT) has not yet been developed well enough to be used at the industrial level. That is due, first, to difficulties in the measurement of very small capacitances (in the range of femtofarads) and, second, to the problem of on-line reconstruction of the images. The latter problem is also due to the small number of electrodes (a maximum of 16), which makes the usual reconstruction algorithms prone to many errors. In this work a new, purely geometrical method that could be used for this purpose is described. (Author) 4 refs

  17. Development of a Crosstalk Suppression Algorithm for KID Readout

    Science.gov (United States)

    Lee, Kyungmin; Ishitsuka, H.; Oguri, S.; Suzuki, J.; Tajima, O.; Tomita, N.; Won, Eunil; Yoshida, M.

    2018-06-01

    The GroundBIRD telescope aims to detect the B-mode polarization of the cosmic microwave background radiation using a kinetic inductance detector array as a polarimeter. For the readout of the signal from the detector array, we have developed a frequency-division multiplexing readout system based on a digital down converter method. These techniques in general suffer from leakage problems caused by crosstalk. A window function was applied in the field-programmable gate arrays to mitigate these problems, and it was tested at the algorithm level.
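
The leakage-mitigation role of a window function can be demonstrated outside the FPGA (a generic DSP illustration, not the GroundBIRD firmware): a tone that does not fall exactly on an FFT bin leaks power across the whole spectrum with a rectangular window, and far less with a tapered window such as Hann.

```python
import numpy as np

def leaked_fraction(n=1024, f=100.3, window=None):
    """Fraction of an off-bin tone's power landing outside the main lobe
    (taken here as +/-4 bins around the spectral peak)."""
    t = np.arange(n)
    x = np.sin(2 * np.pi * f * t / n)     # f = 100.3 cycles: deliberately off-bin
    if window is not None:
        x = x * window
    spec = np.abs(np.fft.rfft(x)) ** 2
    peak = int(np.argmax(spec))
    main = spec[max(0, peak - 4): peak + 5].sum()
    return float((spec.sum() - main) / spec.sum())
```

In a frequency-division multiplexed readout, the leaked power from one channel's tone is exactly what shows up as crosstalk in neighbouring channels, which is why the window is applied before the down-conversion filter bank.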

  18. Development of an algorithm for quantifying extremity biological tissue

    International Nuclear Information System (INIS)

    Pavan, Ana L.M.; Miranda, Jose R.A.; Pina, Diana R. de

    2013-01-01

    Computed radiography (CR) has become the most widely used technology for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis via CR are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections, and fractures. However, the standards used to optimize these images are based on international protocols. It is therefore necessary to establish radiographic techniques for the CR system that provide a reliable medical diagnosis, with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was assembled. Using Matlab® software, a computational algorithm was developed that quantifies the average thickness of the soft tissue and bone present in the anatomical region under study, as well as the corresponding thickness in simulator materials (aluminum and Lucite). This was made possible by applying masking and a Gaussian-removal technique to the histograms. As a result, an average soft-tissue thickness of 18.97 mm and a bone-tissue thickness of 6.15 mm were obtained, with equivalents in simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agreed with the average thickness of the biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom.
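
    The thresholding idea behind such tissue quantification can be sketched in a few lines (a toy example with synthetic data and an assumed pixel spacing, not the paper's CT database or its Gaussian-removal step): soft tissue and bone occupy distinct Hounsfield-unit ranges, so masks built from HU thresholds yield per-column tissue thicknesses.

    ```python
    import numpy as np

    # Synthetic CT slice in Hounsfield units: air background, soft tissue, bone
    hu = np.full((100, 100), -1000.0)
    hu[20:80, 30:70] = 40.0      # soft tissue
    hu[40:60, 45:55] = 700.0     # cortical bone

    pixel_mm = 0.5  # assumed pixel spacing (illustrative)

    soft = (hu > -200) & (hu < 200)
    bone = hu >= 200

    # Mean tissue thickness along the vertical axis, in mm, per image column
    soft_thk = soft.sum(axis=0).astype(float) * pixel_mm
    bone_thk = bone.sum(axis=0).astype(float) * pixel_mm
    print(soft_thk[50], bone_thk[50])  # → 20.0 10.0
    ```

    A real pipeline would additionally fit and remove Gaussian components of the HU histogram to separate overlapping tissue classes before converting pixel counts to thicknesses.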

  19. Leadership development in the age of the algorithm.

    Science.gov (United States)

    Buckingham, Marcus

    2012-06-01

    By now we expect personalized content--it's routinely served up by online retailers and news services, for example. But the typical leadership development program still takes a formulaic, one-size-fits-all approach. And it rarely happens that an excellent technique can be effectively transferred from one leader to all others. Someone trying to adopt a practice from a leader with a different style usually seems stilted and off--a Franken-leader. Breakthrough work at Hilton Hotels and other organizations shows how companies can use an algorithmic model to deliver training tips uniquely suited to each individual's style. It's a five-step process: First, a company must choose a tool with which to identify each person's leadership type. Second, it should assess its best leaders, and third, it should interview them about their techniques. Fourth, it should use its algorithmic model to feed tips drawn from those techniques to developing leaders of the same type. And fifth, it should make the system dynamically intelligent, with user reactions sharpening the content and targeting of tips. The power of this kind of system--highly customized, based on peer-to-peer sharing, and continually evolving--will soon overturn the generic model of leadership development. And such systems will inevitably break through any one organization, until somewhere in the cloud the best leadership tips from all over are gathered, sorted, and distributed according to which ones suit which people best.

  20. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    Science.gov (United States)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to be able to access. The current code works in the Unity game engine, which does have cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as-is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  1. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce both the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding the treatment of heparin resistance, and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is an inadequate activated clotting time after a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears to be valid and is supported by high-level evidence and clinician opinion. The next step is a randomized clinical trial in humans to test the clinical procedure guideline algorithm against current standard clinical practice.
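
    The overall shape of such a decision algorithm can be sketched as a small function. This is only a hedged illustration of the flow described in the abstract, not the published algorithm: the ACT target value and the returned action strings are invented placeholders, and only the 450 IU/kg dose threshold comes from the abstract itself.

    ```python
    def heparin_resistance_plan(act_seconds, heparin_iu_per_kg,
                                act_target=480, dose_threshold=450):
        """Suggest a next step before cardiopulmonary bypass (illustrative only)."""
        if act_seconds >= act_target:
            return "proceed to bypass"
        if heparin_iu_per_kg <= dose_threshold:
            return "give additional heparin and re-check ACT"
        # Adequate loading dose but inadequate ACT: treat as heparin resistance
        return "supplement antithrombin III"

    print(heparin_resistance_plan(400, 500))  # → supplement antithrombin III
    ```

    Encoding the steps this way is what makes the face-validity surveys possible: each branch is explicit and can be reviewed by practicing perfusionists.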

  2. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
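
    A toy simulation conveys how election can work under these harsh restrictions (one-bit broadcasts; a processor only distinguishes silence from the arrival of one or more messages). This sketch is consistent with that model but is not the paper's exact algorithm; the broadcast probability and network size are invented.

    ```python
    import random

    random.seed(1)

    def elect(n, p=0.5, max_rounds=10_000):
        """One-bit leader election in an anonymous complete network (sketch)."""
        active = list(range(n))
        for rnd in range(1, max_rounds + 1):
            senders = [v for v in active if random.random() < p]
            if len(senders) == 1:
                return senders[0], rnd   # lone sender hears silence: leader
            if senders:
                active = senders         # silent nodes that heard a message withdraw
        raise RuntimeError("no leader elected within the round budget")

    leader, rounds = elect(16)
    print(0 <= leader < 16, rounds >= 1)
    ```

    Each round halves the expected number of contenders, so the expected number of rounds grows only logarithmically with the network size, mirroring the good time complexity the paper reports.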

  3. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints in the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector machine method. The combination of these methods allows dynamic and static objects to be detected successfully. The algorithm can be applied in various autonomous mobile robots.
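
    As a rough illustration of the descriptor half of that pipeline (not the authors' implementation; the patch contents and bin count are invented for the example), a histogram of oriented gradients for a single cell can be computed with NumPy: gradient magnitudes vote into orientation bins, and the result is normalized.

    ```python
    import numpy as np

    def hog_cell(patch, bins=9):
        """Histogram of oriented gradients for one cell (simplified sketch)."""
        gy, gx = np.gradient(patch.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.rad2deg(np.arctan2(gy, gx)) % 180      # unsigned orientation
        hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
        norm = np.linalg.norm(hist)
        return hist / norm if norm > 0 else hist

    # A vertical edge yields horizontal gradients: energy lands in the first bin
    patch = np.tile(np.r_[np.zeros(4), np.ones(4)], (8, 1))
    h = hog_cell(patch)
    print(h.argmax())  # → 0
    ```

    In a full detector, many such cell histograms are concatenated into a window descriptor and fed to a linear SVM trained on person/non-person windows.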

  4. Algorithms

    Indian Academy of Sciences (India)

    of programs, illustrate a method of establishing the ... importance of methods of establishing the correctness of .... Thus, the proof will be different for each input ..... Formal methods are pivotal in the design, development, and maintenance of ...

  5. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

    Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of a lack of sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers first have to put considerable effort into constructing it. To save researchers time and effort, here we develop a web tool that implements our performance comparison framework, featuring fast data processing, a comprehensive performance comparison, and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and to select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results for each compared algorithm and each selected performance index can be downloaded as text files for further analyses.

  6. Development of a parallel genetic algorithm using MPI and its application in a nuclear reactor core. Design optimization

    International Nuclear Information System (INIS)

    Waintraub, Marcel; Pereira, Claudio M.N.A.; Baptista, Rafael P.

    2005-01-01

    This work presents the development of a distributed parallel genetic algorithm applied to nuclear reactor core design optimization. The parallelism was implemented with the Message Passing Interface (MPI) library, the standard for parallel computation on distributed-memory platforms; another important characteristic of MPI is its portability across architectures. The main objectives of this paper are: validation of the results obtained by applying this algorithm to a nuclear reactor core optimization problem, through comparisons with previous results presented by Pereira et al.; and a performance test of the Brazilian Nuclear Engineering Institute (IEN) cluster on reactor physics optimization problems. The experiments demonstrated that the parallel genetic algorithm developed with the MPI library yielded significant gains in the obtained results and a marked reduction in processing time. Such results support the use of parallel genetic algorithms for the solution of nuclear reactor core optimization problems. (author)
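
    The master-worker pattern such a parallel GA typically uses (the master scatters individuals, workers evaluate fitness in parallel, the master gathers scores and breeds the next generation) can be sketched as follows. This is a hedged stand-in, not the paper's code: it uses a Python thread pool in place of MPI ranks, and the toy quadratic objective replaces the expensive reactor-physics evaluation; all parameters are invented.

    ```python
    import random
    from concurrent.futures import ThreadPoolExecutor

    def fitness(ind):
        # Toy objective standing in for an expensive core-physics evaluation
        return -sum((x - 0.25) ** 2 for x in ind)

    def evolve(pop_size=20, genes=8, gens=30, seed=3):
        rng = random.Random(seed)
        pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
        with ThreadPoolExecutor(max_workers=2) as pool:
            for _ in range(gens):
                scores = list(pool.map(fitness, pop))   # parallel evaluation step
                ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
                elite = ranked[: pop_size // 2]         # keep the best half
                pop = elite + [
                    [g + rng.gauss(0, 0.05) for g in rng.choice(elite)]
                    for _ in range(pop_size - len(elite))
                ]
        return max(pop, key=fitness)

    best = evolve()
    print(fitness(best) > -0.5)  # → True (a random individual averages about -1.2)
    ```

    With real MPI (e.g. via mpi4py), the `pool.map` line becomes a scatter/gather across ranks, which is where the reported processing-time reduction comes from when each fitness evaluation is costly.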

  7. Development and testing of incident detection algorithms. Vol. 2, research methodology and detailed results.

    Science.gov (United States)

    1976-04-01

    The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on time series and pattern recognition techniques. Attention was given to the effects o...

  8. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here was conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming, manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, used to calculate external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern here and restricts the useful application of these methods, so additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method approximates the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
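
    One classical way to reduce a lumped-parameter thermal network, loosely analogous to the exact-at-reference behavior described above, is static (Guyan-type) condensation: internal nodes are eliminated from the conductance matrix via a Schur complement, which is exact at steady state. The sketch below is a generic illustration with invented matrix values, not the paper's Matrix Reduction method.

    ```python
    import numpy as np

    # Conductance matrix of a 4-node thermal network (symmetric, weakly diag.-dominant)
    K = np.array([[ 2.0, -1.0,  0.0,  0.0],
                  [-1.0,  3.0, -1.0, -1.0],
                  [ 0.0, -1.0,  2.0, -1.0],
                  [ 0.0, -1.0, -1.0,  3.0]])
    q = np.array([1.0, 0.0, 0.0, 0.5])   # applied heat loads

    keep, drop = [0, 3], [1, 2]          # master nodes kept, internal nodes condensed
    Kmm = K[np.ix_(keep, keep)]
    Kms = K[np.ix_(keep, drop)]
    Ksm = K[np.ix_(drop, keep)]
    Kss = K[np.ix_(drop, drop)]

    # Schur complement: eliminate internal nodes exactly at steady state
    K_red = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
    q_red = q[keep] - Kms @ np.linalg.solve(Kss, q[drop])

    T_full = np.linalg.solve(K, q)
    T_red = np.linalg.solve(K_red, q_red)
    print(np.allclose(T_red, T_full[keep]))  # → True
    ```

    Transient accuracy is the harder part: the condensed capacitances must also be apportioned to the kept nodes, which is where industrial methods (and tool constraints such as ESATAN-TMS compatibility) differ.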

  9. Developing Information Power Grid Based Algorithms and Software

    Science.gov (United States)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of the problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems, and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.
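
    The grid-and-units translation layer described above can be sketched minimally: interpolate one model's field onto another model's grid and convert units in the same step. This is an invented toy (a 1-D latitude profile, linear interpolation, Celsius to Kelvin), not any actual coupler; real couplers handle 2-D/3-D conservative remapping.

    ```python
    import numpy as np

    # "Ocean model" output: sea-surface temperature in Celsius on a coarse grid
    ocean_lat = np.linspace(-80, 80, 9)
    sst_c = 25.0 * np.cos(np.deg2rad(ocean_lat))

    # The "atmosphere model" wants Kelvin on its own, finer grid
    atmo_lat = np.linspace(-80, 80, 33)

    def translate(src_x, src_val, dst_x, offset=273.15):
        """Interpolate one model's field onto another grid and convert units."""
        return np.interp(dst_x, src_x, src_val) + offset

    sst_k = translate(ocean_lat, sst_c, atmo_lat)
    print(sst_k.shape, round(float(sst_k[16]), 2))  # → (33,) 298.15
    ```

    Even this toy shows why the translation layer is a natural module boundary: neither model needs to know the other's grid or unit conventions.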

  10. Development of a Grapevine Pruning Algorithm for Using in Pruning

    Directory of Open Access Journals (Sweden)

    S. M Hosseini

    2017-10-01

    Full Text Available Introduction Great areas of the orchards of the world are dedicated to cultivation of the grapevine. Normally, grape vineyards are pruned twice a year. Among the operations of grape production, winter pruning of the bushes is the only one that has still not been fully mechanized, while it is known as one of the most laborious jobs on the farm. Some grape-producing countries use various mechanical machines to prune grapevines, but in most cases these machines do not perform well. An intelligent pruning machine therefore seems necessary, as such machines can reduce the labor required to prune the vineyards. In this study, it was attempted to develop an algorithm that uses image processing techniques to identify which parts of the grapevine should be cut. Stereo vision was used to obtain three-dimensional images of the bare bushes after their leaves had fallen in autumn. Stereo vision systems determine depth from two images taken at the same time but from slightly different viewpoints using two cameras. Each pair of images of a common scene is related by an epipolar geometry, and corresponding points in the image pairs are constrained to lie on pairs of conjugate epipolar lines. Materials and Methods Photos were taken in gardens of the Research Center for Agriculture and Natural Resources of Fars province, Iran. First, the distance between the plants and the cameras had to be determined; this distance can be obtained using stereo vision techniques, so this method was used here, with two pictures taken of each plant by the left and right cameras. The algorithm was written in MATLAB. To facilitate segmentation of the branches from the rows at the back, a blue plate with dimensions of 2 × 2 m was placed in the background.
    After loading the images, branches were segmented from the background to produce the binary image.
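
    The depth recovery at the heart of such a stereo setup reduces to one relation for rectified cameras: Z = f·B/d, with focal length f in pixels, baseline B, and disparity d in pixels. The sketch below is a generic illustration with invented camera parameters, not the vineyard system's calibration.

    ```python
    # Depth from a rectified stereo pair: Z = f * B / d
    def stereo_depth(disparity_px, focal_px=1200.0, baseline_m=0.12):
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # A branch feature matched at x=640 in the left image and x=592 in the right
    d = 640 - 592                      # 48 px disparity
    print(round(stereo_depth(d), 2))   # → 3.0 (metres)
    ```

    Because depth is inversely proportional to disparity, distant branches (small d) have much coarser depth resolution, which is one reason controlled backgrounds and close camera placement help in practice.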

  11. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Full Text Available Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution algorithm (DE) and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
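
    The DE meta-heuristic itself is compact enough to sketch from scratch. The version below is the textbook DE/rand/1/bin scheme, not the paper's tailored variant, and the quadratic objective is an invented stand-in for a real shift-assignment penalty function.

    ```python
    import numpy as np

    def de_minimize(f, bounds, pop=20, gens=100, F=0.6, CR=0.9, seed=7):
        """Minimal differential evolution (DE/rand/1/bin) sketch."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        x = rng.uniform(lo, hi, size=(pop, len(bounds)))
        fx = np.apply_along_axis(f, 1, x)
        for _ in range(gens):
            for i in range(pop):
                others = [j for j in range(pop) if j != i]
                a, b, c = x[rng.choice(others, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
                cross = rng.random(len(bounds)) < CR        # binomial crossover
                trial = np.where(cross, mutant, x[i])
                ft = f(trial)
                if ft <= fx[i]:                             # greedy selection
                    x[i], fx[i] = trial, ft
        return x[fx.argmin()], float(fx.min())

    # Toy stand-in for a shift-assignment penalty function
    penalty = lambda v: float(np.sum((v - np.arange(len(v))) ** 2))
    best, cost = de_minimize(penalty, [(-5, 5)] * 4)
    print(cost < 1e-2)  # → True
    ```

    For real nurse rostering, the continuous vector would be decoded into discrete shift assignments and the objective would sum constraint violations and preference penalties.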

  12. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields.

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin; Strawn, Laura K

    2016-02-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  13. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin

    2015-01-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280

  14. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Full Text Available Monitoring the behavior and activities of people in video surveillance has gained many applications in computer vision. This paper proposes a new approach to modeling the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by the frame-differencing algorithm. A thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, and shoulder, elbow, and knee points, are extracted. This research work represents the body model in three different ways: a stick-figure model, a patch model, and a rectangle body model. The activities of humans have been analyzed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm have been evaluated.
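
    The first step of that pipeline, frame differencing, is simple enough to sketch directly (a toy NumPy version with invented frames and threshold, not the paper's implementation): the absolute difference between the current frame and a background frame is thresholded into a binary foreground mask, which is what the thinning stage then skeletonizes.

    ```python
    import numpy as np

    def frame_difference(frame, background, threshold=25):
        """Binary foreground mask via frame differencing (illustrative sketch)."""
        diff = np.abs(frame.astype(int) - background.astype(int))
        return (diff > threshold).astype(np.uint8)

    background = np.full((6, 6), 10, dtype=np.uint8)
    frame = background.copy()
    frame[2:5, 2:4] = 200        # a bright "person" enters the scene

    mask = frame_difference(frame, background)
    print(int(mask.sum()))  # → 6 foreground pixels
    ```

    In practice the background frame is updated over time and the mask is cleaned with morphological operations before skeleton extraction.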

  15. [Incidence and clinical risk factors for the development of diabetes mellitus in women with previous gestational diabetes].

    Science.gov (United States)

    Domínguez-Vigo, P; Álvarez-Silvares, E; Alves-Pérez M T; Domínguez-Sánchez, J; González-González, A

    2016-04-01

    Gestational diabetes is considered a variant of diabetes mellitus, as they share a common pathophysiological basis: insulin resistance in target tissues and insufficient insulin secretion by pancreatic β-cells. Pregnancy is a unique physiological situation that provides an opportunity to identify future risk of diabetes mellitus. The aims were to determine the long-term incidence of diabetes mellitus in women previously diagnosed with gestational diabetes and to identify clinical risk factors for its development. A nested case-control cohort study was performed; 671 patients diagnosed with gestational diabetes between 1996 and 2009 were selected. The incidence of diabetes mellitus was estimated and two subgroups were formed. Group A, or cases: women who developed diabetes mellitus after a diagnosis of gestational diabetes. Group B, or controls: a random sample of 71 women with a history of gestational diabetes who remained normoglycemic during the follow-up period. Both groups were followed for up to 18 years postpartum. Kaplan-Meier survival analysis was used to study the influence of different gestational variables on the later development of diabetes mellitus over time, and Cox models were applied for categorical variables. Significant variables were studied by multivariate Cox analysis. In all analyses, hazard ratios were calculated with 95% confidence intervals. The incidence of diabetes mellitus was 10.3% in patients with a history of gestational diabetes. The following were identified as risk factors in the index pregnancy for the later development of diabetes mellitus: maternal age greater than 35 or younger than 27 years, BMI greater than 30 kg/m2, hypertensive disorders of pregnancy, insulin therapy, poor metabolic control, and more than one pregnancy complicated by gestational diabetes. Clinical factors have been identified in pregnancies complicated by gestational diabetes that determine a higher probability of progression to diabetes mellitus in the medium and long term.

  16. Planning policy, sustainability and housebuilder practices: The move into (and out of?) the redevelopment of previously developed land.

    Science.gov (United States)

    Karadimitriou, Nikos

    2013-05-01

    This paper explores the transformations of the housebuilding industry under the policy requirement to build on previously developed land (PDL). This requirement was a key lever in promoting the sustainable urban development agenda of UK governments from the early 1990s to 2010 and has survived albeit somewhat relaxed and permutated in the latest National Planning Policy Framework (NPPF). The paper therefore looks at the way in which the policy push towards densification and mixed use affected housebuilders' business strategy and practices and their ability to cope with the 2007 downturn of the housing market and its aftermath. It also points out the eventual feedback of some of these practices into planning policy. Following the gradual shift of British urban policy focus towards sustainability which started in the early 1990s, new configurations of actors, new skills, strategies and approaches to managing risk emerged in property development and housebuilding. There were at least two ways in which housebuilders could have responded to the requirements of developing long term mixed use high density projects on PDL. One way was to develop new products and to employ practices and combinations of practices involving phasing, a flexible approach to planning applications and innovative production methods. Alternatively, they could approach PDL development as a temporary turn of policy or view mixed use high density schemes as a niche market to be explored without drastically overhauling the business model of the entire firm. These transformations of the UK housebuilding sector were unfolding during a long period of buoyancy in the housing market which came to an end in 2007. Very little is known both about how housebuilder strategies and production practices evolved during the boom years as well as about how these firms coped with the effects of the 2007 market downturn. The paper draws on published data (company annual reports, government statistics) and primary

  17. Planning policy, sustainability and housebuilder practices: The move into (and out of?) the redevelopment of previously developed land

    Science.gov (United States)

    Karadimitriou, Nikos

    2013-01-01

    This paper explores the transformations of the housebuilding industry under the policy requirement to build on previously developed land (PDL). This requirement was a key lever in promoting the sustainable urban development agenda of UK governments from the early 1990s to 2010 and has survived albeit somewhat relaxed and permutated in the latest National Planning Policy Framework (NPPF). The paper therefore looks at the way in which the policy push towards densification and mixed use affected housebuilders’ business strategy and practices and their ability to cope with the 2007 downturn of the housing market and its aftermath. It also points out the eventual feedback of some of these practices into planning policy. Following the gradual shift of British urban policy focus towards sustainability which started in the early 1990s, new configurations of actors, new skills, strategies and approaches to managing risk emerged in property development and housebuilding. There were at least two ways in which housebuilders could have responded to the requirements of developing long term mixed use high density projects on PDL. One way was to develop new products and to employ practices and combinations of practices involving phasing, a flexible approach to planning applications and innovative production methods. Alternatively, they could approach PDL development as a temporary turn of policy or view mixed use high density schemes as a niche market to be explored without drastically overhauling the business model of the entire firm. These transformations of the UK housebuilding sector were unfolding during a long period of buoyancy in the housing market which came to an end in 2007. Very little is known both about how housebuilder strategies and production practices evolved during the boom years as well as about how these firms coped with the effects of the 2007 market downturn. The paper draws on published data (company annual reports, government statistics) and primary

  18. Development of a Simple Tool for Identifying Alcohol Use Disorder in Female Korean Drinkers from Previous Questionnaires.

    Science.gov (United States)

    Seo, Yu Ri; Kim, Jong Sung; Kim, Sung Soo; Yoon, Seok Joon; Suh, Won Yoon; Youn, Kwangmi

    2016-01-01

    This study aimed to develop a simple tool for identifying alcohol use disorders in female Korean drinkers from previous questionnaires. The research was conducted on 400 women who had consumed at least one alcoholic drink during the past month and visited the health promotion center at Chungnam National University Hospital between June 2013 and May 2014. Drinking habits and alcohol use disorders were assessed by structured interviews using the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition diagnostic criteria. The subjects were also asked to answer the Alcohol Use Disorders Identification Test (AUDIT), AUDIT-Consumption, CAGE (Cut down, Annoyed, Guilty, Eye-opener), TWEAK (Tolerance, Worried, Eye-opener, Amnesia, Kut down), TACE (Tolerance, Annoyed, Cut down, Eye-opener), and NET (Normal drinker, Eye-opener, Tolerance) questionnaires. The area under the receiver operating characteristic curve (AUROC) of each question on alcohol use disorders was assessed. The combination of the two questions with the largest AUROC was then compared with the previous questionnaires. Among the 400 subjects, 58 (14.5%) were identified as having an alcohol use disorder. The two questions with the largest AUROC were question no. 7 in AUDIT, "How often during the last year have you had a feeling of guilt or remorse after drinking?" and question no. 5 in AUDIT, "How often during the past year have you failed to do what was normally expected from you because of drinking?" with an AUROC (95% confidence interval [CI]) of 0.886 (0.850-0.915) and 0.862 (0.824-0.894), respectively. The AUROC (95% CI) of the combination of the two questions was 0.958 (0.934-0.976), with no significant difference compared with AUDIT, the existing questionnaire with the largest AUROC. These results suggest that the simple tool consisting of questions no. 5 and no. 7 in AUDIT is useful in identifying alcohol use disorders in Korean female drinkers.
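
    The AUROC used throughout this comparison has a direct rank interpretation: it is the probability that a randomly chosen case scores higher than a randomly chosen non-case (ties counting one half). A minimal sketch with invented questionnaire scores, not the study's data:

    ```python
    # AUROC from its rank (Mann-Whitney) definition
    def auroc(scores_pos, scores_neg):
        wins = sum(
            1.0 if p > n else 0.5 if p == n else 0.0
            for p in scores_pos for n in scores_neg
        )
        return wins / (len(scores_pos) * len(scores_neg))

    # Illustrative item scores for disordered vs. normal drinkers
    pos = [4, 3, 4, 2, 5]
    neg = [1, 2, 0, 1, 3, 0]
    print(auroc(pos, neg))
    ```

    Combining two items amounts to scoring each subject on their sum (or another fusion rule) and recomputing this statistic, which is how the combined-question AUROC of 0.958 would be obtained from the raw responses.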

  19. Development of an inter-layer solute transport algorithm for SOLTR computer program. Part 1. The algorithm

    International Nuclear Information System (INIS)

    Miller, I.; Roman, K.

    1979-12-01

    In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifer to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport.

  20. 34 CFR 606.24 - How does the Secretary use an applicant's performance under a previous development grant when...

    Science.gov (United States)

    2010-07-01

    ... those goals and objectives, the Secretary may— (1) Decide not to fund the applicant; or (2) Fund the..., or is making substantial progress toward fulfilling, the goals and objectives of the previous grant... did not fulfill the goals and objectives of a previous grant or is not making substantial progress...

  1. 34 CFR 607.24 - How does the Secretary use an applicant's performance under a previous development grant when...

    Science.gov (United States)

    2010-07-01

    ... objectives, the Secretary may— (1) Decide not to fund the applicant; or (2) Fund the applicant but impose... progress toward fulfilling, the goals and objectives of the previous grant, including, but not limited to... the goals and objectives of a previous grant or is not making substantial progress towards fulfilling...

  2. Development and verification of an analytical algorithm to predict absorbed dose distributions in ocular proton therapy using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Koch, Nicholas C; Newhauser, Wayne D

    2010-01-01

    Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.
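The superposition step can be illustrated with a toy model: pick weights so that a weighted sum of range-shifted pristine peaks is flat across the modulation width. The Gaussian peak shape, ranges, and direct solver below are illustrative stand-ins, not the paper's validated nozzle model:

```python
import math

def pristine(depth, rng, width=0.3):
    """Toy Gaussian stand-in for a pristine Bragg peak of range `rng` (cm)."""
    return math.exp(-((depth - rng) ** 2) / (2 * width ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small weight system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

ranges = [2.0 + 0.2 * i for i in range(6)]          # 2 mm pullbacks
# Choose weights so the summed dose equals 1.0 at each constituent range.
A = [[pristine(ri, rj) for rj in ranges] for ri in ranges]
weights = solve(A, [1.0] * 6)

def sobp(d):
    return sum(w * pristine(d, r) for w, r in zip(weights, ranges))

plateau = [sobp(2.0 + 0.05 * i) for i in range(21)]  # 2.0 to 3.0 cm
print(round(min(plateau), 2), round(max(plateau), 2))  # roughly flat near 1.0
```

The same weight-solving idea, with measured or Monte Carlo peak shapes in place of the Gaussian, is what turns pristine-peak data into a therapeutically useful SOBP.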

  3. Development of an automatic identification algorithm for antibiogram analysis

    OpenAIRE

    Costa, LFR; Eduardo Silva; Noronha, VT; Ivone Vaz-Moreira; Olga C Nunes; de Andrade, MM

    2015-01-01

    Routinely, diagnostic and microbiology laboratories perform antibiogram analysis, which can present some difficulties leading to misreadings and intra- and inter-reader deviations. An Automatic Identification Algorithm (AIA) has been proposed to overcome some issues associated with the disc diffusion method, which is the main goal of this work. AIA allows automatic scanning of inhibition zones obtained by antibiograms. More than 60 environmental isolates were tested using suscepti...

  4. Development of traffic light control algorithm in smart municipal network

    OpenAIRE

    Kuzminykh, Ievgeniia

    2016-01-01

    This paper presents a smart system that bypasses the normal functioning algorithm of traffic lights, triggering a green light when the lights are red or resetting the timer of the traffic lights when they are about to turn red. Different pieces of hardware, such as microcontroller units, transceivers, resistors, diodes, LEDs, a digital compass and an accelerometer, will be coupled together and programmed to create a unified complex intelligent system.
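The override behaviour described above can be sketched as a small state function; the states, timings, and trigger logic here are invented for illustration, since the paper's actual system is implemented in hardware:

```python
# Toy sketch of the traffic light override (invented logic): an approaching
# priority vehicle forces green, or restarts the timer if the light is green
# but about to turn red.
def override(state, seconds_left, priority_vehicle, green_time=30):
    if not priority_vehicle:
        return state, seconds_left          # normal operation, no change
    if state == "red":
        return "green", green_time          # trigger a green light
    if state == "green" and seconds_left < 5:
        return "green", green_time          # reset the timer before it turns red
    return state, seconds_left

print(override("red", 12, True))     # ('green', 30)
print(override("green", 3, True))    # ('green', 30)
print(override("green", 20, False))  # ('green', 20)
```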

  5. Development of pattern recognition algorithms for the central drift chamber of the Belle II detector

    Energy Technology Data Exchange (ETDEWEB)

    Trusov, Viktor

    2016-11-04

    In this thesis, the development of one of the pattern recognition algorithms for the Belle II experiment, based on conformal and Legendre transformations, is presented. In order to optimize the performance of the algorithm (CPU time and efficiency), specialized processing steps have been introduced. To show the achieved results, Monte-Carlo-based efficiency measurements of the tracking algorithms in the Central Drift Chamber (CDC) have been performed.
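The conformal transformation used in such track finders maps circular tracks through the origin onto straight lines, reducing circle finding to line finding. A minimal numerical check of this property (not the thesis code):

```python
# Conformal mapping used in track finding: a circle through the origin with
# center (a, b) satisfies 2*a*x + 2*b*y = x^2 + y^2, so under
# (u, v) = (x, y) / (x^2 + y^2) it becomes the straight line 2*a*u + 2*b*v = 1.
import math

def to_conformal(x, y):
    r2 = x * x + y * y
    return x / r2, y / r2

# Sample points on a circle through the origin, center (1.0, 0.5).
a, b = 1.0, 0.5
R = math.hypot(a, b)
points = []
for k in range(1, 8):
    t = 0.4 * k
    x = a + R * math.cos(t)
    y = b + R * math.sin(t)
    if abs(x) + abs(y) > 1e-9:          # skip the origin itself
        points.append(to_conformal(x, y))

# All mapped points satisfy the line 2*a*u + 2*b*v = 1 up to rounding.
residuals = [abs(2 * a * u + 2 * b * v - 1.0) for u, v in points]
print(max(residuals))
```

Once tracks are lines in (u, v) space, a Legendre (or Hough-style) transform can accumulate votes for line parameters, which is the part the thesis optimizes.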

  6. DEVELOPMENT OF THE ALGORITHM FOR CHOOSING THE OPTIMAL SCENARIO FOR THE DEVELOPMENT OF THE REGION'S ECONOMY

    Directory of Open Access Journals (Sweden)

    I. S. Borisova

    2018-01-01

    Full Text Available Purpose: the article deals with the development of an algorithm for choosing the optimal scenario for the development of the regional economy. Since the "Strategy for socio-economic development of the Lipetsk region for the period until 2020" does not contain scenarios for the development of the region, the algorithm for choosing the optimal scenario for the development of the regional economy is formalized. Scenarios for the development of the economy of the Lipetsk region are calculated according to the indicators of the Program of social and economic development: "Quality of life index", "Average monthly nominal wage", "Level of registered unemployment", "Growth rate of gross regional product", "Share of innovative products in the total volume of goods shipped, works performed and services rendered by industrial organizations", "Total volume of atmospheric pollution per unit GRP" and "Satisfaction of the population with the activity of executive bodies of state power of the region". Based on the calculated scenarios, the dynamics of these indicators under each scenario for the development of the economy of the Lipetsk region in 2016–2020 were derived. The discounted financial costs to economic participants of implementing each scenario for the development of the economy of the Lipetsk region are estimated. It is shown that the current situation in the economy of the Russian Federation calls for choosing a paradigm of innovative development of territories and requires all participants in economic relations at the regional level to concentrate their resources on the creation of new science-intensive products. An assessment of the effects of implementing the proposed scenarios for the development of the economy of the Lipetsk region was carried out. It is shown that the most acceptable is the "base" scenario, which assumes a consistent change in the main indicators. The specific economic

  7. Development of modelling algorithm of technological systems by statistical tests

    Science.gov (United States)

    Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.

    2018-03-01

    The paper tackles the problem of economic assessment of design efficiency for various technological systems at the stage of their operation. The modelling algorithm of a technological system, performed using statistical tests and taking account of the reliability index, allows estimating the level of machinery technical excellence and defining the efficiency of design reliability against its performance. Economic feasibility of its application shall be determined on the basis of the service quality of a technological system, with further forecasting of the volumes and range of spare parts supply.

  8. DEVELOPING INCIDENT DETECTION ALGORITHM BASED ON THE MAMDANI FUZZY INFERENCE ALGORITHM

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available Application of fuzzy logic in an incident detection system allows making decisions under uncertainty. The incident detection phase is a process of finding difficulties in traffic. A difficulty in traffic is the main sign that a road accident has occurred and requires a reaction for its elimination. This leads to the use of input data that must be relevant to the vehicles and the road. These data must be considered together and compared with the corresponding values for further analysis. The main parameters of the traffic flow that can characterize its current state are the flow rate and the volume flow. It is necessary to analyze the input data received from the sensors. After processing the input data using the previously entered fuzzy rules, an action will be taken that improves the traffic situation, or at least does not allow it to worsen.
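A minimal Mamdani-style inference sketch, with invented membership functions and a single rule relating flow rate and occupancy to incident likelihood (the actual rule base is not specified in the abstract):

```python
# Minimal Mamdani fuzzy inference sketch (illustrative only; membership
# functions, the rule, and all numbers below are invented, not from the paper).

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def incident_likelihood(flow_rate, occupancy):
    # Fuzzify the crisp sensor readings.
    flow_low = tri(flow_rate, -1, 0, 50)        # vehicles/min
    occ_high = tri(occupancy, 50, 100, 101)     # percent
    # Rule: IF flow is low AND occupancy is high THEN incident is likely.
    # Mamdani AND = min; the rule strength clips the output set.
    strength = min(flow_low, occ_high)
    # Defuzzify the clipped "likely" output set by the centroid method.
    xs = [i / 100 for i in range(101)]
    ws = [min(strength, tri(x, 0.5, 1.0, 1.5)) for x in xs]
    total = sum(ws)
    return sum(x * w for x, w in zip(xs, ws)) / total if total else 0.0

print(incident_likelihood(5, 95))   # near-stopped, packed road -> high
print(incident_likelihood(45, 10))  # free flow -> low
```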

  9. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
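The washout idea underlying these cueing algorithms can be illustrated with the simplest classical building block: a first-order high-pass filter that passes acceleration onsets to the platform and washes out sustained accelerations the platform cannot hold. The coefficients are illustrative; the paper's optimal and nonlinear algorithms are far more sophisticated:

```python
# First-order discrete high-pass "washout" sketch: onset cues pass through,
# sustained acceleration decays away so the platform can recenter.
def washout(accels, dt=0.01, tau=1.0):
    alpha = tau / (tau + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for a in accels:
        prev_out = alpha * (prev_out + a - prev_in)
        prev_in = a
        out.append(prev_out)
    return out

# A step in simulated aircraft acceleration: 0, then a sustained 2 m/s^2.
accels = [0.0] * 100 + [2.0] * 900
cues = washout(accels, dt=0.01, tau=1.0)
print(round(cues[100], 3), round(cues[-1], 3))  # onset passed, steady state washed out
```

The nonlinear algorithm described above effectively makes the washout time-varying, sustaining small cues longer and washing out large cues faster than a fixed filter like this one.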

  10. Design requirements and development of an airborne descent path definition algorithm for time navigation

    Science.gov (United States)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.

  11. Texas Medication Algorithm Project: development and feasibility testing of a treatment algorithm for patients with bipolar disorder.

    Science.gov (United States)

    Suppes, T; Swann, A C; Dennehy, E B; Habermacher, E D; Mason, M; Crismon, M L; Toprac, M G; Rush, A J; Shon, S P; Altshuler, K Z

    2001-06-01

    Use of treatment guidelines for treatment of major psychiatric illnesses has increased in recent years. The Texas Medication Algorithm Project (TMAP) was developed to study the feasibility and process of developing and implementing guidelines for bipolar disorder, major depressive disorder, and schizophrenia in the public mental health system of Texas. This article describes the consensus process used to develop the first set of TMAP algorithms for the Bipolar Disorder Module (Phase 1) and the trial testing the feasibility of their implementation in inpatient and outpatient psychiatric settings across Texas (Phase 2). The feasibility trial answered core questions regarding implementation of treatment guidelines for bipolar disorder. A total of 69 patients were treated with the original algorithms for bipolar disorder developed in Phase 1 of TMAP. Results support that physicians accepted the guidelines, followed recommendations to see patients at certain intervals, and utilized sequenced treatment steps differentially over the course of treatment. While improvements in clinical symptoms (24-item Brief Psychiatric Rating Scale) were observed over the course of enrollment in the trial, these conclusions are limited by the fact that physician volunteers were utilized for both treatment and ratings, and there was no control group. Results from Phases 1 and 2 indicate that it is possible to develop and implement a treatment guideline for patients with a history of mania in public mental health clinics in Texas. TMAP Phase 3, a recently completed larger and controlled trial assessing the clinical and economic impact of treatment guidelines and patient and family education in the public mental health system of Texas, improves upon this methodology.

  12. Development Modules for Specification of Requirements for a System of Verification of Parallel Algorithms

    Directory of Open Access Journals (Sweden)

    Vasiliy Yu. Meltsov

    2012-05-01

    Full Text Available This paper presents the results of the development of one of the modules of a system for the verification of parallel algorithms, which is used to verify the inference engine. The module is designed to build the specification requirements whose feasibility on the algorithm must be proved (tested).

  13. Spectrophotometric determination of uranium with arsenazo previous liquid-liquid extraction and colour development in organic medium

    International Nuclear Information System (INIS)

    Palomares Delgado, F.; Vera Palomino, J.; Petrement Eguiluz, J. C.

    1964-01-01

    The determination of uranium with arsenazo is hindered by a great number of cations which form stable complexes with the reagent and may give rise to serious interferences. By studying the optimum conditions of uranium extraction by means of tributylphosphate solutions dissolved in methyl isobutyl ketone, under conditions of previous masking of the interfering cations, an organic extract was obtained containing all the uranium together with small amounts of iron. The possible interference from the latter element is avoided by reduction with hydroxylammonium chloride followed by formation of the Fe(II)-orthophenanthroline complex in alcoholic medium. (Author) 17 refs

  14. Development of radio frequency interference detection algorithms for passive microwave remote sensing

    Science.gov (United States)

    Misra, Sidharth

    Radio Frequency Interference (RFI) signals are man-made sources that are increasingly plaguing passive microwave remote sensing measurements. RFI is of insidious nature, with some signals of low enough power to go undetected but large enough to impact science measurements and their results. With the launch of the European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite in November 2009 and the upcoming launches of the new NASA sea-surface salinity measuring Aquarius mission in June 2011 and soil-moisture measuring Soil Moisture Active Passive (SMAP) mission around 2015, active steps are being taken to detect and mitigate RFI at L-band. An RFI detection algorithm was designed for the Aquarius mission. The algorithm performance was analyzed using kurtosis-based RFI ground-truth. The algorithm has been developed with several adjustable location-dependent parameters to control the detection statistics (false-alarm rate and probability of detection). The kurtosis statistical detection algorithm has been compared with the Aquarius pulse detection method. The comparative study determines the feasibility of the kurtosis detector for the SMAP radiometer, as a primary RFI detection algorithm, in terms of detectability and data bandwidth. The kurtosis algorithm has superior detection capabilities for low duty-cycle radar-like pulses, which are more prevalent according to analysis of field campaign data. Most RFI algorithms developed have generally been optimized for performance with individual pulsed-sinusoidal RFI sources. A new RFI detection model is developed that takes into account multiple RFI sources within an antenna footprint. The performance of the kurtosis detection algorithm under such central-limit conditions is evaluated. The SMOS mission has a unique hardware system, and conventional RFI detection techniques cannot be applied. Instead, an RFI detection algorithm for SMOS is developed and applied in the angular domain. This algorithm compares
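The kurtosis detection principle can be sketched as follows: Gaussian thermal noise has a kurtosis of about 3, and low-duty-cycle pulsed RFI pushes the kurtosis away from 3. The threshold and pulse parameters below are illustrative, not the Aquarius flight values:

```python
# Kurtosis-based RFI flagging sketch: thermal noise is Gaussian (kurtosis ~3);
# pulsed interference inflates the fourth moment relative to the variance.
import random, statistics

def kurtosis(samples):
    m = statistics.fmean(samples)
    s2 = statistics.fmean([(x - m) ** 2 for x in samples])
    m4 = statistics.fmean([(x - m) ** 4 for x in samples])
    return m4 / (s2 * s2)

def rfi_flag(samples, tol=0.3):
    # Illustrative tolerance; real detectors set thresholds per false-alarm rate.
    return abs(kurtosis(samples) - 3.0) > tol

random.seed(0)
clean = [random.gauss(0, 1) for _ in range(20000)]
# Low-duty-cycle radar-like pulse: large spikes in 1% of the samples.
rfi = [x + (8.0 if i % 100 == 0 else 0.0) for i, x in enumerate(clean)]

print(rfi_flag(clean))  # consistent with Gaussian noise: not flagged
print(rfi_flag(rfi))    # pulsed RFI inflates the kurtosis: flagged
```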

  15. Developing robust arsenic awareness prediction models using machine learning algorithms.

    Science.gov (United States)

    Singh, Sushant K; Taylor, Robert W; Rahman, Mohammad Mahmudur; Pradhan, Biswajeet

    2018-04-01

    Arsenic awareness plays a vital role in ensuring the sustainability of arsenic mitigation technologies. Thus far, however, few studies have dealt with the sustainability of such technologies and its associated socioeconomic dimensions. As a result, arsenic awareness prediction has not yet been fully conceptualized. Accordingly, this study evaluated arsenic awareness among arsenic-affected communities in rural India, using a structured questionnaire to record socioeconomic, demographic, and other sociobehavioral factors with an eye to assessing their association with and influence on arsenic awareness. First a logistic regression model was applied and its results compared with those produced by six state-of-the-art machine-learning algorithms (Support Vector Machine [SVM], Kernel-SVM, Decision Tree [DT], k-Nearest Neighbor [k-NN], Naïve Bayes [NB], and Random Forests [RF]) as measured by their accuracy at predicting arsenic awareness. Most (63%) of the surveyed population was found to be arsenic-aware. Significant arsenic awareness predictors were divided into three types: (1) socioeconomic factors: caste, education level, and occupation; (2) water and sanitation behavior factors: number of family members involved in water collection, distance traveled and time spent for water collection, places for defecation, and materials used for handwashing after defecation; and (3) social capital and trust factors: presence of anganwadi and people's trust in other community members, NGOs, and private agencies. Moreover, individuals having larger social networks contributed positively to arsenic awareness in the communities. Results indicated that both the SVM and the RF algorithms outperformed the others at overall prediction of arsenic awareness, a nonlinear classification problem. Lower-caste, less educated, and unemployed members of the population were found to be the most vulnerable, requiring immediate arsenic mitigation. To this end, local social institutions and NGOs could play a
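The model-comparison workflow can be sketched with synthetic data; a stdlib-only k-NN and a majority-class baseline stand in for the paper's SVM/RF/logistic-regression comparisons:

```python
# Sketch of the comparison workflow: fit competing classifiers on a training
# split and compare held-out accuracy. Synthetic two-cluster data stands in
# for the survey features; labels mimic "aware" (1) vs "unaware" (0).
import random

def knn_predict(train, query, k=5):
    # train: list of (features, label); Euclidean-distance k-NN majority vote.
    neighbours = sorted(
        train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], query))
    )[:k]
    votes = [label for _, label in neighbours]
    return max(set(votes), key=votes.count)

random.seed(1)
data = [([random.gauss(y, 0.8), random.gauss(y, 0.8)], y)
        for y in (0, 1) for _ in range(100)]
random.shuffle(data)
train, test = data[:150], data[150:]

def accuracy(predict):
    return sum(predict(x) == y for x, y in test) / len(test)

acc_knn = accuracy(lambda x: knn_predict(train, x))
acc_baseline = accuracy(lambda x: 1)   # always predict the majority class
print(acc_knn, acc_baseline)
```

The study's finding that SVM and RF win reflects exactly this kind of held-out comparison, run on real survey features instead of toy clusters.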

  16. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington, D.C. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe, and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1-minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study, including an MCS, MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington, D.C. were used to test 6 lightning jump algorithm configurations (Gatlin, Gatlin 45, 2σ, 3σ, Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2σ lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic lightning jump algorithm named the Threshold 8 lightning jump algorithm also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally-applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the
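A 2σ-style jump test can be sketched as follows; the window length and threshold logic are illustrative and do not reproduce the exact configuration tested in the study:

```python
# Sketch of a 2-sigma lightning jump test: flag a jump when the latest change
# in total flash rate exceeds the mean of the recent rate changes by more
# than two standard deviations.
import statistics

def lightning_jump(flash_rates, nsigma=2.0):
    """flash_rates: per-minute total flash counts, oldest first."""
    changes = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
    history, latest = changes[:-1], changes[-1]
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return latest > mu + nsigma * sigma

steady = [10, 12, 11, 13, 12, 11, 12]
jump = [10, 12, 11, 13, 12, 11, 35]   # rapid increase in the last minute
print(lightning_jump(steady), lightning_jump(jump))
```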

  17. A prediction algorithm for first onset of major depression in the general population: development and validation.

    Science.gov (United States)

    Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden

    2014-05-01

    Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with approximately 3 years of follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). The outcome, major depression occurring since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions, was assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule-DSM-IV. A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ(2)=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
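The two validation metrics can be sketched with toy predictions (not NESARC data): the C statistic as the fraction of concordant case/non-case pairs, and a Hosmer-Lemeshow-style grouped observed-versus-expected chi-square:

```python
# Sketch of the validation metrics: C statistic (probability that a case gets
# a higher predicted risk than a non-case) and a Hosmer-Lemeshow-style
# calibration chi-square over risk-ordered groups. Toy scores, not study data.

def c_statistic(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = [(p > n) + 0.5 * (p == n) for p in pos for n in neg]
    return sum(pairs) / len(pairs)

def hosmer_lemeshow(scores, labels, groups=5):
    """Chi-square over risk-ordered groups; small values mean good calibration."""
    ranked = sorted(zip(scores, labels))
    size = len(ranked) // groups
    chi2 = 0.0
    for g in range(groups):
        chunk = ranked[g * size:(g + 1) * size]
        observed = sum(y for _, y in chunk)
        expected = sum(s for s, _ in chunk)
        if 0 < expected < len(chunk):
            chi2 += (observed - expected) ** 2 / (expected * (1 - expected / len(chunk)))
    return chi2

scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
labels = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(c_statistic(scores, labels))  # 23 of 24 case/non-case pairs concordant
print(hosmer_lemeshow(scores, labels))
```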

  18. A Prototype Hail Detection Algorithm and Hail Climatology Developed with the Advanced Microwave Sounding Unit (AMSU)

    Science.gov (United States)

    Ferraro, Ralph; Beauchamp, James; Cecil, Dan; Heymsfeld, Gerald

    2015-01-01

    In previous studies published in the open literature, a strong relationship between the occurrence of hail and the microwave brightness temperatures (primarily at 37 and 85 GHz) was documented. These studies were performed with the Nimbus-7 SMMR, the TRMM Microwave Imager (TMI) and, most recently, the Aqua AMSR-E sensor. This led to climatologies of hail frequency from TMI and AMSR-E; however, limitations include the geographical domain of the TMI sensor (35 S to 35 N) and the overpass time of the Aqua satellite (1:30 am/pm local time), both of which reduce accurate mapping of hail events over the global domain and the full diurnal cycle. Nonetheless, these studies presented exciting, new applications for passive microwave sensors. Since 1998, NOAA and EUMETSAT have been operating the AMSU-A/B and the MHS on several operational satellites: NOAA-15 through NOAA-19; MetOp-A and -B. With multiple satellites in operation since 2000, the AMSU/MHS sensors provide near-global coverage every 4 hours, thus offering much larger spatial and temporal sampling than TRMM or AMSR-E. With similar observation frequencies near 30 and 85 GHz, and additionally three at the 183 GHz water vapor band, the potential exists to detect strong convection associated with severe storms on a more comprehensive time and space scale. In this study, we develop a prototype AMSU-based hail detection algorithm through the use of collocated satellite and surface hail reports over the continental U.S. for a 12-year period (2000-2011). Compared with the surface observations, the algorithm detects approximately 40 percent of hail occurrences. The simple threshold algorithm is then used to generate a hail climatology that is based on all available AMSU observations during 2000-11 and is stratified in several ways, including total hail occurrence by month (March through September), total annual, and over the diurnal cycle. Independent comparisons are made compared to similar data sets derived from other
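A threshold screen of the kind described can be sketched as follows; the channel thresholds are invented placeholders, not the values derived from the 12-year match-up dataset:

```python
# Sketch of a simple brightness-temperature threshold hail screen: strong
# scattering by large ice depresses the high-frequency channels, so very cold
# brightness temperatures in both channels suggest possible hail.

def hail_flag(tb_89ghz_k, tb_183ghz_k, t89_max=200.0, t183_max=220.0):
    """Flag a footprint as possible hail when both channels are depressed."""
    return tb_89ghz_k < t89_max and tb_183ghz_k < t183_max

footprints = [
    {"tb89": 260.0, "tb183": 255.0},   # clear / weak precipitation
    {"tb89": 185.0, "tb183": 205.0},   # deep convection, possible hail
    {"tb89": 215.0, "tb183": 230.0},   # moderate convection
]
flags = [hail_flag(f["tb89"], f["tb183"]) for f in footprints]
print(flags)  # [False, True, False]
```

Aggregating such flags over all collocated overpasses, stratified by month and local time, is what yields the climatology the abstract describes.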

  19. Development of Automatic Cluster Algorithm for Microcalcification in Digital Mammography

    International Nuclear Information System (INIS)

    Choi, Seok Yoon; Kim, Chang Soo

    2009-01-01

    Digital Mammography is an efficient imaging technique for the detection and diagnosis of breast pathological disorders. Six mammographic criteria, such as the number of clusters and the number, size, extent and morphologic shape of microcalcifications, and the presence of a mass, were reviewed, and correlation with the pathologic diagnosis was evaluated. It is very important to find breast cancer early, when treatment can reduce deaths from breast cancer and the extent of breast incision. In screening for breast cancer, mammography is typically used to view the internal organization. Clustering microcalcifications on mammography represent an important feature of breast mass, especially that of intraductal carcinoma. Because microcalcification has a high correlation with breast cancer, a cluster of microcalcifications can be very helpful for the clinical doctor to predict breast cancer. For this study, three steps of quantitative evaluation are proposed: DoG filtering, adaptive thresholding, and expectation maximization. Through the proposed algorithm, the number of calcifications and the length of each cluster in the distribution of microcalcifications could be measured; these can be used as indicators for the primary diagnosis to automatically detect breast cancer.
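The first step, difference-of-Gaussians (DoG) filtering, can be sketched in one dimension: subtracting a coarse Gaussian blur from a fine one suppresses the smooth background and enhances small bright spots such as microcalcifications (parameters are illustrative):

```python
# 1-D difference-of-Gaussians (DoG) band-pass sketch: small bright spots are
# enhanced while the smooth background (here a linear ramp) is suppressed.
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp at edges
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog(signal, sigma_fine=1.0, sigma_coarse=3.0, radius=9):
    fine = convolve(signal, gaussian_kernel(sigma_fine, radius))
    coarse = convolve(signal, gaussian_kernel(sigma_coarse, radius))
    return [a - b for a, b in zip(fine, coarse)]

# Smooth background ramp with one sharp bright spot at index 25.
signal = [0.01 * i for i in range(50)]
signal[25] += 5.0
response = dog(signal)
print(response.index(max(response)))  # the spot dominates the DoG response
```

In the 2-D mammogram case the same filter, followed by adaptive thresholding and EM clustering, isolates candidate calcifications and groups them into clusters.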

  20. The development of computational algorithms for manipulator inverse kinematics

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1989-10-01

    Solving the inverse kinematics of multi-joint robot manipulators has been considered one of the most cumbersome tasks due to nonlinearities, including trigonometric functions. The most traditional approach is to use the Jacobian matrix under linearization assumptions. This iterative technique, however, suffers from numerical problems that significantly influence the solution characteristics, such as initial-guess dependence and singularities. Taking these facts into consideration, new approaches have been proposed from different standpoints, based on polynomial transformation of the kinematic model, minimization techniques in mathematical programming, vector-geometrical concepts, and the separation of joint variables associated with the optimization problem. In computer simulations, each approach was identified as a useful algorithm leading to theoretically accurate solutions to complicated inverse problems. In this way, the short-term goal of our studies on the manipulator inverse problem in the R and D project of remote handling technology was accomplished with success, and consequently the present report sums up the results of basic studies on this matter. (author)
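The traditional Jacobian-based iteration the report discusses can be sketched for a planar two-link arm; the damping factor is illustrative, and the determinant check marks the singular configurations (arm fully stretched or folded) where the method breaks down:

```python
# Damped Newton iteration on the linearized forward kinematics of a planar
# 2-link arm: the classic Jacobian approach, with its singularity weakness.
import math

L1, L2 = 1.0, 1.0   # link lengths

def forward(t1, t2):
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def solve_ik(target, t1=0.3, t2=0.5, iters=200, step=0.5):
    for _ in range(iters):
        x, y = forward(t1, t2)
        ex, ey = target[0] - x, target[1] - y
        # Jacobian of the forward map with respect to (t1, t2).
        j11 = -L1 * math.sin(t1) - L2 * math.sin(t1 + t2)
        j12 = -L2 * math.sin(t1 + t2)
        j21 = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
        j22 = L2 * math.cos(t1 + t2)
        det = j11 * j22 - j12 * j21
        if abs(det) < 1e-9:   # singular configuration: iteration stalls here
            break
        t1 += step * (j22 * ex - j12 * ey) / det
        t2 += step * (-j21 * ex + j11 * ey) / det
    return t1, t2

t1, t2 = solve_ik((1.2, 0.8))
x, y = forward(t1, t2)
print(round(x, 6), round(y, 6))  # converges to the requested pose
```

The damping (`step=0.5`) tames the overshoot of a full Newton step, but the initial-guess dependence and the singular `det` remain, which is exactly what motivated the report's alternative formulations.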

  1. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  2. Development of a new genetic algorithm to solve the feedstock scheduling problem in an anaerobic digester

    Science.gov (United States)

    Cram, Ana Catalina

    As worldwide environmental awareness grow, alternative sources of energy have become important to mitigate climate change. Biogas in particular reduces greenhouse gas emissions that contribute to global warming and has the potential of providing 25% of the annual demand for natural gas in the U.S. In 2011, 55,000 metric tons of methane emissions were reduced and 301 metric tons of carbon dioxide emissions were avoided through the use of biogas alone. Biogas is produced by anaerobic digestion through the fermentation of organic material. It is mainly composed of methane with a rage of 50 to 80% in its concentration. Carbon dioxide covers 20 to 50% and small amounts of hydrogen, carbon monoxide and nitrogen. The biogas production systems are anaerobic digestion facilities and the optimal operation of an anaerobic digester requires the scheduling of all batches from multiple feedstocks during a specific time horizon. The availability times, biomass quantities, biogas production rates and storage decay rates must all be taken into account for maximal biogas production to be achieved during the planning horizon. Little work has been done to optimize the scheduling of different types of feedstock in anaerobic digestion facilities to maximize the total biogas produced by these systems. Therefore, in the present thesis, a new genetic algorithm is developed with the main objective of obtaining the optimal sequence in which different feedstocks will be processed and the optimal time to allocate to each feedstock in the digester with the main objective of maximizing the production of biogas considering different types of feedstocks, arrival times and decay rates. Moreover, all batches need to be processed in the digester in a specified time with the restriction that only one batch can be processed at a time. The developed algorithm is applied to 3 different examples and a comparison with results obtained in previous studies is presented.

  3. The reliability of the Associate Platinum digital foot scanner in measuring previously developed footprint characteristics: a technical note.

    Science.gov (United States)

    Papuga, M Owen; Burke, Jeanmarie R

    2011-02-01

    An ink pad and paper, pressure-sensitive platforms, and photography have previously been used to collect footprint data used in clinical assessment. Digital scanners have been widely used more recently to collect such data. The purpose of this study was to evaluate the intra- and interrater reliability of a flatbed digital image scanning technology to capture footprint data. This study used a repeated-measures design on 32 (16 male 16 female) healthy subjects. The following measured indices of footprint were recorded from 2-dimensional images of the plantar surface of the foot recorded with an Associate Platinum (Foot Levelers Inc, Roanoke, VA) digital foot scanner: Staheli index, Chippaux-Smirak index, arch angle, and arch index. Intraclass correlation coefficient (ICC) values were calculated to evaluate intrarater, interday, and interclinician reliability. The ICC values for intrarater reliability were greater than or equal to .817, indicating an excellent level of reproducibility in assessing the collected images. Analyses of variance revealed that there were no significant differences between raters for each index (P > .05). The ICC values also indicated excellent reliability (.881-.971) between days and clinicians in all but one of the indices of footprint, arch angle (.689), with good reliability between clinicians. The full-factorial analysis of variance model did not reveal any interaction effects (P > .05), which indicated that indices of footprint were not changing across days and clinicians. Scanning technology used in this study demonstrated good intra- and interrater reliability measurements of footprint indices, as demonstrated by high ICC values. Copyright © 2011 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.

  4. Development and validation of an algorithm for laser application in wound treatment

    Directory of Open Access Journals (Sweden)

    Diequison Rite da Cunha

    2017-12-01

    Full Text Available ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review was performed in the Health Sciences databases of the past ten years. The algorithm evaluation was performed by 24 participants, nurses, physiotherapists, and physicians. For data analysis, the Cronbach’s alpha coefficient and the chi-square test for independence was used. The level of significance of the statistical test was established at 5% (p<0.05. Results: The professionals’ responses regarding the facility to read the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants’ opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire presented reliability using the Cronbach’s alpha coefficient test (α = 0.962. Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds.

  5. Hemoglobin-Based Oxygen Carrier (HBOC) Development in Trauma: Previous Regulatory Challenges, Lessons Learned, and a Path Forward.

    Science.gov (United States)

    Keipert, Peter E

    2017-01-01

    Historically, hemoglobin-based oxygen carriers (HBOCs) were being developed as "blood substitutes," despite their transient circulatory half-life (~ 24 h) vs. transfused red blood cells (RBCs). More recently, HBOC commercial development focused on "oxygen therapeutic" indications to provide a temporary oxygenation bridge until medical or surgical interventions (including RBC transfusion, if required) can be initiated. This included the early trauma trials with HemAssist ® (BAXTER), Hemopure ® (BIOPURE) and PolyHeme ® (NORTHFIELD) for resuscitating hypotensive shock. These trials all failed due to safety concerns (e.g., cardiac events, mortality) and certain protocol design limitations. In 2008 the Food and Drug Administration (FDA) put all HBOC trials in the US on clinical hold due to the unfavorable benefit:risk profile demonstrated by various HBOCs in different clinical studies in a meta-analysis published by Natanson et al. (2008). During standard resuscitation in trauma, organ dysfunction and failure can occur due to ischemia in critical tissues, which can be detected by the degree of lactic acidosis. SANGART'S Phase 2 trauma program with MP4OX therefore added lactate >5 mmol/L as an inclusion criterion to enroll patients who had lost sufficient blood to cause a tissue oxygen debt. This was key to the successful conduct of their Phase 2 program (ex-US, from 2009 to 2012) to evaluate MP4OX as an adjunct to standard fluid resuscitation and transfusion of RBCs. In 2013, SANGART shared their Phase 2b results with the FDA, and succeeded in getting the FDA to agree that a planned Phase 2c higher dose comparison study of MP4OX in trauma could include clinical sites in the US. Unfortunately, SANGART failed to secure new funding and was forced to terminate development and operations in Dec 2013, even though a regulatory path forward with FDA approval to proceed in trauma had been achieved.

  6. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from MODIS Aqua sensor and in situ sensing data from the US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm could be utilized to map SST in both deep offshore and particularly shallow nearshore waters at the high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed reflectance values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explaining 82-90% variations in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters where important coastal resources are located and existing algorithms are either not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful to coastal resource management.

  7. Algorithm Development for a Real-Time Military Noise Monitor

    National Research Council Canada - National Science Library

    Vipperman, Jeffrey S; Bucci, Brian

    2006-01-01

    The long-range goal of this 1-year SERDP Exploratory Development (SEED) project was to create an improved real-time, high-energy military impulse noise monitoring system that can detect events with peak levels (Lpk...

  8. Development of transmission dose estimation algorithm for in vivo dosimetry in high energy radiation treatment

    International Nuclear Information System (INIS)

    Yun, Hyong Geun; Shin, Kyo Chul; Hun, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2004-01-01

    In vivo dosimetry is very important for quality assurance purpose in high energy radiation treatment. Measurement of transmission dose is a new method of in vivo dosimetry which is noninvasive and easy for daily performance. This study is to develop a tumor dose estimation algorithm using measured transmission dose for open radiation field. For basic beam data, transmission dose was measured with various field size (FS) of square radiation field, phantom thickness (Tp), and phantom chamber distance (PCD) with a acrylic phantom for 6 MV and 10 MV X-ray. Source to chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer type ion chamber. By using regression analysis of measured basic beam data, a transmission dose estimation algorithm was developed. Accuracy of the algorithm was tested with flat solid phantom with various thickness in various settings of rectangular fields and various PCD. In our developed algorithm, transmission dose was equated to quadratic function of log(A/P) (where A/P is area-perimeter ratio) and the coefficients of the quadratic functions were equated to tertiary functions of PCD. Our developed algorithm could estimate the radiation dose with the errors within ±0.5% for open square field, and with the errors within ±1.0% for open elongated radiation field. Developed algorithm could accurately estimate the transmission dose in open radiation fields with various treatment settings of high energy radiation treatment. (author)

  9. Development of the algorithm for obtaining 3-dimensional information using the structured light

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Dong Uk; Lee, Jae Hyub; Kim, Chung Soo [Korea University of Technology and Education, Cheonan (Korea)

    1998-03-01

    The utilization of robot in atomic power plants or nuclear-related facilities has grown rapidly. In order to perform preassigned jobs using robot in nuclear-related facilities, advanced technology extracting 3D information of objects is essential. We have studied an algorithm to extract 3D information of objects using laser slit light and camera, and developed the following hardware system and algorithms. (1) We have designed and fabricated the hardware system which consists of laser light and two cameras. The hardware system can be easily installed on the robot. (2) In order to reduce the occlusion problem when measuring 3D information using laser slit light and camera, we have studied system with laser slit light and two cameras and developed algorithm to synthesize 3D information obtained from two cameras. (2) For easy use of obtained 3D information, we expressed it as digital distance image format and developed algorithm to interpolate 3D information of points which is not obtained. (4) In order to simplify calibration of the camera's parameter, we have also designed an fabricated LED plate, and developed an algorithm detecting the center position of LED automatically. We can certify the efficiency of developed algorithm and hardware system through experimental results. 16 refs., 26 figs., 1 tabs. (Author)

  10. On the development of protein pKa calculation algorithms

    Science.gov (United States)

    Carstensen, Tommy; Farrell, Damien; Huang, Yong; Baker, Nathan A.; Nielsen, Jens Erik

    2011-01-01

    Protein pKa calculation methods are developed partly to provide fast non-experimental estimates of the ionization constants of protein side chains. However, the most significant reason for developing such methods is that a good pKa calculation method is presumed to provide an accurate physical model of protein electrostatics, which can be applied in methods for drug design, protein design and other structure-based energy calculation methods. We explore the validity of this presumption by simulating the development of a pKa calculation method using artificial experimental data derived from a human-defined physical reality. We examine the ability of an RMSD-guided development protocol to retrieve the correct (artificial) physical reality and find that a rugged optimization landscape and a huge parameter space prevent the identification of the correct physical reality. We examine the importance of the training set in developing pKa calculation methods and investigate the effect of experimental noise on our ability to identify the correct physical reality, and find that both effects have a significant and detrimental impact on the physical reality of the optimal model identified. Our findings are of relevance to all structure-based methods for protein energy calculations and simulation, and have large implications for all types of current pKa calculation methods. Our analysis furthermore suggests that careful and extensive validation on many types of experimental data can go some way in making current models more realistic. PMID:21744393

  11. RStorm: Developing and Testing Streaming Algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform1 to deal with data streams.

  12. Developed adaptive neuro-fuzzy algorithm to control air conditioning ...

    African Journals Online (AJOL)

    The paper developed artificial intelligence technique adaptive neuro-fuzzy controller for air conditioning systems at different pressures. The first order Sugeno fuzzy inference system was implemented and utilized for modeling and controller design. In addition, the estimation of the heat transfer rate and water mass flow rate ...

  13. RStorm : Developing and testing streaming algorithms in R

    NARCIS (Netherlands)

    Kaptein, M.C.

    2014-01-01

    Streaming data, consisting of indefinitely evolving sequences, are becoming ubiquitous in many branches of science and in various applications. Computer scientists have developed streaming applications such as Storm and the S4 distributed stream computing platform1 to deal with data streams.

  14. Developed adaptive neuro-fuzzy algorithm to control air conditioning ...

    African Journals Online (AJOL)

    user

    The paper developed artificial intelligence technique adaptive neuro-fuzzy ... system is highly appreciated and essential in most of our daily life. ... It can construct an input-output mapping based on human knowledge and specific input-output data ... fuzzy controllers to produce desirable internal temperature and air quality, ...

  15. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single or sequences of sonar images. The tool is based on MATLAB providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested...... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed AUV path, etc. In the paper the main components of the SSIE is described and examples of different...... processing steps are given...

  16. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States)

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Input to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real world problems from solid elasticity, plate blending, and shells.

  17. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  18. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    Science.gov (United States)

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

    Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. To investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Association funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in children aged children in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these for presence or absence and terminology with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  19. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for the various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussel-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systemic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  20. Dynamic Control of Airport Departures: Algorithm Development and Field Evaluation

    OpenAIRE

    Simaiakis, Ioannis; Balakrishnan, Hamsa

    2012-01-01

    Surface congestion leads to significant increases in taxi times and fuel burn at major airports. In this paper, we formulate the airport surface congestion management problem as a dynamic control problem. We address two main challenges: the random delay between actuation (at the gate) and the server being controlled (the runway), and the need to develop control strategies that can be implemented in practice by human air traffic controllers. The second requirement necessitates a strategy that ...

  1. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development

    OpenAIRE

    Ziatdinov, Rushan; Musa, Sajid

    2013-01-01

    In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are actually simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system allows forming the basis for the study of computer science in secondary school.

  2. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  3. TIGER: Development of Thermal Gradient Compensation Algorithms and Techniques

    Science.gov (United States)

    Hereford, James; Parker, Peter A.; Rhew, Ray D.

    2004-01-01

    In a wind tunnel facility, the direct measurement of forces and moments induced on the model are performed by a force measurement balance. The measurement balance is a precision-machined device that has strain gages at strategic locations to measure the strain (i.e., deformations) due to applied forces and moments. The strain gages convert the strain (and hence the applied force) to an electrical voltage that is measured by external instruments. To address the problem of thermal gradients on the force measurement balance NASA-LaRC has initiated a research program called TIGER - Thermally-Induced Gradients Effects Research. The ultimate goals of the TIGER program are to: (a) understand the physics of the thermally-induced strain and its subsequent impact on load measurements and (b) develop a robust thermal gradient compensation technique. This paper will discuss the impact of thermal gradients on force measurement balances, specific aspects of the TIGER program (the design of a special-purpose balance, data acquisition and data analysis challenges), and give an overall summary.

  4. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    Science.gov (United States)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluation of algorithms in three additional regions of the country, as well as, markedly increasing the number of thunderstorms in order to evaluate the each algorithm s performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2 lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. Average lead time for the 2 algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. Looking at tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/-14.88 minutes. Moreover, removing the 2 lightning jumps that occur after a jump has been detected, and before severe weather is detected at the ground, the 2 lightning jump algorithm s false alarm rate drops from 35% to 21%. Cold season, low topped, and tropical environments cause problems for the 2 lightning jump algorithm, due to their relative dearth in lightning as compared to a supercellular or summertime airmass thunderstorm environment.

  5. Battery algorithm verification and development using hardware-in-the-loop testing

    Energy Technology Data Exchange (ETDEWEB)

    He, Yongsheng [General Motors Global Research and Development, 30500 Mound Road, MC 480-106-252, Warren, MI 48090 (United States); Liu, Wei; Koch, Brain J. [General Motors Global Vehicle Engineering, Warren, MI 48090 (United States)

    2010-05-01

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO{sub 4}) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs. (author)

  6. Battery algorithm verification and development using hardware-in-the-loop testing

    Science.gov (United States)

    He, Yongsheng; Liu, Wei; Koch, Brain J.

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO 4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.

  7. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    Science.gov (United States)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  8. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm, we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm, which includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
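    The abstract does not spell out the algorithm's threshold tests, so the following is a toy, non-thermal per-pixel cloud test in Python with invented bands and thresholds, meant only to illustrate the general shape of a reflectance-based mask and a scene cloud fraction computation:

```python
def is_cloudy(refl_blue, refl_nir, refl_swir):
    """Toy reflectance-only cloud test with invented thresholds: clouds are
    bright in the visible/NIR, while snow is rejected via a high
    normalized-difference snow-like index."""
    bright = refl_blue > 0.3 and refl_nir > 0.25
    ndsi = (refl_blue - refl_swir) / (refl_blue + refl_swir + 1e-9)
    return bright and ndsi <= 0.6

def scene_cloud_fraction(pixels):
    """Share of pixels flagged cloudy."""
    flags = [is_cloudy(*p) for p in pixels]
    return sum(flags) / len(flags)

pixels = [(0.50, 0.50, 0.40),   # bright, low index  -> cloud
          (0.05, 0.10, 0.05),   # dark               -> clear
          (0.80, 0.60, 0.10)]   # bright, high index -> snow-like, not cloud
fraction = scene_cloud_fraction(pixels)
```

    Real masks such as ACCA chain many such band tests; the point here is only that a non-thermal mask must separate bright clouds from other bright surfaces using reflective bands alone.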

  9. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    International Nuclear Information System (INIS)

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs
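    As background for the comparison above, the fixed-gain loop that a self-tuner would adapt can be sketched as a textbook discrete PID driving a first-order plant; the plant, gains, and time step below are illustrative choices, not the report's simulations:

```python
class DiscretePID:
    """Textbook discrete PID loop; a self-tuning controller would adjust
    kp/ki/kd online from identified process behaviour."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        deriv = (0.0 if self.prev_error is None
                 else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive a first-order plant (dx/dt = (u - x) / tau, tau = 1 s) to setpoint 1.0
pid = DiscretePID(kp=2.0, ki=1.0, kd=0.0, dt=0.01)
x = 0.0
for _ in range(2000):          # 20 s of simulated time
    u = pid.step(1.0, x)
    x += 0.01 * (u - x)        # forward-Euler plant step
```

    The three STPI schemes compared in the report differ in how they pick these gains (relay cycling, response-pattern rules, or an identified process model), not in the underlying PI loop.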

  10. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
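    The thesis's empirical scalability definition is not reproduced in the abstract; the conventional metrics such a definition builds on can be sketched as follows (the numbers in the example are invented):

```python
def speedup(t_serial, t_parallel):
    """Observed speedup of a parallel run over the serial baseline."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup per processor (1.0 is ideal)."""
    return speedup(t_serial, t_parallel) / n_procs

def amdahl_bound(n_procs, serial_fraction):
    """Amdahl's-law upper bound on speedup given an unparallelizable
    fraction of the work."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Example: a 100 s serial run finishing in 25 s on 8 processors
s = speedup(100.0, 25.0)        # 4.0
e = efficiency(100.0, 25.0, 8)  # 0.5
```

    A scalability study typically asks how `efficiency` behaves as both the processor count and the problem size grow, which is the question the thesis formalizes empirically.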

  11. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    Science.gov (United States)

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
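    The winning detectors themselves are not described in the abstract; as an illustration of the kind of feature such seizure-detection algorithms often start from, this sketch uses the classic line-length feature on windowed iEEG samples (the window size, threshold, and synthetic signal are arbitrary):

```python
def line_length(window):
    """Line length: summed absolute sample-to-sample differences, a classic
    amplitude- and frequency-sensitive seizure feature."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))

def detect_windows(signal, win=10, threshold=5.0):
    """Flag each non-overlapping window whose line length exceeds threshold."""
    return [line_length(signal[i:i + win]) > threshold
            for i in range(0, len(signal) - win + 1, win)]

baseline = [0.0] * 20                        # quiet recording
seizure = [(-1.0) ** k for k in range(20)]   # high-amplitude oscillation
flags = detect_windows(baseline + seizure)   # [False, False, True, True]
```

    Competition-grade detectors combine many such features with trained classifiers, but a thresholded feature like this is the usual baseline they are measured against.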

  12. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  13. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Until recently, the algorithms for the numerical-analytical boundary elements method had been implemented as programs written in the MATLAB environment language. Each program had a local character, i.e. it was used to solve a particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out “manually”, and was therefore time-consuming. The research was aimed at a reasoned choice of programming language for the development of a new CAD system that implements the algorithm of the numerical-analytical boundary elements method and provides visualization tools for the initial objects and calculation results. The research conducted shows that among the wide variety of programming languages the most efficient one for developing a CAD system employing the numerical-analytical boundary elements method algorithm is the Java language. This language provides tools not only for the development of the calculating part of the CAD system, but also for building the graphic interface for constructing geometric models and interpreting calculated results.

  14. A formal analysis of a dynamic distributed spanning tree algorithm

    NARCIS (Netherlands)

    Mooij, A.J.; Wesselink, J.W.

    2003-01-01

    Abstract. We analyze the spanning tree algorithm in the IEEE 1394.1 draft standard, whose correctness has not previously been proved. This algorithm is a fully-dynamic distributed graph algorithm, which, in general, is hard to develop. The approach we use is to formally develop an algorithm that is

  15. Unveiling the development of intracranial injury using dynamic brain EIT: an evaluation of current reconstruction algorithms.

    Science.gov (United States)

    Li, Haoting; Chen, Rongqing; Xu, Canhua; Liu, Benyuan; Tang, Mengxing; Yang, Lin; Dong, Xiuzhen; Fu, Feng

    2017-08-21

    Dynamic brain electrical impedance tomography (EIT) is a promising technique for continuously monitoring the development of cerebral injury. While many reconstruction algorithms are available for brain EIT, there is still a lack of studies comparing their performance in the context of dynamic brain monitoring. To address this problem, we develop a framework for evaluating current algorithms by their ability to correctly identify small intracranial conductivity changes. First, a simulated 3D head phantom with a realistic layered structure and impedance distribution is developed. Next, several reconstruction algorithms, such as back projection (BP), damped least-squares (DLS), Bayesian, split Bregman (SB) and GREIT, are introduced. We investigate their temporal response, noise performance, and location and shape error with respect to different noise levels on the simulation phantom. The results show that the SB algorithm demonstrates superior performance in reducing image error. To further improve the location accuracy, we optimize SB by incorporating conductivity distribution priors based on brain structure, in which differences between the conductivities of different brain tissues and the inhomogeneous conductivity distribution of the skull are considered. We compare this novel algorithm (called SB-IBCD) with SB and DLS using anatomically correct head-shaped phantoms with spatially varying skull conductivity. Main results and significance: SB-IBCD was the most effective in unveiling small intracranial conductivity changes, reducing the image error by an average of 30.0% compared to DLS.
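    Of the algorithms compared above, damped least-squares is the simplest to sketch. The toy Python implementation below solves one linearized EIT update, (JᵀJ + λ²I)Δσ = JᵀΔv, on a tiny dense system; the dimensions, Jacobian, and damping value are illustrative, not the paper's setup:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def damped_least_squares(J, dv, lam):
    """One linearized update: minimize ||J*ds - dv||^2 + lam^2 * ||ds||^2,
    i.e. solve (J^T J + lam^2 I) ds = J^T dv for the conductivity change."""
    m, n = len(J), len(J[0])
    JtJ = [[sum(J[k][i] * J[k][j] for k in range(m))
            + (lam * lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Jtv = [sum(J[k][i] * dv[k] for k in range(m)) for i in range(n)]
    return solve(JtJ, Jtv)
```

    The damping term λ² trades fidelity to the voltage data against solution size, which is why DLS is robust to noise but, as the comparison above finds, less accurate than SB-type regularization for small localized changes.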

  16. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for a dust model simulation experiment with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
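    The paper's ILP, K&K, and ASRG allocators are not reproduced here; as a baseline illustration of the load-balancing problem they address, a greedy longest-processing-time allocation (subdomain costs and node count invented, and communication cost ignored) looks like:

```python
def allocate(subdomain_costs, n_nodes):
    """Greedy longest-processing-time heuristic: visit subdomains heaviest
    first and place each on the currently least-loaded computing node."""
    loads = [0.0] * n_nodes
    assignment = {}
    for idx in sorted(range(len(subdomain_costs)),
                      key=lambda i: -subdomain_costs[i]):
        node = min(range(n_nodes), key=lambda n: loads[n])
        assignment[idx] = node
        loads[node] += subdomain_costs[idx]
    return assignment, loads

# Six subdomains with uneven computing costs, two nodes
costs = [4.0, 3.0, 3.0, 2.0, 2.0, 2.0]
assignment, loads = allocate(costs, 2)   # loads balance to [8.0, 8.0]
```

    The paper's algorithms go further by also penalizing communication between spatially adjacent subdomains placed on different nodes, which a pure load balancer like this ignores.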

  17. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    Science.gov (United States)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling and analysis. The algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications such as software interfacing capabilities, probability distributions, grey level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  18. DOOCS environment for FPGA-based cavity control system and control algorithms development

    International Nuclear Information System (INIS)

    Pucyk, P.; Koprek, W.; Kaleta, P.; Szewinski, J.; Pozniak, K.T.; Czarski, T.; Romaniuk, R.S.

    2005-01-01

    The paper describes the concept and realization of the DOOCS control software for the FPGA-based TESLA cavity controller and simulator (SIMCON). It is based on universal software components, created for laboratory purposes and used in a MATLAB based control environment. These modules have been recently adapted to the DOOCS environment to ensure a unified software-to-hardware communication model. The presented solution can also be used as a general platform for control algorithm development. The proposed interfaces between MATLAB and DOOCS modules allow the developed algorithm to be checked in the operation environment before implementation in the FPGA. As examples, two systems are presented. (orig.)

  19. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    Science.gov (United States)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is a part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5 - 1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash and biomass burning plumes. KNMI is assigned with the development of the ALH algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission that is to be launched in June or July 2016 (tentative date). New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm compared to 0.4 nm for TROPOMI) and hourly observation from the geostationary orbit. The retrieval algorithm is based on the absorption characteristics of the oxygen-A band (759-770 nm). It uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer with fixed width to represent the aerosol vertical distribution. The state vector includes amongst other elements the height of this layer and its aerosol optical
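    The operational retrieval fits a multi-element state vector with full optimal estimation; as a much-reduced illustration of the underlying idea, this sketch fits a single "layer height" parameter to a toy exponential band model by Gauss-Newton iteration (the model, channel optical depths, and values are invented, and prior covariances are omitted):

```python
import math

def fit_scalar(forward, y_obs, x0, iters=25, eps=1e-6):
    """Gauss-Newton for a single unknown: finite-difference Jacobian,
    then the scalar least-squares step x -= (J.r) / (J.J)."""
    x = x0
    for _ in range(iters):
        f = forward(x)
        r = [f[k] - y_obs[k] for k in range(len(y_obs))]
        f_eps = forward(x + eps)
        j = [(f_eps[k] - f[k]) / eps for k in range(len(y_obs))]
        x -= sum(jk * rk for jk, rk in zip(j, r)) / sum(jk * jk for jk in j)
    return x

# Toy "band" model: transmittance decays with layer height h in each channel
taus = [0.2, 0.5, 1.0, 2.0]                  # invented channel optical depths
model = lambda h: [math.exp(-t * h) for t in taus]
y_obs = model(2.0)                           # synthetic observation, h_true = 2
h_fit = fit_scalar(model, y_obs, x0=0.5)
```

    A full optimal-estimation retrieval replaces the plain least-squares step with one weighted by measurement-noise and prior covariances, but the linearize-and-iterate structure is the same.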

  20. Software development minimum guidance system. Algorithm and specifications of realizing special hardware processor data prefilter program

    International Nuclear Information System (INIS)

    Baginyan, S.A.; Govorun, N.N.; Tkhang, T.L.; Shigaev, V.N.

    1982-01-01

    A minimum guidance software system for measuring bubble chamber pictures, based on a scanner (HPD) and a special hardware processor (SHP), is described. An algorithm for a selective filter is proposed. The local software structure and the functional specifications of its major parts are described. Some examples of processing pictures from HBC-1 (JINR) are also presented

  1. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    Science.gov (United States)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  2. Chronic wrist pain: diagnosis and management. Development and use of a new algorithm

    NARCIS (Netherlands)

    van Vugt, R. M.; Bijlsma, J. W.; van Vugt, A. C.

    1999-01-01

    Chronic wrist pain can be difficult to manage and the differential diagnosis is extensive. To provide guidelines for assessment of the painful wrist, an algorithm was developed to encourage a structured approach to the diagnosis and management of these patients. A review of the literature on causes

  3. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, increased mobility leads to a higher amount of traffic-related activity on a global scale. ...

  4. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Full Text Available Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images, representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for carrying out a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  5. Evaluation of nine HIV rapid test kits to develop a national HIV testing algorithm in Nigeria

    Directory of Open Access Journals (Sweden)

    Orji Bassey

    2015-05-01

    Full Text Available Background: Non-cold chain-dependent HIV rapid testing has been adopted in many resource-constrained nations as a strategy for reaching out to populations. HIV rapid test kits (RTKs) have the advantage of ease of use, low operational cost and short turnaround times. Before 2005, different RTKs had been used in Nigeria without formal evaluation. Between 2005 and 2007, a study was conducted to formally evaluate a number of RTKs and construct HIV testing algorithms. Objectives: The objectives of this study were to assess and select HIV RTKs and develop national testing algorithms. Method: Nine RTKs were evaluated using 528 well-characterised plasma samples. These comprised 198 HIV-positive specimens (37.5%) and 330 HIV-negative specimens (62.5%), collected nationally. Sensitivity and specificity were calculated with 95% confidence intervals for all nine RTKs singly and for serial and parallel combinations of six RTKs; and relative costs were estimated. Results: Six of the nine RTKs met the selection criteria, including the minimum sensitivity and specificity (both ≥ 99.0%) requirements. There were no significant differences in the sensitivities or specificities of RTKs in the serial and parallel algorithms, but the cost of RTKs in parallel algorithms was twice that in serial algorithms. Consequently, three serial algorithms, comprising four test kits (Bundi™, Determine™, Stat-Pak® and Uni-Gold™) with 100.0% sensitivity and 99.1% – 100.0% specificity, were recommended and adopted as national interim testing algorithms in 2007. Conclusion: This evaluation provides the first evidence for reliable combinations of RTKs for HIV testing in Nigeria. However, these RTKs need further evaluation in the field (Phase II) to re-validate their performance.
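    Under an independence assumption between the two tests (not stated in the study), the combined performance of two-kit serial and parallel algorithms can be computed as follows; the 99% figures in the example are only illustrative, not the evaluated kits' measured values:

```python
def serial_two_test(sens1, spec1, sens2, spec2):
    """Serial algorithm: test 2 is run only on test-1 positives, and a
    specimen is positive only if both tests agree (independence assumed)."""
    sens = sens1 * sens2
    spec = spec1 + (1.0 - spec1) * spec2
    return sens, spec

def parallel_two_test(sens1, spec1, sens2, spec2):
    """Parallel algorithm: both tests run on every specimen (hence twice
    the kit cost); a specimen is positive if either test is positive."""
    sens = sens1 + (1.0 - sens1) * sens2
    spec = spec1 * spec2
    return sens, spec

# Two hypothetical kits, each 99% sensitive and 99% specific
serial = serial_two_test(0.99, 0.99, 0.99, 0.99)      # (0.9801, 0.9999)
parallel = parallel_two_test(0.99, 0.99, 0.99, 0.99)  # (0.9999, 0.9801)
```

    This mirrors the study's cost finding: the parallel scheme consumes both kits for every specimen, while the serial scheme consumes the second kit only for initial positives.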

  6. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    Science.gov (United States)

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, John L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, but it may not be available during all OLI collections. This illustrates a need for cloud-cover assessment (CCA) for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10⁹ pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.
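    The pixel-level figures above come from comparison against manual masks; the bookkeeping for such a validation can be sketched as follows (the counts are invented, not the study's):

```python
def mask_metrics(tp, fp, fn, tn):
    """Pixel-level agreement and scene-cloud-fraction error of an automated
    cloud mask validated against a manual mask (all counts in pixels)."""
    total = tp + fp + fn + tn
    agreement = (tp + tn) / total                      # matching pixels
    cf_error = (tp + fp) / total - (tp + fn) / total   # predicted minus true
    return agreement, cf_error

# Hypothetical 100-pixel scene: 40 hits, 5 false alarms, 5 misses, 50 clear
agreement, cf_error = mask_metrics(tp=40, fp=5, fn=5, tn=50)
```

    Note that `cf_error` can be zero even when pixels disagree, which is why the study reports both scene cloud fraction error and pixel-level agreement.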

  7. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    Full Text Available The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that host filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA (“ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning”) (Gandola et al., 2016) [1]. This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to denoise input images from background signals; then spline curves and the least squares method were used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
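    The Sobel convolution step mentioned above can be illustrated with a minimal pure-Python gradient-magnitude filter; this is a generic sketch of the operator, not the ACQUA implementation:

```python
def sobel_magnitude(img):
    """Gradient magnitude with the 3x3 Sobel kernels (pure Python, borders
    left at zero). Strong responses mark filament edges against background."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the filter responds strongly along the edge
img = [[0, 0, 1, 1] for _ in range(4)]
out = sobel_magnitude(img)
```

    In a pipeline like ACQUA's, thresholding such a gradient map separates filament candidates from background before the spline-fitting stage.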

  8. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.

  9. A sonification algorithm for developing the off-roads models for driving simulators

    Science.gov (United States)

    Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia

    2018-01-01

    In this paper, a sonification algorithm for developing off-road models for driving simulators is proposed. The aim of this algorithm is to overcome the difficulty of identifying the heuristics best suited to a particular off-road profile built from measurements. The sonification algorithm is based on stochastic polynomial chaos analysis, suitable for solving equations with random input data. The fluctuations are generated by incomplete measurements leading to inhomogeneities of the cross-sectional curves of off-roads before and after deformation, the unstable contact between the tire and the road, and the unrealistic distribution of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems and the results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual reality environment with increased degrees of realism for driving simulators and higher user flexibility.

  10. Development of Data Processing Algorithms for the Upgraded LHCb Vertex Locator

    CERN Document Server

    AUTHOR|(CDS)2101352

    The LHCb detector will see a major upgrade during LHC Long Shutdown II, which is planned for 2019/20. The silicon Vertex Locator subdetector will be upgraded for operation under the new run conditions. The detector will be read out using a data acquisition board based on an FPGA. The work presented in this thesis is concerned with the development of the data processing algorithms to be used in this data acquisition board. In particular, work in three different areas of the FPGA is covered: the data processing block, the low level interface, and the post router block. The algorithms produced have been simulated and tested, and shown to provide the required performance. Errors in the way the initial implementation of the Gigabit Wireline Transmitter serialized data in the low level interface were discovered and corrected. The data scrambling algorithm and the post router block have been incorporated in the front end readout chip.

  11. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    International Nuclear Information System (INIS)

    Aarle, Wim van; Palenstijn, Willem Jan; De Beenhouwer, Jan; Altantzis, Thomas; Bals, Sara; Batenburg, K. Joost; Sijbers, Jan

    2015-01-01

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series

  12. Postpartum IGF-I and IGFBP-2 levels are prospectively associated with the development of type 2 diabetes in women with previous gestational diabetes mellitus.

    Science.gov (United States)

    Lappas, M; Jinks, D; Shub, A; Willcox, J C; Georgiou, H M; Permezel, M

    2016-12-01

    Women with previous gestational diabetes mellitus (GDM) are at greater risk of developing type 2 diabetes. In the general population, the insulin-like growth factor (IGF) system has been implicated in the development of type 2 diabetes. The aim of this study was to determine if circulating IGF-I, IGF-II, IGFBP-1 and IGFBP-2 levels 12 weeks following a GDM pregnancy are associated with an increased risk of developing type 2 diabetes. IGF-I, IGF-II, IGFBP-1 and IGFBP-2 levels were measured in 98 normal glucose tolerant women, 12 weeks following an index GDM pregnancy, using enzyme immunoassay. Women were assessed for up to 10 years for the development of overt type 2 diabetes. Among the 98 women with previous GDM, 21 (21%) developed diabetes during the median follow-up period of 8.5 years. After adjusting for age and BMI, IGF-I and IGFBP-2 were significantly associated with the development of type 2 diabetes. In a clinical model of prediction of type 2 diabetes that included age, BMI, pregnancy fasting glucose and postnatal fasting glucose, the addition of IGF-I and IGFBP-2 resulted in an improvement in the net reclassification index of 17.8%. High postpartum IGF-I and low postpartum IGFBP-2 levels are a significant risk factor for the development of type 2 diabetes in women with a previous history of GDM. This is the first report that identifies IGF-I and IGFBP-2 as potential biomarkers for the prediction of type 2 diabetes in women with a history of GDM. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  13. The development of a gamma energy identification algorithm for compact radiation sensors using a stepwise refinement technique

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Jun [Div. of Radiation Regulation, Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Ye Won; Kim, Hyun Duk; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Yi, Yun [Dept. of of Electronics and Information Engineering, Korea University, Seoul (Korea, Republic of)

    2017-06-15

    A gamma energy identification algorithm using spectral decomposition combined with a smoothing method was suggested to confirm the existence of artificial radioisotopes. The algorithm combines the original pattern-recognition method with a smoothing method to enhance the ability of radiation sensors with low energy resolution to identify gamma energies. The gamma energy identification algorithm for the compact radiation sensor is a three-step refinement process. First, the magnitude set is calculated by the original spectral decomposition. Second, the modeling error in the magnitude set is reduced by the smoothing method. Third, the expected gamma energy is decided based on the enhanced magnitude set resulting from the spectral decomposition with the smoothing method. The algorithm was optimized for the designed radiation sensor, composed of a CsI (Tl) scintillator and a silicon PIN diode. The two performance parameters used to evaluate the algorithm are the accuracy of the expected gamma energy and the number of repeated calculations. The original gamma energy was accurately identified for a single gamma energy by adapting this modeling-error reduction method. The average error also decreased by half for multiple gamma energies in comparison to the original spectral decomposition. In addition, the number of repeated calculations decreased by half even in low-fluence conditions under 10{sup 4} (per 0.09 cm{sup 2} of the scintillator surface). Through the development of this algorithm, we have confirmed the possibility of developing a product that can identify nearby artificial radionuclides using inexpensive radiation sensors that are easy for the public to use. It can therefore help reduce public anxiety about radiation exposure by determining the presence of artificial radionuclides in the vicinity.
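
The role of the smoothing step can be shown on a toy spectrum. The sketch below is not the authors' spectral-decomposition code: it builds a synthetic low-resolution spectrum with a single photopeak plus noise, applies a moving-average smoother, and locates the peak channel; channel numbers, widths and noise levels are invented.

```python
import math
import random

random.seed(7)

# Synthetic low-resolution spectrum: one photopeak at channel 662
# (a stand-in for Cs-137) on top of uniform noise; all numbers invented.
CHANNELS, PEAK, WIDTH, AMP = 1024, 662, 30.0, 200.0
spectrum = [AMP * math.exp(-0.5 * ((ch - PEAK) / WIDTH) ** 2) + random.uniform(0.0, 40.0)
            for ch in range(CHANNELS)]

def smooth(counts, half_window=15):
    """Moving-average smoothing, standing in for the paper's smoothing step."""
    out = []
    for i in range(len(counts)):
        lo, hi = max(0, i - half_window), min(len(counts), i + half_window + 1)
        out.append(sum(counts[lo:hi]) / (hi - lo))
    return out

smoothed = smooth(spectrum)
peak_channel = max(range(CHANNELS), key=smoothed.__getitem__)
```

Smoothing suppresses the channel-to-channel noise so that the argmax lands reliably on the photopeak, which is the intuition behind reducing the modeling error before the final energy decision.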

  14. Lightning Jump Algorithm Development for the GOES-R Geostationary Lightning Mapper

    Science.gov (United States)

    Schultz, E.; Schultz, C.; Chronis, T.; Stough, S.; Carey, L.; Calhoun, K.; Ortega, K.; Stano, G.; Cecil, D.; Bateman, M.

    2014-01-01

    Current work on the lightning jump algorithm to be used with the GOES-R Geostationary Lightning Mapper (GLM) data stream is multifaceted due to the intricate interplay between storm tracking, GLM proxy data, and the performance of the lightning jump itself. This work outlines the progress of the last year, in which the analysis and performance of the lightning jump algorithm with automated storm tracking and GLM proxy data were assessed using over 700 storms from North Alabama. The cases analyzed coincide with previous semi-objective work performed using total lightning mapping array (LMA) measurements in Schultz et al. (2011). Analysis shows that key components of the algorithm (flash rate and sigma thresholds) have the greatest influence on its performance when validating against severe storm reports. Automated objective analysis using the GLM proxy data has shown probability of detection (POD) values around 60% with false alarm rates (FAR) around 73% using methodology similar to Schultz et al. (2011). However, when applying verification methods similar to those employed by the National Weather Service, POD values increase slightly (69%) and FAR values decrease (63%). The relationship between storm tracking and the lightning jump has also been tested in a real-time framework at NSSL. This system includes fully automated tracking by radar alone, real-time LMA and radar observations, and the lightning jump. Results indicate that the POD is strong at 65%. However, the FAR is significantly higher than in Schultz et al. (2011) (50-80%, depending on various tracking/lightning jump parameters) when using storm reports for verification. Given known issues with Storm Data, the performance of the real-time jump algorithm is also being tested with high-density radar and surface observations from the NSSL Severe Hazards Analysis & Verification Experiment (SHAVE).
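
The sigma-threshold idea behind the lightning jump, and the POD/FAR scores used for verification, can be sketched in a few lines. This is a bare-bones stand-in for the published 2-sigma configuration (Schultz et al.), not the GLM operational code; the window length and the toy flash-rate series are invented.

```python
def lightning_jump(flash_rates, sigma_mult=2.0):
    """Flag a 'jump' when the latest flash-rate change exceeds sigma_mult
    times the standard deviation of the five preceding rate changes.
    Windowing details in the operational algorithm differ."""
    jumps = []
    for t in range(6, len(flash_rates)):
        history = [flash_rates[i + 1] - flash_rates[i] for i in range(t - 6, t - 1)]
        mean = sum(history) / len(history)
        var = sum((h - mean) ** 2 for h in history) / len(history)
        current = flash_rates[t] - flash_rates[t - 1]
        if var > 0 and current > sigma_mult * var ** 0.5:
            jumps.append(t)
    return jumps

def pod_far(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio used for verification."""
    return hits / (hits + misses), false_alarms / (hits + false_alarms)

rates = [2, 3, 2, 4, 3, 4, 3, 25, 26, 27]   # abrupt electrification at index 7
jumps = lightning_jump(rates)
```

Raising `sigma_mult` trades FAR against POD, which is exactly the flash-rate/sigma-threshold sensitivity the study reports.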

  15. Development of a MELCOR self-initialization algorithm for boiling water reactors

    International Nuclear Information System (INIS)

    Chien, C.S.; Wang, S.J.; Cheng, S.K.

    1996-01-01

    The MELCOR code, developed by Sandia National Laboratories, is suitable for calculating source terms and simulating severe accident phenomena of nuclear power plants. Prior to simulating a severe accident transient with MELCOR, the initial steady-state conditions must be generated in advance. The current MELCOR users' manuals do not provide a self-initialization procedure, which is why users have to adjust the initial conditions themselves through a trial-and-error approach. A MELCOR self-initialization algorithm for boiling water reactor plants has been developed, which eliminates the tedious trial-and-error procedure and improves the simulation accuracy. This algorithm automatically adjusts important plant variables, such as the dome pressure, downcomer level, and core flow rate, to the desired conditions. It is implemented through input with control functions provided in MELCOR. The reactor power and feedwater temperature are fed as input data. Initialization of the full-power conditions of the Kuosheng nuclear power station is cited as an example; these initial conditions are generated successfully with the developed algorithm. The generated initial conditions can be stored in a restart file and used for transient analysis. The methodology in this study improves the accuracy and consistency of transient calculations. Meanwhile, the algorithm provides all MELCOR users an easy and correct method for establishing the initial conditions
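
The principle of replacing manual trial and error with feedback that drives plant variables to their targets can be mimicked in a toy model. The sketch below is not MELCOR physics or its control-function syntax: the starting guesses, gains and tolerances are all invented, and two decoupled proportional loops stand in for the coupled plant.

```python
def initialize_steady_state(target_pressure, target_level, steps=2000):
    """Drive two toy plant variables to their target steady-state values with
    proportional feedback, mimicking what control-function input automates."""
    pressure, level = 5.0e6, 10.0        # arbitrary starting guesses (Pa, m)
    kp = 0.1                             # proportional gain (assumed)
    for _ in range(steps):
        pressure += kp * (target_pressure - pressure)
        level += kp * (target_level - level)
        if abs(pressure - target_pressure) < 1.0 and abs(level - target_level) < 1e-4:
            break                        # both variables settled at the targets
    return pressure, level

# Full-power targets loosely patterned on a BWR dome pressure and downcomer level.
p, l = initialize_steady_state(7.17e6, 14.1)
```

Each iteration plays the role of one adjustment cycle; convergence replaces the hand-tuned restarts the abstract describes.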

  16. Algorithm of developing competitive strategies and the trends of realizing them for agricultural enterprises

    Directory of Open Access Journals (Sweden)

    Viktoriia Boiko

    2016-02-01

    The paper specifies the basic stages of developing and realizing a strategy for enhancing the competitiveness of enterprises and presents an appropriate algorithm. The study analyzes the economic indexes and performance of agrarian enterprises in the Kherson region, provides competitive strategies for the efficient development of agrarian enterprises with different levels of competitiveness, and specifies ways of realizing them that will contribute to the optimal use of the available strategic potential.

  17. Systems Engineering Approach to Develop Guidance, Navigation and Control Algorithms for Unmanned Ground Vehicle

    Science.gov (United States)

    2016-09-01

    List of abbreviations (extraction fragment from the thesis front matter): Global Positioning System; HNA, hybrid navigation algorithm; HRI, human-robot interface; IED, Improvised Explosive Device; IMU, inertial measurement unit; ... Potential Field Method; R&D, research and development; RDT&E, research, development, test and evaluation; RF, radiofrequency; RGB, red, green and blue; ROE, ... Abstract fragment: "... were radiofrequency (RF) controlled and pneumatically actuated upon receiving the wireless commands from the radio operator. The pairing of such an ..."

  18. The development of a novel knowledge-based weaning algorithm using pulmonary parameters: a simulation study.

    Science.gov (United States)

    Guler, Hasan; Kilic, Ugur

    2018-03-01

    Weaning is important for patients and for clinicians, who have to determine the correct weaning time so that patients do not become dependent on the ventilator. Some predictors have already been developed, such as the rapid shallow breathing index (RSBI), the pressure time index (PTI), and the Jabour weaning index. Many important dimensions of weaning are sometimes ignored by these predictors. This is an attempt to develop a knowledge-based weaning process via fuzzy logic that eliminates the disadvantages of the present predictors. Sixteen vital parameters listed in the published literature have been used to determine the weaning decisions in the developed system. Since too many individual parameters are involved, related parameters were grouped together to determine acid-base balance, adequate oxygenation, adequate pulmonary function, hemodynamic stability, and the psychological status of the patients. To test the performance of the developed algorithm, 20 clinical scenarios were generated using Monte Carlo simulations and the Gaussian distribution method. The developed knowledge-based algorithm and the RSBI predictor were applied to the generated scenarios. Finally, a clinician evaluated each clinical scenario independently. Student's t test was used to show the statistical differences between the developed weaning algorithm, the RSBI, and the clinician's evaluation. According to the results obtained, there were no statistical differences between the proposed methods and the clinician evaluations.
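
A knowledge-based decision of this kind can be illustrated with a few fuzzy membership functions. The snippet below is a toy Mamdani-style sketch, not the published system: it uses three inputs in place of the sixteen grouped parameters, and every membership range is an assumption made for the example.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def weaning_score(rsbi, spo2, ph):
    """Min/max combination of three inputs standing in for the paper's
    grouped parameters; all ranges invented for illustration."""
    ready = min(
        tri(rsbi, 0.0, 40.0, 105.0),      # RSBI below ~105 favours weaning
        tri(spo2, 90.0, 98.0, 101.0),     # adequate oxygenation
        tri(ph, 7.30, 7.40, 7.50),        # acid-base balance
    )
    not_ready = max(1.0 - tri(rsbi, 0.0, 40.0, 105.0),
                    1.0 - tri(spo2, 90.0, 98.0, 101.0))
    return ready, ready > not_ready       # crude defuzzification

score, advise_weaning = weaning_score(rsbi=35.0, spo2=97.0, ph=7.38)
```

Grouping parameters into a handful of fuzzy aggregates, as the paper does, keeps the rule base tractable while still using every measured input.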

  19. Development and comparisons of wind retrieval algorithms for small unmanned aerial systems

    Science.gov (United States)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2012-12-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.
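
One family of GPS-only wind retrievals assumes constant airspeed while the aircraft changes heading: the GPS ground-velocity samples then lie on a circle whose centre is the wind vector. The abstract does not specify which algorithms were compared, so the following is a generic sketch of such a circle-fit retrieval with invented numbers and a noiseless simulated flight.

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_wind(vx, vy):
    """Algebraic least-squares circle fit: samples taken while turning at
    constant airspeed satisfy (vx - wx)^2 + (vy - wy)^2 = airspeed^2."""
    n = len(vx)
    b = [x * x + y * y for x, y in zip(vx, vy)]
    A = [[sum(x * x for x in vx), sum(x * y for x, y in zip(vx, vy)), sum(vx)],
         [sum(x * y for x, y in zip(vx, vy)), sum(y * y for y in vy), sum(vy)],
         [sum(vx), sum(vy), float(n)]]
    rhs = [sum(bi * x for bi, x in zip(b, vx)),
           sum(bi * y for bi, y in zip(b, vy)),
           sum(b)]
    p, q, c = solve3(A, rhs)            # linear model: b = p*vx + q*vy + c
    wx, wy = p / 2.0, q / 2.0
    return wx, wy, math.sqrt(c + wx * wx + wy * wy)

# Simulated orbit: true wind (3, -2) m/s, airspeed 15 m/s, full heading sweep.
H = [math.radians(d) for d in range(0, 360, 10)]
gvx = [3.0 + 15.0 * math.cos(h) for h in H]
gvy = [-2.0 + 15.0 * math.sin(h) for h in H]
wx, wy, airspeed = fit_wind(gvx, gvy)
```

With real data the residual of the fit also gives a rough quality measure for comparison against rawinsonde or sodar profiles.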

  20. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  1. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  2. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  3. jClustering, an open framework for the development of 4D clustering algorithms.

    Directory of Open Access Journals (Sweden)

    José María Mateos-Pérez

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary.
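
The kind of algorithm jClustering hosts can be illustrated with a plain k-means over synthetic time-activity curves (TACs). This sketch is independent of jClustering's Java/ImageJ API; the TAC shapes and the deterministic initialization are invented for the example.

```python
import random

random.seed(0)

def kmeans_tacs(vectors, k, iters=25):
    """Plain k-means over voxel time-activity curves; a minimal stand-in
    for the 4D clustering algorithms the framework is designed to host."""
    # deterministic init for the sketch: evenly spaced input vectors (k >= 2)
    centroids = [vectors[i * (len(vectors) - 1) // (k - 1)][:] for i in range(k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(v, centroids[j])))
            groups[j].append(v)
        centroids = [[sum(col) / len(g) for col in zip(*g)] if g else centroids[j]
                     for j, g in enumerate(groups)]
    return centroids, groups

# Two synthetic TAC families: fast washout versus slow accumulation.
fast = [[10.0 - t + random.uniform(-0.5, 0.5) for t in range(8)] for _ in range(30)]
slow = [[float(t) + random.uniform(-0.5, 0.5) for t in range(8)] for _ in range(30)]
centroids, groups = kmeans_tacs(fast + slow, k=2)
```

In a framework like jClustering, only the body of `kmeans_tacs` would change between algorithms; the image loading and TAC extraction are handled by the platform.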

  4. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    The article considers the issue of the allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. Developing an algorithm for the allocation of depreciation costs is particularly relevant in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Meeting the adequacy conditions of such an algorithm makes it possible to evaluate the appropriateness of investments in fixed assets and to study the final financial results of an industrial enterprise as they depend on management decisions in depreciation policy. It should be noted that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures, in the rows of structural elements unable to generate fixed assets (some of the service units, households, and corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for a flowchart for subsequent software implementation. The construction of such an algorithm and its use in dynamic input-output models of industrial enterprises is motivated by the international acceptance of the effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.
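
The allocation step and the degeneracy caused by zero rows can be illustrated with a toy capital-expenditure matrix. The matrix, rate, sector names and the allocation rule below are invented for the sketch, not taken from the article's model.

```python
# Toy capital-expenditure matrix K for three structural elements; row i lists
# the fixed assets element i generates for each consuming sector. The zero row
# mirrors the degeneracy noted in the article (a unit generating no fixed assets).
K = [
    [100.0, 40.0, 10.0],   # machine building
    [20.0, 60.0, 5.0],     # energy
    [0.0, 0.0, 0.0],       # service unit: no fixed assets generated
]
RATE = 0.12                # assumed straight-line depreciation rate

def allocate_depreciation(K, rate):
    """Depreciation cost charged to consuming sector j: rate * sum_i K[i][j]."""
    n = len(K[0])
    return [rate * sum(row[j] for row in K) for j in range(n)]

costs = allocate_depreciation(K, RATE)
```

The zero row contributes nothing to any column sum, so the allocation stays well defined even though the matrix itself is degenerate.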

  5. Development of Human-level Decision Making Algorithm for NPPs through Deep Neural Networks : Conceptual Approach

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2017-01-01

    The development of operation support systems and automation systems is closely related to the machine learning field. However, since it is hard to achieve human-level delicacy and flexibility for complex tasks with conventional machine learning technologies, only operation support systems with simple purposes have been developed, and high-level automation studies have not been actively conducted. As one of the efforts for reducing human error in NPPs and as a technical advance toward automation, the ultimate goal of this research is to develop a human-level decision making algorithm for NPPs during emergency situations. The concepts of supervised learning (SL), reinforcement learning (RL), policy networks, value networks, and Monte Carlo tree search (MCTS), which have been applied to decision making algorithms in other fields, are introduced and combined with nuclear field specifications. Since the research is currently at the conceptual stage, more research is warranted.
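
As a concrete, if toy, illustration of the MCTS ingredient mentioned above: the sketch below runs plain UCT with random rollouts on a two-player counting game standing in for a decision tree over plant states. Everything here (the game, the constants) is invented; the paper's concept additionally couples MCTS with policy and value networks, which are omitted.

```python
import math
import random

random.seed(1)

# Toy domain: two players alternately add 1 or 2 to a running total;
# whoever reaches exactly 10 wins.
WIN = 10

def legal(total):
    return [a for a in (1, 2) if total + a <= WIN]

class Node:
    def __init__(self, total, player, parent=None):
        self.total, self.player = total, player   # player to move at `total`
        self.parent = parent
        self.children = []
        self.visits, self.wins = 0, 0.0
        self.untried = legal(total)

def playout_winner(total, player):
    """Random rollout; returns the id (0/1) of the player who reaches WIN."""
    while total < WIN:
        total += random.choice(legal(total))
        player ^= 1
    return player ^ 1          # the player who just moved reached WIN

def mcts(root_total, iters=3000):
    root = Node(root_total, player=0)
    for _ in range(iters):
        node = root
        # selection: descend fully expanded nodes by UCB1
        while not node.untried and node.children:
            node = max(node.children,
                       key=lambda ch: ch.wins / ch.visits
                       + math.sqrt(2.0 * math.log(node.visits) / ch.visits))
        # expansion
        if node.untried:
            a = node.untried.pop()
            child = Node(node.total + a, node.player ^ 1, parent=node)
            node.children.append(child)
            node = child
        # simulation + backpropagation; wins are credited to the player
        # whose move created each node on the path
        winner = playout_winner(node.total, node.player)
        while node is not None:
            node.visits += 1
            if winner == node.player ^ 1:
                node.wins += 1.0
            node = node.parent
    best = max(root.children, key=lambda ch: ch.visits)
    return best.total - root_total     # recommended action
```

In the envisioned NPP setting, the random rollout would be replaced by a value network and the uniform action choice by a policy network, which is the AlphaGo-style combination the abstract alludes to.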

  6. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
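
Two of the algorithms the book names are short enough to state outright. The book develops them in C++; for uniformity with the other examples here, the following is a compact Python transcription.

```python
def sieve(n):
    """Sieve of Eratosthenes: all primes up to and including n."""
    is_prime = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

def gcd(a, b):
    """Euclidean algorithm for the greatest common divisor."""
    while b:
        a, b = b, a % b
    return a
```

Both illustrate the book's theme: a few lines of code whose correctness and running time can be analyzed with elementary mathematics.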

  7. Development and validation of an online interactive, multimedia wound care algorithms program.

    Science.gov (United States)

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms, we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed-methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, an interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage of correct, partially correct, or incorrect algorithm and dressing choices, and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong. Eighty-five percent (85%) of algorithm choices and 87% of dressing choices were fully correct, even though some programming design issues were identified. Online study results were consistently better than the results of a previously conducted, comparable paper-and-pencil study. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly, the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received, indicating its perceived benefits for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs. Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide

  8. The development of a new algorithm to calculate a survival function in non-parametric ways

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    In this study, a generalized formula of the Kaplan-Meier method is developed. The idea of this algorithm is that the result of the Kaplan-Meier estimator is the same as that of the redistribute-to-the-right algorithm; hence, the result of the Kaplan-Meier estimator can be obtained by redistributing to the right. This can be explained in the following steps. First, the same mass is distributed to all the points. Second, when a censored point is reached, the mass of that point must be redistributed to the right according to the following rule: normalize the masses located to the right of the censored point, and redistribute the mass of the censored point to the right according to the ratio of the normalized masses. This illustrates the main idea of the algorithm. The idea is more efficient than the PL (product-limit) estimator in the sense that it decreases the mass beyond that area. Just like the redistribute-to-the-right algorithm, this method is sufficient for the probability theory
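
The redistribute-to-the-right idea can be made concrete in a few lines. The sketch below uses illustrative data, not the paper's: it gives every observation mass 1/n, pushes each censored point's mass proportionally onto the points to its right, and recovers exactly the product-limit survival values.

```python
def km_survival(times, events):
    """Kaplan-Meier survival function via redistribute-to-the-right.
    times: observation times; events: 1 = event observed, 0 = censored."""
    n = len(times)
    # sort by time, events before censorings at ties (the usual KM convention)
    order = sorted(range(n), key=lambda i: (times[i], -events[i]))
    mass = [1.0 / n] * n
    for pos, i in enumerate(order):
        if events[i] == 0:
            right = order[pos + 1:]
            total = sum(mass[j] for j in right)
            if total > 0.0:
                for j in right:
                    # redistribute in proportion to the normalized masses
                    mass[j] += mass[i] * mass[j] / total
            mass[i] = 0.0
    def survival(t):
        return sum(mass[i] for i in range(n) if times[i] > t)
    return survival

# event at t=1, censored at t=2, events at t=3, 4, 5
S = km_survival([1, 2, 3, 4, 5], [1, 0, 1, 1, 1])
```

For this data the product-limit estimator gives S(1) = 4/5, S(3) = 4/5 * 2/3 = 8/15 and S(4) = 4/15, and the redistributed masses reproduce those values exactly, which is the equivalence the study builds on.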

  9. Prosthetic joint infection development of an evidence-based diagnostic algorithm.

    Science.gov (United States)

    Mühlhofer, Heinrich M L; Pohlig, Florian; Kanz, Karl-Georg; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; Kelch, Sarah; Harrasser, Norbert; von Eisenhart-Rothe, Rüdiger; Schauwecker, Johannes

    2017-03-09

    Increasing rates of prosthetic joint infection (PJI) have presented challenges for general practitioners, orthopedic surgeons and the health care system in recent years. The diagnosis of PJI is complex; multiple diagnostic tools are used in the attempt to correctly diagnose PJI. Evidence-based algorithms can help to identify PJI using standardized diagnostic steps. We reviewed relevant publications between 1990 and 2015 using a systematic literature search in MEDLINE and PUBMED. The selected search results were then classified into levels of evidence. The keywords were prosthetic joint infection, biofilm, diagnosis, sonication, antibiotic treatment, implant-associated infection, Staph. aureus, rifampicin, implant retention, PCR, MALDI-TOF, serology, synovial fluid, C-reactive protein level, total hip arthroplasty (THA), total knee arthroplasty (TKA) and combinations of these terms. From an initial 768 publications, 156 publications were stringently reviewed. Publications with class I-III recommendations (EAST) were considered. We developed an algorithm for the diagnostic approach to display the complex diagnosis of PJI in a clear and logically structured process according to ISO 5807. The evidence-based standardized algorithm combines modern clinical requirements and evidence-based treatment principles. The algorithm provides a detailed, transparent standard operating procedure (SOP) for diagnosing PJI. Thus, consistently high, examiner-independent process quality is assured to meet the demands of modern quality management in PJI diagnosis.

  10. The Development of a Parameterized Scatter Removal Algorithm for Nuclear Materials Identification System Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon Robert [Univ. of Tennessee, Knoxville, TN (United States)

    2010-03-01

    This dissertation presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects non-intrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross-sections of features inside the object can be determined. The cross sections can then be used to identify the materials and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons which are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. 
Using

  11. THE DEVELOPMENT OF A PARAMETERIZED SCATTER REMOVAL ALGORITHM FOR NUCLEAR MATERIALS IDENTIFICATION SYSTEM IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Grogan, Brandon R [ORNL]

    2010-05-01

    This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. 
Using the

  12. Development and Comparative Study of Effects of Training Algorithms on Performance of Artificial Neural Network Based Analog and Digital Automatic Modulation Recognition

    Directory of Open Access Journals (Sweden)

    Jide Julius Popoola

    2015-11-01

    Full Text Available This paper proposes two new classifiers that automatically recognise twelve combined analog and digital modulated signals without any a priori knowledge of the modulation schemes or the modulation parameters. The classifiers are developed using a pattern recognition approach. Feature keys extracted from the instantaneous amplitude, instantaneous phase and the spectrum symmetry of the simulated signals are used as inputs to the artificial neural networks employed in developing the classifiers. The two developed classifiers are trained using the scaled conjugate gradient (SCG and conjugate gradient (CONJGRAD training algorithms. Sample results of the two classifiers show good recognition performance, with an average overall recognition rate above 99.50% at signal-to-noise ratio (SNR values of 0 dB and above for both training algorithms, and average overall recognition rates slightly above 99.00% and 96.40% at -5 dB SNR for the SCG and CONJGRAD training algorithms, respectively. The comparative performance evaluation of the two developed classifiers using the two training algorithms shows that the two training algorithms have different effects on both the response rate and the efficiency of the two developed artificial neural network classifiers. In addition, a comparison of the overall success recognition rates between the two classifiers developed in this study using the pattern recognition approach and a classifier reported in the surveyed literature using a decision-theoretic approach shows that the classifiers developed in this study perform favourably with regard to accuracy and performance probability as compared to the classifier presented in the previous study.
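
The instantaneous-amplitude feature keys mentioned above can be sketched with a discrete analytic signal (FFT-based Hilbert transform). The feature definition and test-signal parameters below are illustrative assumptions, not the exact feature keys used in the paper:

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal via the FFT (Hilbert-transform method)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def envelope_spread(x):
    """Feature key: spread of the normalised instantaneous amplitude.

    Amplitude-modulated signals have a fluctuating envelope, so this
    statistic separates them from constant-envelope signals.
    """
    amp = np.abs(analytic_signal(x))
    amp_n = amp / amp.mean()
    return float(np.std(np.abs(amp_n - 1.0)))

# toy signals: an unmodulated carrier vs. a 50%-depth AM signal
t = np.arange(2048) / 2048.0
carrier = np.cos(2 * np.pi * 200 * t)
am = (1.0 + 0.5 * np.cos(2 * np.pi * 10 * t)) * carrier
```

With these bin-aligned test tones the AM envelope spread is large while the carrier's is near zero, so a threshold (or a neural network fed with several such keys) can separate the two classes.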

  13. The development of controller and navigation algorithm for underwater wall crawler

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyung Suck; Kim, Kyung Hoon; Kim, Min Young [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-01-01

    In this project, the control system of an underwater robotic vehicle (URV) for underwater wall inspection in nuclear reactor pools and related facilities has been developed. The following four sub-projects were studied: (1) development of the controller and motor driver for the URV; (2) development of the control algorithm for the tracking control of the URV; (3) development of the localization system; (4) underwater experiments of the developed system. First, the dynamic characteristics of the thruster with the DC servo-motor were analyzed experimentally. Second, the controller board using the INTEL 80C196 was designed and constructed, and the software for communication and motor control was developed. Third, the PWM motor driver was developed. Fourth, the localization system using the laser scanner and inclinometer was developed and tested in the pool. Fifth, the dynamics of the URV were studied and proper control algorithms for the URV were proposed. Lastly, validation of the integrated system was performed experimentally. (author). 27 refs., 51 figs., 8 tabs.

  14. Measuring river from the cloud - River width algorithm development on Google Earth Engine

    Science.gov (United States)

    Yang, X.; Pavelsky, T.; Allen, G. H.; Donchyts, G.

    2017-12-01

    Rivers are some of the most dynamic features of the terrestrial land surface. They help distribute freshwater, nutrients, and sediment, and they are also responsible for some of the greatest natural hazards. Despite their importance, our understanding of river behavior is limited at the global scale, in part because we do not have a river observational dataset that spans both time and space. Remote sensing data represent a rich, largely untapped resource for observing river dynamics. In particular, publicly accessible archives of satellite optical imagery, which date back to the 1970s, can be used to study the planview morphodynamics of rivers at the global scale. Here we present an image processing algorithm, developed on the Google Earth Engine cloud-based platform, that automatically extracts river centerlines and widths from Landsat 5, 7, and 8 scenes at 30 m resolution. Our algorithm makes use of the latest monthly global surface water history dataset and the existing Global River Width from Landsat (GRWL) dataset to efficiently extract river masks from each Landsat scene. A combination of distance transform and skeletonization techniques is then used to extract river centerlines. Finally, our algorithm calculates the wetted river width at each centerline pixel perpendicular to its local centerline direction. We validated this algorithm using in situ data estimated from 16 USGS gauge stations (N=1781). We find that 92% of the width differences are within 60 m (i.e., the length of two Landsat pixels). Leveraging Earth Engine's infrastructure of collocated data and processing power, our goal is to use this algorithm to reconstruct the morphodynamic history of rivers globally by processing over 100,000 Landsat 5 scenes covering 1984 to 2013.
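
The distance-transform step described above can be illustrated on a synthetic water mask. This sketch brute-forces the Euclidean distance transform and reads the width off the ridge of the distance field; it is a toy stand-in for the Earth Engine implementation, and the mask and geometry are invented:

```python
import numpy as np

def widths_from_mask(mask):
    """Per-column wetted width of a roughly east-west river.

    Each water pixel gets its Euclidean distance to the nearest land
    pixel (brute-force distance transform); the column-wise ridge of
    that field approximates the centerline, and the wetted width there
    is about twice the centerline distance minus one pixel (distances
    are measured between pixel centres).
    """
    dist = np.zeros(mask.shape)
    land = np.argwhere(mask == 0)
    for r, c in np.argwhere(mask == 1):
        dist[r, c] = np.sqrt(((land - (r, c)) ** 2).sum(axis=1)).min()
    return 2.0 * dist.max(axis=0) - 1.0

# synthetic scene: a straight river 5 pixels wide crossing a 12x30 grid
mask = np.zeros((12, 30), dtype=int)
mask[4:9, :] = 1
```

For this straight channel every column recovers the true 5-pixel width; the real algorithm additionally skeletonizes the mask and measures perpendicular to the local centerline direction so that bends are handled.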

  15. Interpreting "Personality" Taxonomies: Why Previous Models Cannot Capture Individual-Specific Experiencing, Behaviour, Functioning and Development. Major Taxonomic Tasks Still Lay Ahead.

    Science.gov (United States)

    Uher, Jana

    2015-12-01

    As science seeks to make generalisations, a science of individual peculiarities encounters intricate challenges. This article explores these challenges by applying the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) and by exploring taxonomic "personality" research as an example. Analyses of researchers' interpretations of the taxonomic "personality" models, constructs and data that have been generated in the field reveal widespread erroneous assumptions about the abilities of previous methodologies to appropriately represent individual-specificity in the targeted phenomena. These assumptions, rooted in everyday thinking, fail to consider that individual-specificity and others' minds cannot be directly perceived, that abstract descriptions cannot serve as causal explanations, that between-individual structures cannot be isomorphic to within-individual structures, and that knowledge of compositional structures cannot explain the process structures of their functioning and development. These erroneous assumptions and serious methodological deficiencies in widely used standardised questionnaires have effectively prevented psychologists from establishing taxonomies that can comprehensively model individual-specificity in most of the kinds of phenomena explored as "personality", especially in experiencing and behaviour and in individuals' functioning and development. Contrary to previous assumptions, it is not universal models but rather different kinds of taxonomic models that are required for each of the different kinds of phenomena, variations and structures that are commonly conceived of as "personality". Consequently, to comprehensively explore individual-specificity, researchers have to apply a portfolio of complementary methodologies and develop different kinds of taxonomies, most of which have yet to be developed. Closing, the article derives some meta-desiderata for future research on individuals' "personality".

  16. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    Science.gov (United States)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models and, as such, play a key role in providing decision support for risk assessment, disaster management, and emergency response efforts. In many parts of the world, inclusive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories, including incomplete or out-dated information on critical attributes as well as missing or erroneous attribute values. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images is presented. Basic concepts, terminologies, and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented, and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, and irregularity index. The errors associated with the interactive nature of the algorithm are quantified, and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data, and results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more

  17. Development of an Internet-Administered Cognitive Behavior Therapy Program (ENGAGE) for Parents of Children Previously Treated for Cancer: Participatory Action Research Approach.

    Science.gov (United States)

    Wikman, Anna; Kukkola, Laura; Börjesson, Helene; Cernvall, Martin; Woodford, Joanne; Grönqvist, Helena; von Essen, Louise

    2018-04-18

    Parenting a child through cancer is a distressing experience, and a subgroup of parents report negative long-term psychological consequences years after treatment completion. However, there is a lack of evidence-based psychological interventions for parents who experience distress in relation to a child's cancer disease after end of treatment. One aim of this study was to develop an internet-administered, cognitive behavior therapy-based, psychological, guided, self-help intervention (ENGAGE) for parents of children previously treated for cancer. Another aim was to identify acceptable procedures for future feasibility and efficacy studies testing and evaluating the intervention. Participatory action research methodology was used. The study included face-to-face workshops and related Web-based exercises. A total of 6 parents (4 mothers, 2 fathers) of children previously treated for cancer were involved as parent research partners. Moreover, 2 clinical psychologists were involved as expert research partners. Research partners and research group members worked collaboratively throughout the study. Data were analyzed iteratively using written summaries of the workshops and Web-based exercises parallel to data collection. A 10-week, internet-administered, cognitive behavior therapy-based, psychological, guided, self-help intervention (ENGAGE) was developed in collaboration with parent research partners and expert research partners. The content of the intervention, mode and frequency of e-therapist support, and the individualized approach for feedback were modified based on the research partner input. Shared solutions were reached regarding the type and timing of support from an e-therapist (eg, initial video or telephone call, multiple methods of e-therapist contact), duration and timing of intervention (eg, 10 weeks, 30-min assessments), and the removal of unnecessary support functions (eg, removal of chat and forum functions). Preferences for study procedures in

  18. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation, and the properties of the gamma radiation. Because the code correctly followed the proposed method, good results were obtained, and they encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)
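
The multiplicative EM update at the core of such reconstructions can be sketched with a toy four-projection system. The system matrix below is an invented stand-in for the simulated MICROSHIELD/SPECTEM geometry:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM reconstruction: x <- x / (A^T 1) * A^T (y / (A x)).

    Multiplicative updates keep the activity estimate nonnegative,
    which suits the ill-posed few-projection emission problem.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image, A^T 1
    for _ in range(n_iter):
        x = x / sens * (A.T @ (y / (A @ x)))
    return x

# toy four-projection system for a 2x2 activity image (two "row sums"
# and two "column sums"); an invented stand-in for the real geometry
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([4., 1., 2., 3.])
y = A @ x_true          # noiseless "measured" projections
x_rec = mlem(A, y)
```

Even though four projections leave the system underdetermined in general, the iteration reproduces the measured projections; validating against experimental data, as the abstract describes, is what establishes practical performance.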

  19. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation, and the properties of the gamma radiation. Because the code correctly followed the proposed method, good results were obtained, and they encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)

  20. Development of estimation algorithm of loose parts and analysis of impact test data

    International Nuclear Information System (INIS)

    Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho

    1999-11-01

    Loose parts are produced when components detach from the structure of the reactor coolant system (RCS) or enter the RCS from outside during test operation, refueling, and overhaul periods. These loose parts are carried by the reactor coolant and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report, an analysis algorithm for the estimation of the impact point and mass of a loose part is developed. The developed algorithm was tested with the impact test data of Yonggwang-3. The impact point estimated using the proposed algorithm was within 5 percent of the real test data. The estimated mass was within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor because this frequency affected the estimation of the impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data. The comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. By this comparison, the integrity of the sensors and the monitoring system could also be checked. (author)

  1. Performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2014-01-01

    The performance of the ATLAS Inner Detector (ID) Trigger algorithms being developed for running on the ATLAS High Level Trigger (HLT) processor farm during Run 2 of the LHC is presented. During the 2013-14 LHC long shutdown, modifications are being carried out to the LHC accelerator to increase both the beam energy and luminosity. These modifications will pose significant challenges for the ID Trigger algorithms, in terms of both execution time and physics performance. To meet these challenges, the ATLAS HLT software is being restructured to run as a more flexible single-stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. This will reduce the overall data volume that needs to be requested by the HLT system, since data will no longer need to be requested for each of the two separate processing stages. Development of the ID Trigger algorithms for Run 2, currently expected to be ready for detector commissioning near the end of 2014, is progressing well and the current efforts towards op...

  2. Development of test algorithm for semiconductor package with defects by using probabilistic neural network

    International Nuclear Information System (INIS)

    Kim, Jae Yeol; Sim, Jae Gi; Ko, Myoung Soo; Kim, Chang Hyun; Kim, Hun Cho

    2001-01-01

    In this study, the researchers developed an algorithm for estimating artificial defects in semiconductor packages and implemented it using pattern recognition technology. For this purpose, the algorithm was realized as software written in MATLAB. The software consists of several procedures, including ultrasonic image acquisition, equalization filtering, a Self-Organizing Map, and a Probabilistic Neural Network (PNN); the Self-Organizing Map and the Probabilistic Neural Network are neural network methods. The pattern recognition technology was applied to classify three kinds of defect patterns in semiconductor packages. The method estimates the probability density function from a training sample and determines it automatically. The PNN can distinguish flaws that are very difficult to discriminate, and because it processes its pattern units in parallel, we confirm that it is a very efficient classifier when a large amount of real process data is applied.
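
A minimal PNN of the kind described, Parzen-window class-density estimation followed by a Bayes decision, can be sketched as follows; the 2-D features and three synthetic clusters are invented stand-ins for the ultrasonic defect patterns:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.3):
    """Probabilistic neural network: each training vector is a pattern
    unit; the class score is the mean Gaussian kernel response (a
    Parzen-window density estimate), and the decision is the highest
    score (Bayes rule with equal priors)."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        d2 = ((train_X[train_y == c] - x) ** 2).sum(axis=1)
        scores.append(np.exp(-d2 / (2.0 * sigma ** 2)).mean())
    return int(classes[int(np.argmax(scores))])

# three synthetic clusters in a 2-D feature space standing in for the
# three defect patterns; feature values and spreads are invented
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
train_X = np.vstack([c + 0.2 * rng.standard_normal((30, 2)) for c in centers])
train_y = np.repeat([0, 1, 2], 30)
```

The class scores for all pattern units are independent, which is what makes the PNN naturally parallel: every kernel evaluation can run at the same time.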

  3. Development of a BWR loading pattern design system based on modified genetic algorithms and knowledge

    International Nuclear Information System (INIS)

    Martin-del-Campo, Cecilia; Francois, Juan Luis; Avendano, Linda; Gonzalez, Mario

    2004-01-01

    An optimization system based on Genetic Algorithms (GAs), in combination with expert knowledge coded in heuristic rules, was developed for the design of optimized boiling water reactor (BWR) fuel loading patterns. The system was coded in a computer program named Loading Pattern Optimization System based on Genetic Algorithms, in which the optimization code uses GAs to select candidate solutions and the core simulator code CM-PRESTO to evaluate them. A multi-objective function was built to maximize the cycle energy length while satisfying power and reactivity constraints used as BWR design parameters. Heuristic rules were applied to satisfy standard fuel management recommendations, such as the Control Cell Core and Low Leakage loading strategies, and octant symmetry. To test the system performance, an optimized cycle was designed and compared against an actual operating cycle of Laguna Verde Nuclear Power Plant, Unit I
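
The GA loop itself can be sketched generically. Here an invented surrogate fitness replaces the CM-PRESTO core-simulator evaluation, and the adjacency penalty loosely mimics a power-peaking design constraint; none of the numbers correspond to the actual system:

```python
import random

def ga_optimize(fitness, n_genes=16, pop=40, gens=60, seed=1):
    """Plain genetic-algorithm loop: tournament selection, one-point
    crossover, bit-flip mutation, and elitism. The fitness callback
    stands in for the core-simulator evaluation."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop)]
    best = max(P, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]                              # elitism
        while len(nxt) < pop:
            a = max(rng.sample(P, 3), key=fitness)   # tournament
            b = max(rng.sample(P, 3), key=fitness)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.1:                   # bit-flip mutation
                i = rng.randrange(n_genes)
                child[i] ^= 1
            nxt.append(child)
        P = nxt
        best = max(P + [best], key=fitness)
    return best

# toy surrogate: reward loaded "high-reactivity" cells but penalize
# adjacent pairs, mimicking an energy-vs-power-peaking trade-off
def surrogate(ch):
    gain = sum(ch)
    peak = sum(ch[i] and ch[i + 1] for i in range(len(ch) - 1))
    return gain - 3 * peak
```

In the real system the heuristic rules (Control Cell Core, Low Leakage, octant symmetry) would act as hard repairs or penalties in the same place this surrogate's penalty term sits.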

  4. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    Science.gov (United States)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampling data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial-impedance spectra presented by the resistor, inductor, resistor connected in parallel to a capacitor, and resistor connected in parallel to an inductor. The adequacy of the model is determined by using a simple artificial-intelligence function, which is applied to the output function of the Levenberg-Marquardt module. From the iteration of model modifications, the program finds an adequate equivalent-circuit model without any user input to the equivalent-circuit model.
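
The sum of partial-impedance spectra at the heart of the method can be sketched numerically. This toy replaces the Levenberg-Marquardt refinement and automatic model modification with a crude grid search over one fixed candidate circuit (a resistor in series with a parallel RC); all parameter values are invented:

```python
import numpy as np

def z_model(w, r0, r1, c1):
    """Series resistor plus a parallel RC: Z = R0 + R1 / (1 + j w R1 C1)."""
    return r0 + r1 / (1 + 1j * w * r1 * c1)

def fit_grid(w, z_meas, r0s, r1s, c1s):
    """Crude complex least-squares over a parameter grid; the paper's
    algorithm instead refines with Levenberg-Marquardt and grows or
    shrinks the equivalent circuit automatically."""
    best, best_err = None, np.inf
    for r0 in r0s:
        for r1 in r1s:
            for c1 in c1s:
                err = float((np.abs(z_meas - z_model(w, r0, r1, c1)) ** 2).sum())
                if err < best_err:
                    best, best_err = (r0, r1, c1), err
    return best

w = np.logspace(0, 6, 60)                    # angular frequencies, rad/s
z_true = z_model(w, 10.0, 100.0, 1e-6)       # noiseless "measured" spectrum
params = fit_grid(w, z_true,
                  np.linspace(5, 15, 21),    # R0 candidates
                  np.linspace(50, 150, 21),  # R1 candidates
                  np.logspace(-7, -5, 21))   # C1 candidates
```

Minimizing the complex residual (real and imaginary parts together) is what "complex least-squares" means here; the grids happen to contain the true values, so the search recovers them exactly.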

  5. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

    Full Text Available Optimum transport infrastructure usage is an important aspect of the development of the national economy of the Russian Federation. Thus, development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of indicators and the method of their calculation in relation to the transport subsystem as airport infrastructure. The work also reflects aspects of the evaluation of the possibilities of algorithmic computational mechanisms to improve the tools of public administration transport subsystems.

  6. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these kinds of issues, automation is not actively adopted, although human error probability drastically increases during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques, known as ‘deep learning’ techniques, have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), which is one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of the games. Also, in 2016, ‘Alpha-Go’, which was developed by ‘Google Deepmind’ based on deep learning techniques to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the World Go champion, with a score of 4:1. In the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm which can cover various situations in NPPs. As the first part, a quantitative and real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, and so adoption in the form of full automation will take a long time.
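
The medical early-warning-score idea can be sketched as a banded lookup: each monitored parameter is scored by how far it deviates from its normal band, and the per-parameter scores are summed into a single plant-state index. All parameter names and thresholds below are invented placeholders, not validated NPP criteria:

```python
def band_score(value, bands):
    """Return the score of the first band whose upper bound contains
    the value; values beyond every bound get the worst listed score."""
    for upper, score in bands:
        if value <= upper:
            return score
    return max(s for _, s in bands)

# (upper_bound, score) bands per parameter -- invented placeholders
TABLES = {
    "pressure_dev_pct":   [(1.0, 0), (3.0, 1), (5.0, 2), (10.0, 3)],
    "sg_level_dev_pct":   [(2.0, 0), (5.0, 1), (8.0, 2), (12.0, 3)],
    "coolant_temp_dev_c": [(1.0, 0), (2.0, 1), (4.0, 2), (6.0, 3)],
}

def plant_ews(readings):
    """Sum of per-parameter band scores: 0 means nominal, higher means
    a worse plant state (the kind of quantitative, real-time training
    signal the study is after)."""
    return sum(band_score(readings[name], bands)
               for name, bands in TABLES.items())

nominal = {"pressure_dev_pct": 0.4, "sg_level_dev_pct": 1.0,
           "coolant_temp_dev_c": 0.5}
degraded = {"pressure_dev_pct": 6.0, "sg_level_dev_pct": 9.0,
            "coolant_temp_dev_c": 7.0}
```

Because every deviation maps to a number, the index is available continuously and can serve as a reward-like criterion for training an automation algorithm.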

  7. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2016-01-01

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these kinds of issues, automation is not actively adopted, although human error probability drastically increases during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques, known as ‘deep learning’ techniques, have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), which is one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of the games. Also, in 2016, ‘Alpha-Go’, which was developed by ‘Google Deepmind’ based on deep learning techniques to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the World Go champion, with a score of 4:1. In the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm which can cover various situations in NPPs. As the first part, a quantitative and real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, and so adoption in the form of full automation will take a long time

  8. Development of simulators algorithms of planar radioactive sources for use in computer models of exposure

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson; Leal Neto, Viriato; Lima Filho, Jose de Melo; Lima, Fernando Roberto de Andrade

    2013-01-01

    This paper presents an algorithm for a planar, isotropic radioactive source, built by subjecting the standard Gaussian probability density function (PDF) to a translation method that displaces its maximum across its domain, changes its intensity, and makes the dispersion around the mean right-asymmetric. The algorithm was used to generate samples of photons emerging from a plane and reaching a semicircle enclosing a voxel phantom. The PDF describing this problem is already known, but the random number generator function (FRN) associated with it cannot be deduced by direct MC techniques. This is a significant problem because the method can be adapted to simulations involving natural terrestrial radiation or accidents in medical establishments or industries where radioactive material spreads in a plane. Some attempts to obtain an FRN for the PDF of this problem have already been implemented by the Research Group in Numerical Dosimetry (GND) from Recife-PE, Brazil, always using the MC rejection sampling technique. This article followed the methodology of previous work, except on one point: the PDF of the problem was replaced by a translated normal PDF. To perform dosimetric comparisons, two MCEs were used: MSTA (Mash standing, composed of the adult male voxel phantom in orthostatic position, MASH (male mesh), available from the Department of Nuclear Energy (DEN) of the Federal University of Pernambuco (UFPE), coupled to the MC EGSnrc code and the GND planar source based on the rejection technique) and MSTA N T. The two MCEs are similar in all respects except the FRN used in the planar source. The results presented and discussed in this paper establish the new algorithm for a planar source to be used by GND
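
The rejection-sampling technique referenced above can be sketched for a right-asymmetric density. A skew-normal density built from the standard Gaussian serves here as an illustrative stand-in for the translated PDF; the proposal bounds and the envelope constant are assumptions:

```python
from math import erf

import numpy as np

erf_vec = np.vectorize(erf)

def skewed_pdf(x, alpha=4.0):
    """Right-asymmetric density built from the standard Gaussian
    (skew-normal form): 2 * phi(x) * Phi(alpha * x)."""
    phi = np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)
    cdf = 0.5 * (1.0 + erf_vec(alpha * x / np.sqrt(2.0)))
    return 2.0 * phi * cdf

def rejection_sample(pdf, n, lo, hi, pdf_max, seed=42):
    """MC rejection sampling: propose x ~ U(lo, hi) and accept it when
    a uniform draw on [0, pdf_max] falls under pdf(x); pdf_max must
    bound the density on [lo, hi]."""
    rng = np.random.default_rng(seed)
    out = np.empty(0)
    while out.size < n:
        x = rng.uniform(lo, hi, size=4 * n)
        u = rng.uniform(0.0, pdf_max, size=4 * n)
        out = np.concatenate([out, x[u < pdf(x)]])
    return out[:n]

samples = rejection_sample(skewed_pdf, 20000, -5.0, 5.0, 0.75)
```

For alpha = 4 the accepted sample is markedly right-leaning, with most draws positive; rejection sampling works for any bounded density, which is why it is the fallback when no direct random-number generator function can be derived.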

  9. Is Uric Acid a Missing Link between Previous Gestational Diabetes Mellitus and the Development of Type 2 Diabetes at a Later Time of Life?

    Science.gov (United States)

    Molęda, Piotr; Fronczyk, Aneta; Safranow, Krzysztof; Majkowska, Lilianna

    2016-01-01

    A high level of uric acid (UA) is a strong, independent risk factor for type 2 diabetes mellitus. The relationship between UA levels and the development of type 2 diabetes in women with previous gestational diabetes mellitus (pGDM) remains unclear. The aim of the study was to evaluate the UA levels in pGDM women in relation to their current nutritional status and carbohydrate metabolism. The study included 199 women with pGDM diagnosed on the basis of oral glucose tolerance tests (OGTTs) 5-12 years previously and a control group of 50 women without pGDM. The assessment included anthropometric parameters, body composition (Tanita SC-330S), a current OGTT, the insulin resistance index (HOMA-IR), β-cell function (HOMA-%B), HbA1c, lipids, and uric acid. No differences between the groups were found in terms of age, time from the index pregnancy, anthropometric parameters, lipids, or creatinine levels. The incidences of overweight and obesity were similar. Carbohydrate abnormalities were more frequent in the pGDM group than in the control group (43.2% vs 12.0% p1). The women with pGDM had significantly higher fasting glucose, HbA1c, glucose and insulin levels in the OGTTs, but similar HOMA-IR values. Their UA levels were significantly higher (258±58 vs 230±50 μmol/L, p1, p1), creatinine level (β = 0.23, 95% CI 0.11-0.35, pdiabetes (β = 0.13, 95% CI 0.01-0.25, pdiabetes) was statistically significant (odds ratio 3.62 [95% CI 1.8-7.3], p1). Higher UA levels may be associated with the development of type 2 diabetes in pGDM women, including those with normal body weights.

  10. A Novel Algorithm (G-JPSO) and Its Development for the Optimal Control of Pumps in Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Rasoul Rajabpour

    2017-01-01

    Recent decades have witnessed growing applications of metaheuristic techniques as efficient tools for solving complex engineering problems. One such method is the JPSO algorithm. In this study, innovative modifications were made to the jump mechanism of the JPSO algorithm to make it capable of coping with graph-based solutions, which led to the development of a new algorithm called ‘G-JPSO’. The new algorithm was then used to solve the Fletcher-Powell optimal control problem, and its application to the optimal control of pumps in water distribution networks was evaluated. Optimal control of pumps consists in finding an optimum operation timetable (on and off) for each of the pumps over the desired time interval. The maximum number of on and off positions for each pump was introduced into the objective function as a constraint such that not only would power consumption at each node be reduced but such problem requirements as the minimum pressure required at each node and minimum/maximum storage tank heights would be met. To determine the optimal operation of pumps, a model-based optimization-simulation algorithm was developed based on the G-JPSO and JPSO algorithms. The model proposed by van Zyl was used to determine the optimal operation of the distribution network. Finally, the results obtained from the proposed algorithm were compared with those obtained from the ant colony, genetic, and JPSO algorithms to show the robustness of the proposed algorithm in finding near-optimum solutions at reasonable computation costs.
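    The pump-scheduling objective described above can be sketched as a penalized cost function over an on/off timetable. The tariff, pressure, and switch-limit numbers below are invented for illustration and are not taken from the paper:

```python
def schedule_cost(schedule, power_kw=75.0, tariff=0.1, max_switches=4,
                  min_pressure=20.0, pressures=None):
    """Energy cost of an on/off pump timetable plus penalties for violating
    the switch-count and minimum-pressure constraints (illustrative numbers)."""
    energy = sum(schedule) * power_kw * tariff           # cost of the 'on' hours
    switches = sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)
    penalty = 0.0
    if switches > max_switches:                          # limit pump wear
        penalty += 1e3 * (switches - max_switches)
    for p in (pressures or []):                          # simulated node pressures
        if p < min_pressure:
            penalty += 1e3 * (min_pressure - p)
    return energy + penalty

# 24-hour timetable: pump off during four evening hours, on otherwise.
timetable = [1] * 18 + [0] * 4 + [1] * 2
cost = schedule_cost(timetable, pressures=[25.0, 22.5, 21.0])
```

In the paper's setup, a hydraulic simulator supplies the node pressures and tank heights for a candidate timetable; here they are passed in directly to keep the sketch self-contained.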

  11. Algorithm development and verification of UASCM for multi-dimension and multi-group neutron kinetics model

    International Nuclear Information System (INIS)

    Si, S.

    2012-01-01

    The Universal Algorithm of Stiffness Confinement Method (UASCM) for neutron kinetics model of multi-dimensional and multi-group transport equations or diffusion equations has been developed. The numerical experiments based on transport theory code MGSNM and diffusion theory code MGNEM have demonstrated that the algorithm has sufficient accuracy and stability. (authors)

  12. Development of a general learning algorithm with applications in nuclear reactor systems

    International Nuclear Information System (INIS)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs
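    A functional relationship of the kind described, predicting critical rod position from a few core-configuration attributes via pattern matching against past fuel cycles, can be sketched as a nearest-centroid lookup. The clusters and all numbers below are invented illustrations, not HFIR data:

```python
import math

# Each cluster of past fuel cycles stores mean attributes
# (initial reactivity, target count, plate exposure) and a mean
# critical rod position (all values invented for illustration).
clusters = [
    {"attrs": (1.2, 4.0, 30.0), "rod_position": 45.0},
    {"attrs": (1.5, 6.0, 10.0), "rod_position": 52.0},
    {"attrs": (0.9, 2.0, 50.0), "rod_position": 40.0},
]

def predict_rod_position(reactivity, n_targets, plate_exposure):
    """Predict by copying the rod position of the closest cluster centroid."""
    x = (reactivity, n_targets, plate_exposure)
    best = min(clusters, key=lambda c: math.dist(x, c["attrs"]))
    return best["rod_position"]

pred = predict_rod_position(1.4, 5.5, 12.0)
```

A real system would normalize the attributes before computing distances; the statistical-analysis step in the paper plays a similar role in weighting the descriptive attributes.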

  13. Development of a general learning algorithm with applications in nuclear reactor systems

    Energy Technology Data Exchange (ETDEWEB)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs.

  14. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00372074; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to a 100 kHz event rate with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thereby reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component of the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...
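    A minimal version of 2D pixel clustering, grouping contiguous fired pixels and computing one centroid per cluster, can be sketched as a plain flood fill. This is the generic technique, not the FTK firmware implementation:

```python
from collections import deque

def cluster_pixels(pixels):
    """Group 8-connected (col, row) pixels and return one centroid per cluster."""
    todo = set(pixels)
    centroids = []
    while todo:
        seed = todo.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:                          # breadth-first flood fill
            c, r = queue.popleft()
            for dc in (-1, 0, 1):
                for dr in (-1, 0, 1):
                    n = (c + dc, r + dr)
                    if n in todo:
                        todo.remove(n)
                        cluster.append(n)
                        queue.append(n)
        cols = [p[0] for p in cluster]
        rows = [p[1] for p in cluster]
        centroids.append((sum(cols) / len(cluster), sum(rows) / len(cluster)))
    return sorted(centroids)

hits = [(0, 0), (1, 0), (1, 1), (10, 10)]     # two separate clusters
cents = cluster_pixels(hits)
```

The hardware algorithm works on a streaming sliding window rather than the whole module at once, which is exactly the trade-off between the "ideal" and realistic implementations discussed in the paper.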

  15. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to a 100 kHz event rate with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thereby reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component of the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...

  16. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.
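    The uniform cubic B-spline interpolation underlying such registration can be sketched as follows in one dimension. This is the textbook basis evaluation, not the paper's grid-aligned GPU kernel:

```python
def bspline_basis(u):
    """The four uniform cubic B-spline basis weights at local coordinate u in [0, 1)."""
    return (
        (1 - u) ** 3 / 6.0,
        (3 * u**3 - 6 * u**2 + 4) / 6.0,
        (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0,
        u**3 / 6.0,
    )

def deform_1d(coeffs, x, spacing):
    """Displacement at position x from control-point coefficients on a uniform grid."""
    i, u = divmod(x / spacing, 1.0)          # control-point index and local coordinate
    i = int(i)
    return sum(w * c for w, c in zip(bspline_basis(u), coeffs[i:i + 4]))

weights = bspline_basis(0.25)                # partition of unity: weights sum to 1
disp = deform_1d([0.0, 1.0, 1.0, 1.0, 0.0], 2.5, spacing=2.0)
```

The grid-alignment scheme in the paper exploits the fact that, when voxels are aligned to the control grid, the local coordinate u repeats across tiles, so the basis products can be precomputed once and reused by every thread.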

  17. Development and implementation of an automatic control algorithm for the University of Utah nuclear reactor

    International Nuclear Information System (INIS)

    Crawford, Kevan C.; Sandquist, Gary M.

    1990-01-01

    The emphasis of this work is the development and implementation of an automatic control philosophy that uses the classical operational philosophies as a foundation. Three control algorithms were derived based on various simplifying assumptions. Two of the algorithms were tested in computer simulations. After the insensitivity of the system to the simplifications was realized, the most reduced form of the algorithms was implemented on the computer control system at the University of Utah (UNEL). Since the operational philosophies have a higher priority than automatic control, they determine when automatic control may be utilized. Unlike the operational philosophies, automatic control is not concerned with component failures. The object of this philosophy is the movement of absorber rods to produce a requested power. When the current power level is compared to the requested power level, an error may be detected which will require the movement of a control rod to correct it. The automatic control philosophy adds another dimension to the classical operational philosophies. Using this philosophy, normal operator interaction with the computer would be limited to run parameters such as power, period, and run time. This eliminates subjective judgements, objective judgements made under pressure, and distractions to the operator, and it ensures that the reactor will be operated in a safe and controlled manner while providing reproducible operations
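    The control loop described, comparing measured power with requested power and moving an absorber rod to correct the error, can be sketched as a simple proportional controller. The gain, deadband, and step limits below are invented for illustration and are not the paper's derived algorithms:

```python
def rod_step(requested_kw, measured_kw, gain=0.02, max_step=1.0, deadband_kw=2.0):
    """Return a rod movement (positive = withdraw) proportional to the power error,
    clamped to a maximum step and suppressed inside a deadband (illustrative numbers)."""
    error = requested_kw - measured_kw
    if abs(error) < deadband_kw:             # close enough: leave the rod alone
        return 0.0
    step = gain * error
    return max(-max_step, min(max_step, step))

# Power rising toward a 100 kW request: large early moves, none near target,
# and an insertion (negative move) on overshoot.
moves = [rod_step(100.0, p) for p in (40.0, 90.0, 99.0, 130.0)]
```

An operational-philosophy layer with higher priority, as in the paper, would sit above this function and veto or override the computed move whenever its own conditions are not met.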

  18. Development and Validation of a Diabetic Retinopathy Referral Algorithm Based on Single-Field Fundus Photography.

    Directory of Open Access Journals (Sweden)

    Sangeetha Srinivasan

    (i) To develop a simplified algorithm to identify and refer diabetic retinopathy (DR) from single-field retinal images, specifically sight-threatening diabetic retinopathy, for appropriate care, and (ii) to determine the agreement and diagnostic accuracy of the algorithm as a pilot study among optometrists versus "gold standard" (retinal specialist) grading. The severity of DR was scored based on a colour photo using a colour-coded algorithm, which included the lesions of DR and the number of quadrants involved. A total of 99 participants underwent training followed by evaluation, and their data were analyzed. Fifty posterior pole 45-degree retinal images with all stages of DR were presented. Kappa scores (κ), areas under the receiver operating characteristic curves (AUCs), sensitivity and specificity were determined, with further comparison between working optometrists and optometry students. The mean age of the participants was 22 years (range: 19-43 years), 87% being women. Participants correctly identified 91.5% of images that required immediate referral (κ = 0.696), 62.5% of images requiring review after 6 months (κ = 0.462), and 51.2% of those requiring review after 1 year (κ = 0.532). The sensitivity and specificity of the optometrists were 91% and 78% for immediate referral, 62% and 84% for review after 6 months, and 51% and 95% for review after 1 year, respectively. The AUC was highest (0.855) for immediate referral, second highest (0.824) for review after 1 year, and 0.727 for review after 6 months. Optometry students performed better than the working optometrists for all grades of referral. The diabetic retinopathy algorithm assessed in this work is a simple and fairly accurate method for appropriate referral based on single-field 45-degree posterior pole retinal images.
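    The agreement statistics reported above can be computed from a confusion table. The sketch below uses the generic two-class formulas on invented counts (the study itself grades three referral categories, so its kappas come from a larger table):

```python
def binary_agreement(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa for a 2x2 grading table."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    p_obs = (tp + tn) / n                    # observed agreement
    # Chance agreement: product of marginal 'refer'/'no refer' rates, summed.
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_chance) / (1 - p_chance)
    return sensitivity, specificity, kappa

# Invented counts chosen to echo the immediate-referral operating point.
sens, spec, kappa = binary_agreement(tp=91, fp=22, fn=9, tn=78)
```

Kappa discounts the agreement expected by chance, which is why it can be modest even when raw percent agreement looks high.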

  19. Algorithm for evaluating the effectiveness of a high-rise development project based on current yield

    Science.gov (United States)

    Soboleva, Elena

    2018-03-01

    The article addresses the operational evaluation of development-project efficiency in high-rise construction under the current economic conditions in Russia. The author touches on the following issues: problems in implementing development projects; the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency; assessment of the influence of a project's external environment on the effectiveness of project activities under crisis conditions; and the quality of project management. The article proposes an algorithm and a methodological approach to quality management of developer-project efficiency based on operational evaluation of current yield. The methodology for calculating the current efficiency of a development project for high-rise construction has been updated.

  20. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    Science.gov (United States)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.
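    Release logic of the kind described, monitoring the mated vehicle's attitude and commanding separation only inside an acceptable window, with a contingency release before undesirable orientations develop, can be sketched as follows. All thresholds are invented for illustration; the actual CPAS criteria are not reproduced here:

```python
def release_command(pitch_deg, pitch_rate_dps, time_since_extraction_s):
    """Decide whether to command test-article release (illustrative thresholds).

    Nominal: release inside a pitch / pitch-rate window.
    Contingency: force release on extreme attitude or timeout, before an
    undesirable orientation can develop.
    """
    nominal = -10.0 <= pitch_deg <= 5.0 and abs(pitch_rate_dps) <= 20.0
    contingency = pitch_deg < -60.0 or time_since_extraction_s > 8.0
    return nominal or contingency

decisions = [
    release_command(0.0, 5.0, 2.0),     # in the window: release
    release_command(30.0, 5.0, 2.0),    # nose-high: wait
    release_command(30.0, 5.0, 9.0),    # timeout contingency: release anyway
]
```

Because neither vehicle is actively controlled, the avionics can only watch the (somewhat predictable) attitude history and pick its moment, which is what the window-plus-contingency structure captures.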

  1. The Development of Advanced Processing and Analysis Algorithms for Improved Neutron Multiplicity Measurements

    International Nuclear Information System (INIS)

    Santi, P.; Favalli, A.; Hauck, D.; Henzl, V.; Henzlova, D.; Ianakiev, K.; Iliev, M.; Swinhoe, M.; Croft, S.; Worrall, L.

    2015-01-01

    One of the most distinctive and informative signatures of special nuclear materials is the emission of correlated neutrons from either spontaneous or induced fission. Because the emission of correlated neutrons is a unique and unmistakable signature of nuclear materials, the ability to effectively detect, process, and analyze these emissions will continue to play a vital role in the non-proliferation, safeguards, and security missions. While currently deployed neutron measurement techniques based on 3He proportional counter technology, such as the neutron coincidence and multiplicity counters used by the International Atomic Energy Agency, have proven to be effective over the past several decades for a wide range of measurement needs, a number of technical and practical limitations exist in continuing to apply this technique to future measurement needs. In many cases, those limitations exist within the algorithms used to process and analyze the detected signals from these counters, which were initially developed approximately 20 years ago based on the technology and computing power available at that time. Over the past three years, an effort has been undertaken to address the general shortcomings in these algorithms by developing new algorithms based on fundamental physics principles that should lead to more sensitive neutron non-destructive assay instrumentation. Through this effort, a number of advancements have been made in correcting incoming data for electronic dead time, in connecting the two main types of analysis techniques used to quantify the data (shift-register analysis and Feynman variance-to-mean analysis), and in the underlying physical model, known as the point model, that is used to interpret the data in terms of the characteristic properties of the item being measured. The current status of the testing and evaluation of these advancements in correlated neutron analysis techniques will be discussed
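    The Feynman variance-to-mean analysis mentioned above reduces, for a single gate width, to the excess of the count variance over its Poisson expectation. A minimal version of the statistic (generic form, without the authors' dead-time corrections):

```python
def feynman_y(gate_counts):
    """Feynman-Y: variance-to-mean ratio of per-gate neutron counts, minus 1.
    Y = 0 for Poisson (uncorrelated) counts; Y > 0 signals correlated fission chains."""
    n = len(gate_counts)
    mean = sum(gate_counts) / n
    var = sum((c - mean) ** 2 for c in gate_counts) / (n - 1)   # sample variance
    return var / mean - 1.0

steady = [3, 4, 2, 5, 3, 4, 3, 4]            # nearly uniform counts: Y near or below 0
bursty = [0, 0, 12, 0, 0, 11, 0, 0]          # correlated bursts: strongly positive Y
y_steady = feynman_y(steady)
y_bursty = feynman_y(bursty)
```

In practice Y is evaluated as a function of gate width and fitted to extract the correlated-emission parameters; the single-gate computation above is the building block of that curve.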

  2. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

    Background: Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPUs) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results: We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU, including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions: We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a

  3. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Following abdominal surgery, extensive adhesions often occur, and they can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered to be a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veres needle in the region of the umbilicus is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section or abdominal war injuries were the most common causes of previous laparotomy. During those operations, and while entering the abdominal cavity, we did not experience any complications, while in 7 patients we performed conversion to laparotomy following the diagnostic laparoscopy. In all patients, insertion of the Veres needle and the trocar was performed in the umbilical region, namely the closed laparoscopy technique. Adhesions in the region of the umbilicus were not found in a single patient, and no abdominal organs were injured.

  4. METHODOLOGICAL GROUNDS ABOUT ALGORITHM OF DEVELOPMENT ORGANIZATIONAL ANALYSIS OF RAILWAYS OPERATION

    Directory of Open Access Journals (Sweden)

    H. D. Eitutis

    2010-12-01

    It was established that the initial stage of reorganization is to run a diagnostic of the enterprise, on the basis of which a decision on the development of an algorithm for structural transformations is made. Organizational and management analysis is an important component of this diagnostic and is aimed at defining the methods and principles of the enterprise management system. The results of the organizational analysis carried out make it possible to identify the problems and bottlenecks in the system of strategic management of Ukrainian railways as a whole and in the different directions of their economic activities.

  5. Developing the algorithm for assessing the competitive abilities of functional foods in marketing

    Directory of Open Access Journals (Sweden)

    Nilova Liudmila

    2017-01-01

    A thorough analysis of the competitive factors of functional foods has made it possible to develop an algorithm for assessing the competitiveness of functional food products with respect to their essential consumer features: quality, safety and functionality. Questionnaires completed by experts and the published results of consumer surveys from different countries were used to select the essential consumer features of functional foods. A "desirability of consumer features" model triangle, based on functional bread and bakery products, was constructed with the use of the Harrington function.
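    The Harrington function referred to above maps a coded feature score onto a 0-1 desirability scale via a double exponential, and individual desirabilities are commonly aggregated with a geometric mean. The coded scores below are invented; how the paper codes quality, safety and functionality onto that scale is not reproduced here:

```python
import math

def harrington(y):
    """One-sided Harrington desirability d = exp(-exp(-y)) on the coded scale y."""
    return math.exp(-math.exp(-y))

def overall_desirability(scores):
    """Aggregate individual desirabilities with a geometric mean."""
    ds = [harrington(y) for y in scores]
    return math.prod(ds) ** (1.0 / len(ds))

# Coded scores for quality, safety, functionality (illustrative values).
d = overall_desirability([1.5, 2.0, 0.5])
```

The value d = exp(-1) ≈ 0.37 at y = 0 is the conventional boundary between "satisfactory" and "unsatisfactory" on the Harrington scale, which is what makes the function convenient for comparing products.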

  6. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools, enabling true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  7. Development of an improved genetic algorithm and its application in the optimal design of ship nuclear power system

    International Nuclear Information System (INIS)

    Jia Baoshan; Yu Jiyang; You Songbo

    2005-01-01

    This article focuses on the development of an improved genetic algorithm and its application in the optimal design of a ship nuclear reactor system, the goal being to find a combination of system parameter values that minimizes the mass or volume of the system given the power capacity requirement and safety criteria. An improved genetic algorithm (IGA) was developed using an 'average fitness value' grouping + 'specified survival probability' rank selection method and a 'separate-recombine' duplication operator. Combined with a simulated annealing algorithm (SAA) that continues the local search after the IGA reaches a satisfactory point, the algorithm gave satisfactory optimization results from both the search-efficiency and accuracy perspectives. This IGA-SAA algorithm successfully solved the design optimization problem of the ship nuclear power system. It is an advanced and efficient methodology that can be applied to similar optimization problems in other areas. (authors)
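    A hybrid of the kind described, a genetic algorithm for global search handing its best candidate to simulated annealing for local refinement, can be sketched on a toy one-dimensional objective. All operators and parameters below are generic illustrations, not the paper's IGA-SAA:

```python
import random

def fitness(x):
    """Toy objective: minimise (x - 3)^2, expressed as a fitness to maximise."""
    return -(x - 3.0) ** 2

def ga_stage(pop_size=30, gens=40, rng=random):
    """Plain GA: rank selection, averaging crossover, Gaussian mutation."""
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [0.5 * (rng.choice(survivors) + rng.choice(survivors))
                    + rng.gauss(0, 0.3)
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

def sa_stage(x, steps=200, temp=1.0, cooling=0.97, rng=random):
    """Simulated annealing refinement that only ever returns an improvement."""
    best = x
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.1)
        if fitness(cand) > fitness(x) or rng.random() < temp:
            x = cand                          # occasionally accept uphill moves
        temp *= cooling
        if fitness(x) > fitness(best):
            best = x
    return best

random.seed(0)
solution = sa_stage(ga_stage())               # GA global search, then SA polish
```

The division of labour mirrors the paper's design choice: the GA explores broadly, and the annealer continues the local search from the GA's satisfactory point.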

  8. Collaboration space division in collaborative product development based on a genetic algorithm

    Science.gov (United States)

    Qian, Xueming; Ma, Yanqiao; Feng, Huan

    2018-02-01

    Advances in the global environment, rapidly changing markets, and information technology have created a new stage for design. In such an environment, one strategy for success is Collaborative Product Development (CPD). Organizing people effectively is the goal of Collaborative Product Development, and it solves the problem with a certain foreseeability. The development group's activities are influenced not only by the methods and decisions available, but also by the correlations among personnel. Grouping the personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and the minimum description length (MDL) principle may be used as tools in optimizing the collaboration space. The MDL principle is used to set up the objective function, and the GA is used as the optimization method. The algorithm encodes spatial information as a binary chromosome. After repeated crossover, mutation, selection and reproduction, a robust chromosome is found, which can be decoded into an optimal collaboration space. This new method can calculate the members in sub-spaces and individual groupings within the staff. Furthermore, the intersection of sub-spaces and the public persons belonging to all sub-spaces can be determined simultaneously.
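    Encoding a collaboration-space division as a binary chromosome can be sketched as follows: each bit assigns one person to one of two sub-spaces, and a score rewards grouping highly correlated people together. The correlation matrix and the scoring rule are invented illustrations; the paper's MDL objective is more involved:

```python
# Correlation matrix for 4 people (symmetric, illustrative values).
CM = [
    [1.0, 0.9, 0.1, 0.2],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.2, 0.1, 0.8, 1.0],
]

def decode(chromosome):
    """Bit i places person i in sub-space 0 or 1."""
    groups = {0: [], 1: []}
    for person, bit in enumerate(chromosome):
        groups[bit].append(person)
    return groups

def score(chromosome):
    """Mean pairwise correlation among people sharing a sub-space."""
    groups = decode(chromosome)
    pairs = [CM[a][b]
             for members in groups.values()
             for i, a in enumerate(members)
             for b in members[i + 1:]]
    return sum(pairs) / len(pairs) if pairs else 0.0

# With only 4 bits the chromosome space is small enough to enumerate;
# a GA would search this space instead of enumerating it.
best = max(range(16), key=lambda n: score([n >> k & 1 for k in range(4)]))
best_groups = decode([best >> k & 1 for k in range(4)])
```

For this matrix the best division pairs persons 0-1 and 2-3, matching the two highly correlated blocks.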

  9. A Comprehensive Training Data Set for the Development of Satellite-Based Volcanic Ash Detection Algorithms

    Science.gov (United States)

    Schmidl, Marius

    2017-04-01

    We present a comprehensive training data set covering a large range of atmospheric conditions, including disperse volcanic ash and desert dust layers. These data sets contain all the information required for the development of volcanic ash detection algorithms based on artificial neural networks, which are urgently needed since volcanic ash in the airspace is a major concern of aviation safety authorities. Selected parts of the data are used to train the volcanic ash detection algorithm VADUGS. They contain atmospheric and surface-related quantities as well as the corresponding simulated satellite data for the channels in the infrared spectral range of the SEVIRI instrument on board MSG-2. To obtain realistic results, ECMWF, IASI-based, and GEOS-Chem data are used to calculate all parameters describing the environment, whereas the software package libRadtran is used to perform radiative transfer simulations returning the brightness temperatures for each atmospheric state. As optical properties are a prerequisite for radiative simulations accounting for aerosol layers, the development also included the computation of optical properties for a set of different aerosol types from different sources. A description of the developed software and the methods used is given, together with an overview of the resulting data sets.

  10. Is Uric Acid a Missing Link between Previous Gestational Diabetes Mellitus and the Development of Type 2 Diabetes at a Later Time of Life?

    Directory of Open Access Journals (Sweden)

    Piotr Molęda

    A high level of uric acid (UA) is a strong, independent risk factor for type 2 diabetes mellitus. The relationship between UA levels and the development of type 2 diabetes in women with previous gestational diabetes mellitus (pGDM) remains unclear. The aim of the study was to evaluate the UA levels in pGDM women in relation to their current nutritional status and carbohydrate metabolism. 199 women with pGDM diagnoses based on oral glucose tolerance tests (OGTTs) 5-12 years previously and a control group of 50 women without pGDM. The assessment included anthropometric parameters, body composition (Tanita SC-330S), current OGTT, insulin resistance index (HOMA-IR), β-cell function (HOMA-%B), HbA1c, lipids, and uric acid. No differences between groups were found in terms of age, time from the index pregnancy, anthropometric parameters, lipids or creatinine levels. The incidences of overweight and obesity were similar. Carbohydrate abnormalities were more frequent in the pGDM group than the control group (43.2% vs 12.0%, p<0.001). The women with pGDM had significantly higher fasting glucose, HbA1c, glucose and insulin levels in the OGTTs, but similar HOMA-IR values. Their UA levels were significantly higher (258±58 vs 230±50 μmol/L, p<0.005) and correlated with BMI and the severity of carbohydrate disorders. The normal weight and normoglycemic pGDM women also demonstrated higher UA levels than a similar control subgroup (232±48 vs 208±48 μmol/L, p<0.05). Multivariate analysis revealed significant correlations of UA level with BMI (β = 0.38, 95% CI 0.25-0.51, p<0.0001), creatinine level (β = 0.23, 95% CI 0.11-0.35, p<0.0005), triglycerides (β = 0.20, 95% CI 0.07-0.33, p<0.005) and family history of diabetes (β = 0.13, 95% CI 0.01-0.25, p<0.05). In logistic regression analysis, the association between higher UA level (defined as value ≥297 μmol/L) and presence of any carbohydrate metabolism disorder (IFG, IGT or diabetes) was statistically significant (odds

  11. The Possibility to Use Genetic Algorithms and Fuzzy Systems in the Development of Tutorial Systems

    Directory of Open Access Journals (Sweden)

    Anca Ioana ANDREESCU

    2006-01-01

    In this paper we present state-of-the-art information methods and techniques that can be applied in the development of efficient tutorial systems, as well as the possibility of using genetic algorithms and fuzzy systems in the construction of such systems. All these topics were studied during the research project INFOSOC entitled "Tutorial System based on Eduknowledge for Work Security and Health in SMEs According to the European Union Directives", accomplished by teaching staff from the Academy of Economic Studies, Bucharest, in collaboration with the National Institute for Research and Development in Work Security, the National Institute for Small and Middle Enterprises and SC Q’NET International srl.

  12. China's experimental pragmatics of "Scientific development" in wind power: Algorithmic struggles over software in wind turbines

    DEFF Research Database (Denmark)

    Kirkegaard, Julia

    2016-01-01

    This article presents a case study on the development of China's wind power market. As China's wind industry has experienced a quality crisis, the Chinese government has intervened to steer the industry towards a turn to quality, indicating a pragmatist and experimental mode of market development ... This increased focus on quality, to ensure the sustainable and scientific development of China's wind energy market, requires improved indigenous Chinese innovation capabilities in wind turbine technology. To shed light on how the turn to quality impacts upon the industry and global competition, this study ... unfold over issues associated with intellectual property rights (IPRs), certification and standardisation of software algorithms. The article concludes that the use of this STS lens makes a fresh contribution to the often path-dependent, structuralist and hierarchical China literature, offering instead ...

  13. Conceptual aspects: analyses of legal, ethical, human, technical and social factors of ICT development, e-learning and intercultural development in different countries, setting out the previous and new theoretical model and preliminary findings

    NARCIS (Netherlands)

    Kommers, Petrus A.M.; Smyrnova-Trybulska, Eugenia; Morze, Natalia; Issa, Tomayess; Issa, Theodora

    2015-01-01

This paper, prepared by an international team of authors, focuses on the conceptual aspects: analyses of legal, ethical, human, technical and social factors of ICT development, e-learning and intercultural development in different countries, setting out the previous and new theoretical model and preliminary findings.

  14. Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement (ADVANCE) Technology Development for Resilient Flight Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI proposes to develop and test a framework referred to as the ADVANCE (Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement), within which...

  15. Development of accurate standardized algorithms for conversion between SRP grid coordinates and latitude/longitude

    International Nuclear Information System (INIS)

    Looney, B.B.; Marsh, J.T. Jr.; Hayes, D.W.

    1987-01-01

The Savannah River Plant (SRP) is a nuclear production facility operated by E.I. du Pont de Nemours and Co. for the United States Department of Energy. SRP is located along the Savannah River in South Carolina. Construction of SRP began in the early 1950s. At the time the plant was built, a local coordinate system was developed to assist in defining the locations of plant facilities. Over the years, large quantities of data have been developed using ''SRP Coordinates.'' These data include: building locations, plant boundaries, environmental sampling locations, waste disposal area locations, and a wide range of other geographical information. Currently, staff at SRP are organizing these data into automated information systems to allow more rapid, more robust and higher quality interpretation, interchange and presentation of spatial data. A key element in this process is the ability to incorporate outside databases into the systems, as well as to share SRP data with interested organizations outside of SRP. Most geographical information outside of SRP is organized using latitude and longitude. Thus, straightforward, accurate and consistent algorithms to convert SRP Coordinates to/from latitude and longitude are needed. Appropriate algorithms are presented in this document.

  16. Development of GIS Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    Science.gov (United States)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network that contains all of its nodes and whose sum of edge weights is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the well-known Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential, as otherwise it is very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper addresses the MST problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems and location-allocation problems. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software to solve the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and provides varied information adapted to the needs of the users. This GIS tool for MST can be applied to the nationwide Prime Minister Gram Sadak Yojana plan in India to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several …
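The matrix-based Prim's algorithm the abstract builds on can be sketched in a few lines of plain Python (the ArcGIS/arcpy tooling itself is not reproduced; the 4-junction network and its weights below are illustrative):

```python
import math

def prim_mst(weights):
    """Prim's algorithm on a weight (adjacency) matrix.

    weights[i][j] is the edge weight between nodes i and j, or math.inf
    if no edge exists. Returns the MST as a list of (u, v, weight)
    edges together with the total span (sum of edge weights)."""
    n = len(weights)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest known weight connecting each node to the tree
    parent = [-1] * n
    best[0] = 0.0           # grow the tree from node 0
    edges = []
    for _ in range(n):
        # pick the cheapest node not yet in the tree
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u, weights[parent[u]][u]))
        # relax the connection cost of every node still outside the tree
        for v in range(n):
            if not in_tree[v] and weights[u][v] < best[v]:
                best[v] = weights[u][v]
                parent[v] = u
    total = sum(w for _, _, w in edges)
    return edges, total

# A small road network: 4 junctions, weights = travel cost (time or distance)
INF = math.inf
w = [[INF, 4, 1, INF],
     [4, INF, 2, 5],
     [1, 2, INF, 8],
     [INF, 5, 8, INF]]
edges, span = prim_mst(w)   # MST edges: 0-2 (1), 2-1 (2), 1-3 (5); span = 8
```

The same matrix-driven growth step is what makes the approach convenient inside a GIS script: the network needs only to be exported as its weight matrix.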

  17. Transition of Care Practices from Emergency Department to Inpatient: Survey Data and Development of Algorithm

    Directory of Open Access Journals (Sweden)

    Sangil Lee

    2017-01-01

We aimed to assess the current scope of handoff education and practice among resident physicians in academic centers and to propose a standardized handoff algorithm for the transition of care from the emergency department (ED) to an inpatient setting. This was a cross-sectional survey targeted at the program directors, associate or assistant program directors, and faculty members of emergency medicine (EM) residency programs in the United States (U.S.). The web-based survey was distributed to potential subjects through a listserv. A panel of experts used a modified Delphi approach to develop a standardized algorithm for ED-to-inpatient handoff. 121 of 172 programs responded to the survey, for an overall response rate of 70.3%. Our survey showed that most EM programs in the U.S. have some form of handoff training, and the majority of them occur either during orientation or in the clinical setting. The handoff structure from ED to inpatient is not well standardized, and in those places with a formalized handoff system, over 70% of residents do not uniformly follow it. Approximately half of responding programs felt that their current handoff system was safe and effective. About half of the programs did not formally assess the handoff proficiency of trainees. Handoffs most commonly take place over the phone, though respondents disagree about the ideal place for a handoff to occur, with nearly equivalent responses between programs favoring the bedside over the phone or face-to-face on a computer. Approximately two-thirds of responding programs reported that their residents were competent in performing ED-to-inpatient handoffs. Based on this survey and on a review of the literature, we developed a five-step algorithm for the transition of care from the ED to the inpatient setting. Our results identified the current trends of education and practice in transitions of care from the ED to the inpatient setting in U.S. academic medical centers.

  18. THE ALGORITHM OF THE CASE FORMATION DURING THE DEVELOPMENT OF CLINICAL DISCIPLINES IN MEDICAL SCHOOL

    Directory of Open Access Journals (Sweden)

    Andrey A. Garanin

    2016-01-01

The aim of the study is to develop an algorithm for case formation in the discipline «Clinical Medicine». Methods. The methods involve an effectiveness analysis of the self-diagnosed levels of professional and personal abilities of students in the process of self-study. Results. The article deals with the organization of students' independent work using the case method, which is one of the most important and complex active learning methods. When implementing the case-analysis method in the educational process, the main job of the teacher is focused on the development of individual cases. While developing a case study of a medical character, the teacher needs to pay special attention to questions of pathogenesis and pathological anatomy, so that students form the fundamental clinical thinking that allows them to assess the patient's condition as a complete organism, taking into account all its features; to understand the cause-and-effect relationships arising in the development of a concrete disease; and to master new, and improve already available, techniques for making a differential diagnosis. Scientific novelty and practical significance. The structure of a medical case study to be followed in the development of cases in the discipline «Clinical Medicine» is proposed. A unified case-formation algorithm is necessary for the full introduction of the case-analysis method, one of the most effective active learning methods, into the educational process of higher medical schools, in accordance with the requirements imposed on higher professional education by modern reforms and, in particular, by the new Federal State Educational Standards.

  19. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.

  20. [Development and validation of an algorithm to identify cancer recurrences from hospital data bases].

    Science.gov (United States)

    Manzanares-Laya, S; Burón, A; Murta-Nascimento, C; Servitja, S; Castells, X; Macià, F

    2014-01-01

Hospital cancer registries and hospital databases are valuable and efficient sources of information for research into cancer recurrences. The aim of this study was to develop and validate algorithms for the detection of breast cancer recurrence. A retrospective observational study was conducted on breast cancer cases diagnosed between 2003 and 2009 from the cancer registry of a tertiary university hospital. Different probable cancer recurrence algorithms were obtained by linking the hospital databases and constructing several operational definitions, each with its corresponding sensitivity, specificity, positive predictive value and negative predictive value. A total of 1,523 patients were diagnosed with breast cancer between 2003 and 2009. A request for bone gammagraphy more than 6 months after the first oncological treatment showed the highest sensitivity (53.8%) and negative predictive value (93.8%), and a pathology test more than 6 months after diagnosis showed the highest specificity (93.8%) and a negative predictive value of 92.6%. Combining different definitions increased the specificity and the positive predictive value, but decreased the sensitivity. Several diagnostic algorithms were obtained, and the different definitions could be useful depending on the interest and resources of the researcher. A higher positive predictive value could be of interest for a quick estimation of the number of cases, and a higher negative predictive value for a more exact estimation if more resources are available. It is a versatile tool, adaptable to other types of tumors and to the needs of the researcher. Copyright © 2014 SECA. Published by Elsevier España. All rights reserved.
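The validation metrics reported above follow from a simple confusion matrix between the algorithm's flags and the registry's gold standard. A minimal sketch, using an illustrative toy cohort rather than the study's data:

```python
def diagnostic_metrics(flagged, truth):
    """Sensitivity, specificity, PPV and NPV from two parallel 0/1 lists:
    algorithm flags vs. gold-standard recurrence status."""
    tp = sum(f and t for f, t in zip(flagged, truth))          # true positives
    fp = sum(f and not t for f, t in zip(flagged, truth))      # false positives
    fn = sum(not f and t for f, t in zip(flagged, truth))      # false negatives
    tn = sum(not f and not t for f, t in zip(flagged, truth))  # true negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# toy cohort: 10 patients, 4 true recurrences
truth   = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
flagged = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]  # e.g. a "bone scan > 6 months" rule
m = diagnostic_metrics(flagged, truth)
# sensitivity 0.50, specificity 5/6, PPV 2/3, NPV 5/7
```

Combining rules with a logical AND raises specificity and PPV at the cost of sensitivity, which is the trade-off the abstract describes.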

  1. Development of a Machine Learning Algorithm for the Surveillance of Autism Spectrum Disorder.

    Directory of Open Access Journals (Sweden)

    Matthew J Maenner

The Autism and Developmental Disabilities Monitoring (ADDM) Network conducts population-based surveillance of autism spectrum disorder (ASD) among 8-year-old children in multiple US sites. To classify ASD, trained clinicians review developmental evaluations collected from multiple health and education sources to determine whether the child meets the ASD surveillance case criteria. The number of evaluations collected has dramatically increased since the year 2000, challenging the resources and timeliness of the surveillance system. We developed and evaluated a machine learning approach to classify case status in ADDM using words and phrases contained in children's developmental evaluations. We trained a random forest classifier using data from the 2008 Georgia ADDM site, which included 1,162 children with 5,396 evaluations (601 children met ADDM ASD criteria using standard ADDM methods). The classifier used the words and phrases from the evaluations to predict ASD case status. We evaluated its performance on the 2010 Georgia ADDM surveillance data (1,450 children with 9,811 evaluations; 754 children met ADDM ASD criteria). We also estimated ASD prevalence using predictions from the classification algorithm. Overall, the machine learning approach predicted ASD case statuses that were 86.5% concordant with the clinician-determined case statuses (84.0% sensitivity, 89.4% positive predictive value). The area under the resulting receiver-operating characteristic curve was 0.932. Algorithm-derived ASD "prevalence" was 1.46%, compared to the published (clinician-determined) estimate of 1.55%. Using only the text contained in developmental evaluations, a machine learning algorithm was able to discriminate between children that do and do not meet ASD surveillance criteria at one surveillance site.

  2. Developing the surveillance algorithm for detection of failure to recognize and treat severe sepsis.

    Science.gov (United States)

    Harrison, Andrew M; Thongprayoon, Charat; Kashyap, Rahul; Chute, Christopher G; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly

    2015-02-01

To develop and test an automated surveillance algorithm (sepsis "sniffer") for the detection of severe sepsis and for monitoring failure to recognize and treat severe sepsis in a timely manner. We conducted an observational diagnostic performance study using independent derivation and validation cohorts from an electronic medical record database of the medical intensive care unit (ICU) of a tertiary referral center. All patients aged 18 years and older who were admitted to the medical ICU from January 1 through March 31, 2013 (N=587), were included. The criterion standard for severe sepsis/septic shock was manual review by 2 trained reviewers, with a third super-reviewer for cases of interobserver disagreement. Critical appraisal of false-positive and false-negative alerts, along with recursive data partitioning, was performed for algorithm optimization. An algorithm based on criteria for suspicion of infection, systemic inflammatory response syndrome, organ hypoperfusion and dysfunction, and shock had a sensitivity of 80% and a specificity of 96% when applied to the validation cohort. Recursive data partitioning determined low systolic blood pressure, systemic inflammatory response syndrome positivity, and suspicion of infection (in that order) to be of greatest predictive value. Lastly, 117 alert-positive patients (68% of the 171 patients with severe sepsis) had a delay in recognition and treatment, defined as no lactate and central venous pressure measurement within 2 hours of the alert. The optimized sniffer accurately identified patients with severe sepsis that bedside clinicians failed to recognize and treat in a timely manner. Copyright © 2015 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
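A rule-based sniffer of this kind combines suspicion of infection, SIRS criteria and evidence of hypoperfusion into a single alert. The sketch below uses the standard SIRS cut-offs, but the hypoperfusion thresholds are illustrative stand-ins; it is a conceptual sketch, not the validated algorithm from the study:

```python
def sirs_count(temp_c, hr, rr, wbc):
    """Count systemic inflammatory response syndrome (SIRS) criteria:
    temperature, heart rate, respiratory rate, white blood cell count."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,   # fever or hypothermia
        hr > 90,                          # tachycardia (beats/min)
        rr > 20,                          # tachypnea (breaths/min)
        wbc > 12.0 or wbc < 4.0,          # leukocytosis or leukopenia (x10^9/L)
    ])

def sepsis_alert(suspected_infection, temp_c, hr, rr, wbc, sbp, lactate):
    """Fire when suspected infection coincides with SIRS (>= 2 criteria)
    and evidence of hypoperfusion (hypotension or elevated lactate).
    The sbp < 90 mmHg and lactate > 4.0 mmol/L cut-offs are illustrative."""
    sirs = sirs_count(temp_c, hr, rr, wbc) >= 2
    hypoperfusion = sbp < 90 or lactate > 4.0
    return bool(suspected_infection and sirs and hypoperfusion)

# febrile, tachycardic, tachypneic, hypotensive patient with suspected infection
alert = sepsis_alert(True, 38.5, 110, 24, 14.0, 85, 3.0)
```

In an EMR-driven system such a predicate would be evaluated continuously against streaming vitals and laboratory results, with the alert timestamp used to measure recognition delay.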

  3. Development of a computational algorithm for the linearization of decay and transmutation chains

    International Nuclear Information System (INIS)

    Cruz L, C. A.; Francois L, J. L.

    2017-09-01

One of the most widely used methodologies to solve the Bateman equations in burnup problems is the TTA (Transmutation Trajectory Analysis) method. In this method, a decay network is broken down into linear elements known as trajectories, through a process known as linearization. In this work an alternative algorithm is shown to find and construct these trajectories, which considers three aspects of linearization: the a priori information about the elements that make up the decay and transmutation network, the use of a new notation, and the functions for the treatment of text strings (which are common in most programming languages). One of the main advantages of the algorithm is that it can condense the information of a decay and transmutation network into only two vectors. From these it is possible to determine how many linear chains can be extracted from the network and even their length (in the case they are not cyclical). Unlike the Depth-First Search method, which is widely used for the linearization process, the method proposed in the present work does not have a backward routine; instead it uses a compilation process, since it completes chain fragments instead of going back to the beginning of the trajectories. The developed algorithm can be applied in a general way to information search and to the linearization of the computational data structures known as trees. It can also be applied to engineering problems where one seeks to calculate the concentration of some substance as a function of time, starting from linear balance differential equations. (Author)
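For reference, the conventional recursive (depth-first) linearization that the proposed compilation-based method is contrasted with can be sketched as follows; the branching network below is a hypothetical toy, not a real decay chain:

```python
def linearize(network, start):
    """Enumerate the linear chains (trajectories) of an acyclic
    decay/transmutation network, TTA-style.

    network maps each nuclide to the list of its decay/transmutation
    products; every path from `start` to a terminal nuclide is one
    linear chain."""
    daughters = network.get(start, [])
    if not daughters:
        return [[start]]          # terminal nuclide: chain ends here
    chains = []
    for d in daughters:
        for tail in linearize(network, d):
            chains.append([start] + tail)   # prepend parent to each sub-chain
    return chains

# toy branching network: A -> B, C;  B -> D;  C -> D
net = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
chains = linearize(net, "A")
# two linear chains: A-B-D and A-C-D
```

Each resulting linear chain can then be solved analytically with the Bateman solution; the paper's contribution is avoiding this backtracking recursion (and handling cycles) via its two-vector encoding, which is not reproduced here.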

  4. Analysis and Development of Walking Algorithm Kinematic Model for 5-Degree of Freedom Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Gerald Wahyudi Setiono

    2012-12-01

A walking diagram and the corresponding calculations for a bipedal robot have been developed. The bipedal robot was designed and constructed with several kinds of servo brackets for the legs, two feet and a hip. Each leg of the bipedal robot had 5 degrees of freedom: three pitches (hip joint, knee joint and ankle joint) and two rolls (hip joint and ankle joint). The walking algorithm of this bipedal robot was based on the law of cosines, applied to the triangle formed by the leg links, to get the angle value at each joint. The hip height, the height of the swinging leg and the step distance are derived based on linear equations. This paper discusses the kinematic model analysis and the development of the walking diagram of the bipedal robot. Kinematic equations were derived, and the joint angles were simulated and coded into an Arduino board to be executed on the robot.
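The law-of-cosines step at the heart of such a walking algorithm computes the knee angle from the thigh length, the shank length and the hip-to-ankle distance. A minimal sketch (the link lengths below are hypothetical; the paper's full 5-DOF solution also assigns the hip and ankle pitches and rolls):

```python
import math

def knee_angle(l1, l2, d):
    """Interior knee angle (radians) of a two-link leg, from the law of
    cosines: the thigh (l1), shank (l2) and the hip-to-ankle line (d)
    form a triangle, so  d^2 = l1^2 + l2^2 - 2*l1*l2*cos(knee)."""
    cos_knee = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    # clamp against floating-point drift before acos
    return math.acos(max(-1.0, min(1.0, cos_knee)))

# fully extended leg: d = l1 + l2  ->  knee angle = 180 degrees (pi rad)
theta = knee_angle(0.12, 0.12, 0.24)
# right-angle check: d = sqrt(l1^2 + l2^2)  ->  knee angle = 90 degrees
theta90 = knee_angle(0.12, 0.12, 0.12 * math.sqrt(2))
```

Sweeping the hip-to-ankle distance d along the linear hip-height and step-distance trajectories mentioned above yields the knee-angle profile for each phase of the gait.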

  5. Development and Validation of a Spike Detection and Classification Algorithm Aimed at Implementation on Hardware Devices

    Directory of Open Access Journals (Sweden)

    E. Biffi

    2010-01-01

Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-period recording and spike-sorting methods are needed. Therefore, on-line and real-time analysis, optimization of memory use and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified by (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-hierarchical classifier, evaluated on simulated and real signals. Finally, we proposed a spike-detection hardware design on FPGA, whose feasibility was verified in terms of the number of CLBs, memory occupation and temporal requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems.
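A minimal sketch of amplitude-threshold spike detection of the kind described (in the paper the threshold is derived statistically from the recording noise; here both the threshold and the dead time are illustrative values):

```python
def detect_spikes(signal, threshold, dead_time=30):
    """Amplitude-threshold spike detection: report the sample indices
    where |signal| first crosses `threshold`, then ignore further
    crossings for `dead_time` samples so one spike yields one event."""
    events = []
    i = 0
    while i < len(signal):
        if abs(signal[i]) >= threshold:
            events.append(i)
            i += dead_time   # refractory window: skip the rest of this spike
        else:
            i += 1
    return events

# toy trace: flat baseline with two spikes at samples 5 and 50
trace = [0.1] * 100
trace[5] = -4.0
trace[50] = 3.5
spikes = detect_spikes(trace, threshold=3.0)
```

Because the per-sample work is a single comparison, the detector is O(n) in the number of samples, which is what makes a CLB-light FPGA implementation feasible; the PCA-hierarchical waveform classification stage is not shown.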

  6. Recent developments in structure-preserving algorithms for oscillatory differential equations

    CERN Document Server

    Wu, Xinyuan

    2018-01-01

    The main theme of this book is recent progress in structure-preserving algorithms for solving initial value problems of oscillatory differential equations arising in a variety of research areas, such as astronomy, theoretical physics, electronics, quantum mechanics and engineering. It systematically describes the latest advances in the development of structure-preserving integrators for oscillatory differential equations, such as structure-preserving exponential integrators, functionally fitted energy-preserving integrators, exponential Fourier collocation methods, trigonometric collocation methods, and symmetric and arbitrarily high-order time-stepping methods. Most of the material presented here is drawn from the recent literature. Theoretical analysis of the newly developed schemes shows their advantages in the context of structure preservation. All the new methods introduced in this book are proven to be highly effective compared with the well-known codes in the scientific literature. This book also addre...

  7. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper a false-alarm-aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in such a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.
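One common single-scale formulation of AAGD compares the mean gray level of an inner, target-sized cell with that of its surrounding ring; the multi-scale detector then takes the maximum response over several cell sizes. A sketch under these assumptions (the paper's exact window geometry and normalization may differ):

```python
def aagd_map(img, k):
    """Single-scale average absolute gray difference (one common
    formulation): contrast between the mean of a (2k+1)x(2k+1) inner
    cell and the mean of the surrounding border ring. Small bright
    targets produce large values; flat background produces ~0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    r = 2 * k + 1                      # half-size of the outer window
    for y in range(r, h - r):
        for x in range(r, w - r):
            inner, ring = [], []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    v = img[y + dy][x + dx]
                    if abs(dy) <= k and abs(dx) <= k:
                        inner.append(v)   # target-sized central cell
                    else:
                        ring.append(v)    # surrounding background ring
            m_in = sum(inner) / len(inner)
            m_out = sum(ring) / len(ring)
            out[y][x] = max(0.0, m_in - m_out)   # keep bright-target response
    return out

# 15x15 dark frame with a single bright point target at the center
img = [[0.0] * 15 for _ in range(15)]
img[7][7] = 90.0
out = aagd_map(img, 1)
```

A multi-scale detector would evaluate `aagd_map` for k = 1..4 and take the pixel-wise maximum before fusing with the LoPSF response.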

  8. Development of Quantum Devices and Algorithms for Radiation Detection and Radiation Signal Processing

    International Nuclear Information System (INIS)

    El Tokhy, M.E.S.M.E.S.

    2012-01-01

The main functions of a spectroscopy system are signal detection, filtering and amplification, pile-up detection and recovery, dead-time correction, amplitude analysis and energy spectrum analysis. Safeguards isotopic measurements require the best spectrometer systems, with excellent resolution, stability, efficiency and throughput. However, the resolution and throughput, which depend mainly on the detector, the amplifier and the analog-to-digital converter (ADC), can still be improved. These modules have been in continuous development and improvement. For this reason we are interested in both the development of quantum detectors and efficient algorithms for digital processing of the measurements. Therefore, the main objective of this thesis is concentrated on: (1) studying the behavior of quantum dot (QD) devices under gamma radiation, and (2) developing efficient algorithms for handling problems of gamma-ray spectroscopy. For gamma radiation detection, a detailed study of nanotechnology QD sources and infrared photodetectors (QDIPs) for gamma radiation detection is introduced. There are two different types of quantum scintillator detectors which dominate the area of ionizing radiation measurements: QD scintillator detectors and QDIP scintillator detectors. By comparison with traditional systems, quantum systems have less mass, require less volume, and consume less power. These factors are increasing the need for efficient detectors for gamma-ray applications such as gamma-ray spectroscopy. Consequently, nanocomposite materials based on semiconductor quantum dots were demonstrated in the literature to have potential for radiation detection via scintillation. Therefore, this thesis presents a theoretical analysis of the characteristics of QD sources and infrared photodetectors (QDIPs). A model of QD sources under incident gamma radiation is developed. A novel methodology is introduced to characterize the effect of gamma radiation on QD devices. The rate …

  9. Description of ALARMA: the alarm algorithm developed for the Nuclear Car Wash

    International Nuclear Information System (INIS)

    Luu, T; Biltoft, P; Church, J; Descalle, M; Hall, J; Manatt, D; Mauger, J; Norman, E; Petersen, D; Pruet, J; Prussin, S; Slaughter, D

    2006-01-01

The goal of any alarm algorithm should be to provide the necessary tools to derive confidence limits on whether fissile materials are present in cargo containers. It should be able to extract these limits from (usually) noisy and/or weak data while maintaining a false alarm rate (FAR) that is economically suitable for port operations. It should also be able to perform its analysis within a reasonably short amount of time (i.e. ∼ seconds). To achieve this, it is essential that the algorithm be able to identify and subtract any interference signature that might otherwise be confused with a fissile signature. Lastly, the algorithm itself should be user-intuitive and user-friendly so that port operators with little or no experience with detection algorithms may use it with relative ease. In support of the Nuclear Car Wash project at Lawrence Livermore Laboratory, we have developed an alarm algorithm that satisfies the above requirements. The description of this alarm algorithm, dubbed ALARMA, is the purpose of this technical report. The experimental setup of the Nuclear Car Wash has been well documented [1, 2, 3]. The presence of fissile materials is inferred by examining the β-delayed gamma spectrum induced after a brief neutron irradiation of cargo, particularly in the high-energy region above approximately 2.5 MeV. In this region naturally occurring gamma rays are virtually non-existent. Thermal-neutron induced fission of 235U and 239Pu, on the other hand, leaves a unique β-delayed spectrum [4]. This spectrum comes from decays of fission products having half-lives as long as 30 seconds, many of which have high Q-values. Since high-energy photons penetrate matter more freely, it is natural to look for unique fissile signatures in this energy region after neutron irradiation. The goal of this interrogation procedure is a 95% success rate of detection of as little as 5 kilograms of fissile material while retaining at most a 0.1% false alarm rate.

  10. Algorithm Development for Multi-Energy SXR based Electron Temperature Profile Reconstruction

    Science.gov (United States)

    Clayton, D. J.; Tritz, K.; Finkenthal, M.; Kumar, D.; Stutman, D.

    2012-10-01

    New techniques utilizing computational tools such as neural networks and genetic algorithms are being developed to infer plasma electron temperature profiles on fast time scales (> 10 kHz) from multi-energy soft-x-ray (ME-SXR) diagnostics. Traditionally, a two-foil SXR technique, using the ratio of filtered continuum emission measured by two SXR detectors, has been employed on fusion devices as an indirect method of measuring electron temperature. However, these measurements can be susceptible to large errors due to uncertainties in time-evolving impurity density profiles, leading to unreliable temperature measurements. To correct this problem, measurements using ME-SXR diagnostics, which use three or more filtered SXR arrays to distinguish line and continuum emission from various impurities, in conjunction with constraints from spectroscopic diagnostics, can be used to account for unknown or time evolving impurity profiles [K. Tritz et al, Bull. Am. Phys. Soc. Vol. 56, No. 12 (2011), PP9.00067]. On NSTX, ME-SXR diagnostics can be used for fast (10-100 kHz) temperature profile measurements, using a Thomson scattering diagnostic (60 Hz) for periodic normalization. The use of more advanced algorithms, such as neural network processing, can decouple the reconstruction of the temperature profile from spectral modeling.

  11. Development of Predictive QSAR Models of 4-Thiazolidinones Antitrypanosomal Activity using Modern Machine Learning Algorithms.

    Science.gov (United States)

    Kryshchyshyn, Anna; Devinyak, Oleg; Kaminskyy, Danylo; Grellier, Philippe; Lesyk, Roman

    2017-11-14

This paper presents novel QSAR models for the prediction of antitrypanosomal activity among thiazolidines and related heterocycles. The performance of four machine learning algorithms (Random Forest regression, stochastic gradient boosting, multivariate adaptive regression splines, and Gaussian processes regression) has been studied in order to reach better levels of predictivity. The results for Random Forest and Gaussian processes regression are comparable and outperform the other studied methods. Preliminary descriptor selection with the Boruta method improved the outcome of the machine learning methods. The two novel QSAR models developed with the Random Forest and Gaussian processes regression algorithms have good predictive ability, as proved by external evaluation on the test set, with corresponding Q2ext = 0.812 and Q2ext = 0.830. The obtained models can be used further for in silico screening of virtual libraries in the same chemical domain in order to find new antitrypanosomal agents. Thorough analysis of descriptor influence in the QSAR models and interpretation of their chemical meaning make it possible to highlight a number of structure-activity relationships. The presence of phenyl rings with electron-withdrawing atoms or groups in the para-position, an increased number of aromatic rings, high branching but short chains, high HOMO energy, and the introduction of a 1-substituted 2-indolyl fragment into the molecular structure have been recognized as prerequisites for trypanocidal activity. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Development of a 3D muon disappearance algorithm for muon scattering tomography

    Science.gov (United States)

    Blackwell, T. B.; Kudryavtsev, V. A.

    2015-05-01

    Upon passing through a material, muons lose energy, scatter off nuclei and atomic electrons, and can stop in the material. Muons will more readily lose energy in higher density materials. Therefore multiple muon disappearances within a localized volume may signal the presence of high-density materials. We have developed a new technique that improves the sensitivity of standard muon scattering tomography. This technique exploits these muon disappearances to perform non-destructive assay of an inspected volume. Muons that disappear have their track evaluated using a 3D line extrapolation algorithm, which is in turn used to construct a 3D tomographic image of the inspected volume. Results of Monte Carlo simulations that measure muon disappearance in different types of target materials are presented. The ability to differentiate between different density materials using the 3D line extrapolation algorithm is established. Finally the capability of this new muon disappearance technique to enhance muon scattering tomography techniques in detecting shielded HEU in cargo containers has been demonstrated.
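The 3D line extrapolation step can be sketched as a least-squares straight-line fit to the tracker hits of a disappearing muon, projected onward into the inspected volume. The detector geometry and the disappearance criterion (a muon seen entering but not exiting) are not modeled; the hit coordinates below are illustrative:

```python
def fit_line_3d(points):
    """Least-squares straight line through 3D hits (x, y, z),
    parameterised by depth z:  x = x0 + tx*z,  y = y0 + ty*z.
    Returns (x0, tx, y0, ty)."""
    n = len(points)
    sz = sum(p[2] for p in points)
    szz = sum(p[2] ** 2 for p in points)
    det = n * szz - sz * sz

    def slope_intercept(coord):
        # ordinary least squares of coordinate `coord` against z
        s = sum(p[coord] for p in points)
        sc = sum(p[coord] * p[2] for p in points)
        slope = (n * sc - sz * s) / det
        intercept = (s * szz - sz * sc) / det
        return intercept, slope

    x0, tx = slope_intercept(0)
    y0, ty = slope_intercept(1)
    return x0, tx, y0, ty

def extrapolate(params, z):
    """Project the fitted track to depth z inside the inspected volume."""
    x0, tx, y0, ty = params
    return x0 + tx * z, y0 + ty * z

# four tracker hits lying exactly on the line x = 1 + 2z, y = -0.5z
hits = [(1.0, 0.0, 0.0), (3.0, -0.5, 1.0), (5.0, -1.0, 2.0), (7.0, -1.5, 3.0)]
p = fit_line_3d(hits)
x, y = extrapolate(p, 10.0)
```

Accumulating the extrapolated endpoints of many disappearing muons in a voxel grid then yields the tomographic density image described above.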

  13. Development of pattern recognition algorithms for particles detection from atmospheric images

    International Nuclear Information System (INIS)

    Khatchadourian, S.

    2010-01-01

    The HESS experiment consists of a system of telescopes designed to observe cosmic rays. Since the project has achieved a high level of performance, a second phase has been initiated. This implies the addition of a new telescope which is more sensitive than its predecessors and capable of collecting a huge amount of images. In this context, not all data collected by the telescope can be retained because of storage limitations. Therefore, a new real-time trigger system must be designed in order to select interesting events on the fly. The purpose of this thesis was to propose a trigger solution to efficiently discriminate the events (images) captured by the telescope. The first part of this thesis was to develop pattern recognition algorithms to be implemented within the trigger; a processing chain based on neural networks and Zernike moments has been validated. The second part of the thesis focused on the implementation of the proposed algorithms on an FPGA target, taking into account the application constraints in terms of resources and execution time. (author)

  14. A heuristic algorithm to approximate dynamic program of a novel new product development process

    Directory of Open Access Journals (Sweden)

    Hamed Fazlollahtabar

    2016-01-01

    Full Text Available We are concerned with a new product development (NPD) network in a digital environment in which the aim is to find integrated attributes for value-added purposes. Different views exist on new product development. Here, the effective factors are categorized into customers, competitors and the company's own past experience. Also, various attributes are considered for the development of a product. Thus, using digital data on attributes, the optimal set of attributes is chosen for use in the new product development. Regarding the multi-stage decision-making process involving the customer, the competitors and the company's own past experience, we develop a multi-dimensional dynamic program as a useful tool for multi-stage decision making. To counteract the dynamism of the digital data in different time periods, two concepts, state and policy direction, are introduced to determine the cost of moving through the stages of the proposed NPD digital network. Since the space requirements and value function computations become impractical for even moderate problem sizes, we approximate the optimal value function by developing a heuristic algorithm.
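
    The exact dynamic program is not reproduced in the abstract, but a multi-stage value function of the kind described is computed by backward induction over a staged network. A hedged, minimal sketch (the stage structure and state names below are assumptions for illustration):

```python
def backward_induction(stages, terminal_value):
    """Backward induction over a staged decision network.
    stages[k] maps each state to a list of (transition_cost, next_state);
    returns V with V[k][state] = minimal cost-to-go from stage k."""
    V = [dict() for _ in range(len(stages))] + [dict(terminal_value)]
    for k in range(len(stages) - 1, -1, -1):
        for state, actions in stages[k].items():
            V[k][state] = min(cost + V[k + 1][nxt] for cost, nxt in actions)
    return V

# Toy two-stage NPD choice: pick attribute bundles minimizing total cost
stages = [
    {"start": [(2, "A"), (5, "B")]},          # stage 0: choose bundle A or B
    {"A": [(4, "end")], "B": [(1, "end")]},   # stage 1: finalize the product
]
V = backward_induction(stages, {"end": 0})
print(V[0]["start"])  # -> 6
```

    The heuristic in the paper approximates exactly this value function when the state space is too large to enumerate.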

  15. Previous ISD Program Review.

    Science.gov (United States)

    1981-03-01

    report. The detail required for such a review would be unwieldy and would consume inordinate amounts of time. The result of the document review will...attempts have been made at writing specific behavioral objectives (SBOs). These, however, have proven to be inadequate in that they are not stated in... behavioral terms (e.g., "will understand," "will have a knowledge of," etc.). C. Development of CRO/CRTs? In nearly all cases, ISD teams are just

  16. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematic model underlying the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding of the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions of the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  17. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    International Nuclear Information System (INIS)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun

    2013-01-01

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematic model underlying the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding of the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions of the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  18. Development of independent MU/treatment time verification algorithm for non-IMRT treatment planning: A clinical experience

    Science.gov (United States)

    Tatli, Hamza; Yucel, Derya; Yilmaz, Sercan; Fayda, Merdan

    2018-02-01

    The aim of this study is to develop an algorithm for independent MU/treatment time (TT) verification for non-IMRT treatment plans, as part of a QA program to ensure treatment delivery accuracy. Two radiotherapy delivery units and their treatment planning systems (TPSs) were commissioned in the Liv Hospital Radiation Medicine Center, Tbilisi, Georgia. Beam data were collected according to the vendors' collection guidelines and AAPM report recommendations, and processed in Microsoft Excel during in-house algorithm development. The algorithm is designed and optimized for calculating SSD and SAD treatment plans based on the AAPM TG-114 dose calculation recommendations, and is coded and embedded in an MS Excel spreadsheet as a preliminary verification algorithm (VA). Treatment verification plans were created by the TPSs based on IAEA TRS-430 recommendations, also calculated by the VA, and compared with point measurements collected in a solid water phantom. The study showed that the in-house VA can be used for MU/TT verification of non-IMRT plans.
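
    The paper's spreadsheet formulas are not reproduced here, but TG-114-style independent checks reduce to a point-dose MU calculation: prescribed dose divided by the product of the relevant dosimetric factors. A hedged, generic sketch (the factor names and the beam data values are assumptions, not the paper's commissioned data):

```python
def monitor_units(dose_cgy, dose_rate_ref=1.0, output_factor=1.0,
                  tmr=1.0, wedge_factor=1.0, inverse_square=1.0):
    """Simplified independent MU check for an SAD-type point calculation:
    MU = D / (D'ref * Sc,p * TMR * WF * ISF), all factors at the calc point."""
    return dose_cgy / (dose_rate_ref * output_factor * tmr
                       * wedge_factor * inverse_square)

# 200 cGy per fraction with illustrative (assumed) beam data
mu = monitor_units(200.0, dose_rate_ref=1.0, output_factor=0.98, tmr=0.85)
print(round(mu, 1))  # -> 240.1
```

    An independent check like this is then compared against the TPS MU within an agreed tolerance.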

  19. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    Science.gov (United States)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key quality parameter (Mooney viscosity), which is used to evaluate the property of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for the industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing; however, they often do not function well because of the multiphase and nonlinear properties of the process. The purpose of this paper is to develop an efficient soft sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, the phase information is extracted during local modeling. Using the Gaussian process local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.

  20. Development of Fire Detection Algorithm at Its Early Stage Using Fire Colour and Shape Information

    Science.gov (United States)

    Suleiman Abdullahi, Zainab; Hamisu Dalhatu, Shehu; Hassan Abdullahi, Zakariyya

    2018-04-01

    Fire can be defined as a state in which substances combine chemically with oxygen from the air and give out heat, smoke and flame. Most conventional fire detection techniques, such as smoke, flame and heat detectors, suffer from transport delay and also give high false alarm rates. The algorithm begins by loading the selected video clip from the database developed, to identify the presence or absence of fire in a frame. In this approach, background subtraction is employed. If the result of the subtraction is less than the set threshold, the difference is ignored and the next frame is taken. However, if the difference is equal to or greater than the set threshold, the frame is subjected to colour and shape tests, using a combined RGB colour model and shape signature. The proposed technique was very effective in detecting fire compared with techniques using only motion or colour cues.
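
    The two-step logic described (frame differencing first, then a colour test on surviving pixels) can be sketched as follows. The thresholds and the RGB rule (R dominant, R > G > B) are illustrative assumptions, not the paper's tuned values:

```python
def fire_pixels(prev, curr, diff_thresh=40, r_min=180):
    """Frame-difference candidate pixels, then a simple RGB fire-colour rule.
    prev/curr are rows of (R, G, B) tuples; thresholds are illustrative."""
    hits = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, ((rp, gp, bp), (r, g, b)) in enumerate(zip(row_p, row_c)):
            moved = abs(r - rp) + abs(g - gp) + abs(b - bp) >= diff_thresh
            fire_colour = r >= r_min and r > g > b
            if moved and fire_colour:
                hits.append((x, y))
    return hits

prev = [[(10, 10, 10), (10, 10, 10)]]
curr = [[(10, 10, 10), (220, 120, 40)]]   # one pixel turns flame-coloured
print(fire_pixels(prev, curr))  # -> [(1, 0)]
```

    A shape-signature test on the connected region of hits would follow as the final stage.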

  1. Game-based programming towards developing algorithmic thinking skills in primary education

    Directory of Open Access Journals (Sweden)

    Hariklia Tsalapatas

    2012-06-01

    Full Text Available This paper presents cMinds, a learning intervention that deploys game-based visual programming towards building analytical, computational, and critical thinking skills in primary education. The proposed learning method exploits the structured nature of programming, which is inherently logical and transcends cultural barriers, towards inclusive learning that exposes learners to algorithmic thinking. A visual programming environment, entitled ‘cMinds Learning Suite’, has been developed, aimed at classroom use. Feedback from the deployment of the learning methods and tools in classrooms in several European countries demonstrates elevated learner motivation for engaging in logical learning activities, fostering of creativity and an entrepreneurial spirit, and promotion of problem-solving capacity.

  2. Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm

    Science.gov (United States)

    Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki

    2009-10-01

    Recently, lifestyle-related diseases have become an object of public concern, while at the same time people are becoming more health conscious. We assume that insufficient circulation of knowledge on dietary habits is an essential factor behind lifestyle-related diseases. This paper focuses on everyday meals close to our life and proposes a well-balanced menu planning system as a preventive measure against lifestyle-related diseases. The system is developed with a Web-based frontend and provides multi-user services and menu information sharing capabilities like social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (scripting language for dynamic Web pages). For the menu planning, a genetic algorithm is applied by formulating the problem as multidimensional 0-1 integer programming.
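
    As a sketch of the 0-1 formulation and the genetic algorithm applied to it, the toy below selects dishes (bits) to maximize a nutrition score under a cost budget. The scores, costs, population size and operators are illustrative assumptions, not the paper's encoding:

```python
import random

def ga_menu(values, costs, budget, pop_size=30, gens=60, seed=1):
    """Toy GA for 0-1 menu selection: maximize value subject to cost <= budget.
    Infeasible menus get fitness 0; elitism, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(bits):
        cost = sum(c for c, b in zip(costs, bits) if b)
        return sum(v for v, b in zip(values, bits) if b) if cost <= budget else 0

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]                       # elitism: the best menu survives
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:            # bit-flip mutation
                i = rng.randrange(n)
                child[i] = 1 - child[i]
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best, fitness(best)

values = [7, 2, 5, 4, 6]   # illustrative nutrition scores per dish
costs = [5, 1, 4, 3, 4]    # illustrative costs
menu, score = ga_menu(values, costs, budget=8)
print(score, menu)
```

    A production version would encode nutrient balance constraints in the fitness function rather than a single budget.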

  3. Development of parallel GPU based algorithms for problems in nuclear area

    International Nuclear Information System (INIS)

    Almeida, Adino Americo Heimlich

    2009-01-01

    Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computing, their application was extended to other fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in two typical problems of the nuclear area: neutron transport simulation using the Monte Carlo method, and the solution of the heat equation in a two-dimensional domain by the finite-difference method. To achieve this, we developed parallel algorithms for GPU and CPU for the two problems described above. The comparison showed that the GPU-based approach is faster than the CPU-based one on a computer with two quad-core processors, without loss of precision. (author)
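
    The second benchmark, the 2D heat equation by finite differences, is worth making concrete because its stencil structure is exactly what maps well to a GPU. A minimal serial sketch (grid size, alpha and boundary values are illustrative assumptions):

```python
def heat_step(u, alpha=0.1):
    """One explicit finite-difference (FTCS) step of the 2D heat equation on a
    grid with fixed boundaries; alpha = k*dt/dx^2 (stable for alpha <= 0.25)."""
    ny, nx = len(u), len(u[0])
    new = [row[:] for row in u]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            new[j][i] = u[j][i] + alpha * (u[j][i - 1] + u[j][i + 1]
                                           + u[j - 1][i] + u[j + 1][i]
                                           - 4 * u[j][i])
    return new

# 5x5 plate, hot centre, cold boundary
u = [[0.0] * 5 for _ in range(5)]
u[2][2] = 100.0
for _ in range(10):
    u = heat_step(u)
print(round(u[2][2], 2), round(u[1][2], 2))
```

    On a GPU, each interior grid point's update is independent within a step, so the double loop becomes one thread per point.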

  4. Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images

    Science.gov (United States)

    Diner, D.; Paradise, S.; Martonchik, J.

    1994-01-01

    In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.

  5. OpenMM 7: Rapid development of high performance algorithms for molecular dynamics.

    Directory of Open Access Journals (Sweden)

    Peter Eastman

    2017-07-01

    Full Text Available OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community.

  6. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    International Nuclear Information System (INIS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-01-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which the major principal components (PCs) for surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real part and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for the idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield the reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency in the

  7. Current algorithms used in reactor safety codes and the impact of future computer development on these algorithms

    International Nuclear Information System (INIS)

    Mahaffy, J.H.; Liles, D.R.; Woodruff, S.B.

    1985-01-01

    Computational methods and solution procedures used in the US Nuclear Regulatory Commission's reactor safety systems codes, Transient Reactor Analysis Code (TRAC) and Reactor Leak and Power Safety Excursion Code (RELAP), are reviewed. Methods used in TRAC-PF1/MOD1, including the stability-enhancing two-step (SETS) technique, which permits fast computations by allowing time steps larger than the material Courant stability limit, are described in detail, and the differences from RELAP5/MOD2 are noted. Developments in computing, including parallel and vector processing, and their applicability to nuclear reactor safety codes are described. These developments, coupled with appropriate numerical methods, make detailed faster-than-real-time reactor safety analysis a realistic near-term possibility

  8. Cryptosystem Program Planning for Securing Data/Information of the Results of Research and Development using Triple DES Algorithm

    International Nuclear Information System (INIS)

    Tumpal P; Naga, Dali S.; David

    2004-01-01

    This software is a cryptosystem that uses the triple DES algorithm in ECB (Electronic Code Book) mode. The cryptosystem can send a file with any extension, whether encrypted or not; encrypt data representing a bitmap picture file or text; and display the calculations performed. Triple DES is an efficient and effective development of DES: it uses the same algorithm, but the operation is repeated three times, so the key grows from 56 bits to 168 bits. (author)
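
    Triple DES applies the DES block operation three times in an Encrypt-Decrypt-Encrypt (EDE) composition with three keys. The toy below shows only that composition, substituting XOR for DES so it stays self-contained; because XOR is its own inverse, "encrypt" and "decrypt" coincide here, which is not true of real DES, and this sketch offers no actual security:

```python
def xor_cipher(block, key):
    """Toy stand-in for a DES block operation (XOR is its own inverse)."""
    return bytes(b ^ k for b, k in zip(block, key))

def triple_encrypt(block, k1, k2, k3):
    # Encrypt-Decrypt-Encrypt (EDE), the composition Triple DES uses; with
    # three independent 56-bit DES keys the key material totals 168 bits.
    return xor_cipher(xor_cipher(xor_cipher(block, k1), k2), k3)

def triple_decrypt(block, k1, k2, k3):
    # Inverse composition: undo the layers in reverse order.
    return xor_cipher(xor_cipher(xor_cipher(block, k3), k2), k1)

msg = b"8bytemsg"                      # ECB processes each 8-byte block alone
k1, k2, k3 = b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"
ct = triple_encrypt(msg, k1, k2, k3)
print(triple_decrypt(ct, k1, k2, k3) == msg)  # -> True
```

    In ECB mode each 8-byte block is processed independently, which is simple but leaks patterns across identical blocks.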

  9. Rapid Mental Computation System as a Tool for Algorithmic Thinking of Elementary School Students Development

    Directory of Open Access Journals (Sweden)

    Rushan Ziatdinov

    2012-07-01

    Full Text Available In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are actually simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system allows forming the basis for the study of computer science in secondary school.
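
    One classic operation of such rapid mental computation systems (chosen here as an illustration; the paper's specific operations are not listed in the abstract) is squaring a number ending in 5: for m = 10n + 5, m² is n(n+1) followed by 25.

```python
def square_ending_in_5(m):
    """Mental shortcut: for m = 10n + 5, m^2 = n*(n+1) written before 25."""
    assert m % 10 == 5
    n = m // 10
    return int(str(n * (n + 1)) + "25")

print(square_ending_in_5(35), square_ending_in_5(85))  # -> 1225 7225
```

    The shortcut works because (10n + 5)² = 100n(n + 1) + 25, exactly the kind of simple algorithm the authors argue builds algorithmic thinking.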

  10. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    Science.gov (United States)

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? SUMMARY ANSWER The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. WHAT IS KNOWN ALREADY Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. STUDY DESIGN, SIZE, DURATION Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the

  11. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    Science.gov (United States)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  12. Development of CD3 cell quantitation algorithms for renal allograft biopsy rejection assessment utilizing open source image analysis software.

    Science.gov (United States)

    Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad

    2018-02-01

    Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (the Aperio nuclear algorithm for CD3+ cells/mm2 and the Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+ %) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm2). Based on visual inspection of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to ...). CD3 quantitation with the open source algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.
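
    The custom thresholding/positive-pixel-counting approach reduces to counting stained pixels above an intensity threshold and normalizing by tissue area. A hedged sketch in the spirit of the ImageJ method (threshold, microns-per-pixel value and the toy image are illustrative assumptions):

```python
def positive_pixel_stats(stain_intensity, threshold=0.3, mpp=0.25):
    """Threshold-based positive-pixel quantitation on a 2D intensity image.
    Returns (percent positive pixels, positive pixels per mm^2);
    mpp is the assumed microns-per-pixel of the scanned slide."""
    flat = [v for row in stain_intensity for v in row]
    positive = sum(1 for v in flat if v >= threshold)
    percent = 100.0 * positive / len(flat)
    area_mm2 = len(flat) * (mpp / 1000.0) ** 2   # per-pixel area in mm^2
    return percent, positive / area_mm2

img = [[0.1, 0.5], [0.8, 0.2]]   # toy CD3 stain-intensity image
pct, density = positive_pixel_stats(img)
print(round(pct, 1))  # -> 50.0
```

    The "maxima" variant would instead count one detection per local intensity peak, approximating cells/mm2 rather than stained area.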

  13. Development of an Experimental Platform for Testing Autonomous UAV Guidance and Control Algorithms

    National Research Council Canada - National Science Library

    Rufa, Justin R

    2007-01-01

    .... The next step to transitioning these autonomous search algorithms to an operational environment is the experimental testing of these theories through the use of surrogate vehicles, to determine...

  14. Prediction system of hydroponic plant growth and development using algorithm Fuzzy Mamdani method

    Science.gov (United States)

    Sudana, I. Made; Purnawirawan, Okta; Arief, Ulfa Mediaty

    2017-03-01

    Hydroponics is a method of farming without soil. One hydroponic plant is watercress (Nasturtium officinale). The development and growth of hydroponic watercress are influenced by nutrient levels, acidity and temperature. These independent variables can be used as system input variables to predict the level of plant growth and development. The prediction system uses the Fuzzy Mamdani algorithm method. The system was built to implement a fuzzy inference system (FIS) as part of the Fuzzy Logic Toolbox (FLT) in MATLAB R2007b. An FIS is a computing system that works on the principle of fuzzy reasoning, which is similar to human reasoning. Basically, an FIS consists of four units: a fuzzification unit, a fuzzy logic reasoning unit, a knowledge base unit and a defuzzification unit. The effect of the independent variables on plant growth and development can be visualized with the three-dimensional FIS output surface diagram, and statistical tests on the data from the prediction system use the multiple linear regression method, including multiple linear regression analysis, the T test, the F test, the coefficient of determination and predictor contributions, calculated using the SPSS (Statistical Product and Service Solutions) software.
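
    A Mamdani FIS of the kind described chains the four units above: fuzzify the crisp input, fire rules with min implication, aggregate with max, then defuzzify by centroid. A minimal two-rule sketch (the membership functions, rule base and universes are illustrative assumptions, not the paper's):

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani_growth(nutrient):
    """Two-rule Mamdani sketch on a 0-100 universe: LOW nutrient -> POOR
    growth, HIGH nutrient -> GOOD growth. Min implication, max aggregation,
    centroid defuzzification by sampling the output universe."""
    w_poor = tri(nutrient, -1, 0, 60)      # rule 1 strength: nutrient is LOW
    w_good = tri(nutrient, 40, 100, 101)   # rule 2 strength: nutrient is HIGH
    num = den = 0.0
    for g in range(101):                   # sample the growth universe
        mu = max(min(w_poor, tri(g, -1, 0, 60)),
                 min(w_good, tri(g, 40, 100, 101)))
        num += g * mu
        den += mu
    return num / den if den else 50.0

print(round(mamdani_growth(80.0), 1), round(mamdani_growth(20.0), 1))
```

    The full system would take nutrient level, acidity and temperature as three inputs with a larger rule base, but the inference mechanics are the same.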

  15. Development and evaluation of a micro-macro algorithm for the simulation of polymer flow

    International Nuclear Information System (INIS)

    Feigl, Kathleen; Tanner, Franz X.

    2006-01-01

    A micro-macro algorithm for the calculation of polymer flow is developed and numerically evaluated. The system being solved consists of the momentum and mass conservation equations from continuum mechanics coupled with a microscopic-based rheological model for polymer stress. Standard finite element techniques are used to solve the conservation equations for velocity and pressure, while stochastic simulation techniques are used to compute polymer stress from the simulated polymer dynamics in the rheological model. The rheological model considered combines aspects of reptation, network and continuum models. Two types of spatial approximation are considered for the configuration fields defining the dynamics in the model: piecewise constant and piecewise linear. The micro-macro algorithm is evaluated by simulating the abrupt planar die entry flow of a polyisobutylene solution described in the literature. The computed velocity and stress fields are found to be essentially independent of mesh size and ensemble size, while there is some dependence of the results on the order of spatial approximation to the configuration fields close to the die entry. Comparison with experimental data shows that the piecewise linear approximation leads to better predictions of the centerline first normal stress difference. Finally, the computational time associated with the piecewise constant spatial approximation is found to be about 2.5 times lower than that associated with the piecewise linear approximation. This is the result of the more efficient time integration scheme that is possible with the former type of approximation due to the pointwise incompressibility guaranteed by the choice of velocity-pressure finite element

  16. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi.

    Science.gov (United States)

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-07-01

    Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
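
    The inter-rater reliability figures quoted (0.90 for nurses, 0.97 for paediatricians-in-training) are agreement statistics against the neonatologist's reference decision. Cohen's kappa is the standard such statistic for two binary raters; a minimal sketch (the toy ratings below are invented, not study data):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two binary raters (labels 0/1)."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    pa = sum(rater_a) / n
    pb = sum(rater_b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)                        # chance agreement
    return (po - pe) / (1 - pe)

# 1 = "needs CPAP": a nurse's calls vs the reference on ten toy assessments
nurse = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]
ref   = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(round(cohens_kappa(nurse, ref), 2))  # -> 0.74
```

    Kappa discounts the agreement two raters would reach by chance alone, which raw percent agreement does not.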

  17. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records.

    Science.gov (United States)

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-08-21

    To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008-31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgements of expert clinicians within the 1200 record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1
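
    The validation metrics reported above (sensitivity, specificity, positive predictive value, F-measure) all derive from the confusion matrix of algorithm labels against the clinicians' gold standard. A minimal sketch with invented labels (1 = respiratory consultation):

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV and F-measure from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    f1 = 2 * ppv * sens / (ppv + sens)
    return sens, spec, ppv, f1

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # gold standard (toy)
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]   # algorithm output (toy)
print([round(m, 2) for m in classification_metrics(y_true, y_pred)])
# -> [0.75, 0.83, 0.75, 0.75]
```

    In the study these metrics were computed on the 1200-record gold standard validation set rather than toy labels.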

  18. Development of a Robotic Colonoscopic Manipulation System, Using Haptic Feedback Algorithm.

    Science.gov (United States)

    Woo, Jaehong; Choi, Jae Hyuk; Seo, Jong Tae; Kim, Tae Il; Yi, Byung Ju

    2017-01-01

    Colonoscopy is one of the most effective diagnostic and therapeutic tools for colorectal diseases. We aim to propose a master-slave robotic colonoscopy system that is controllable from a remote site using a conventional colonoscope. The master and slave robots were developed to work with a conventional flexible colonoscope. The robotic colonoscopic procedure was performed on a colonoscope training model by one expert endoscopist and two inexperienced engineers. To provide haptic sensation, the insertion force and the rotating torque were measured and sent to the master robot. The slave robot was developed to hold the colonoscope and its knob, and to perform the insertion, rotation, and two tilting motions of the colonoscope. The master robot was designed to teach motions to the slave robot. The measured force and torque were scaled down to one tenth to provide the operator with reflection force and torque at the haptic device. The haptic sensation and feedback system was successful and helped the operator feel the constrained force or torque in the colon. The insertion time using the robotic system decreased with repeated procedures. This work proposed a robotic approach to colonoscopy using a haptic feedback algorithm; this robotic device could effectively perform colonoscopy with reduced burden and comparable safety for patients from a remote site.
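
    The force/torque reflection described above reduces to scaling the slave-side sensor readings before commanding the master haptic device. The sketch below illustrates this; the one-tenth factor is from the abstract, but the clamp limits are assumptions, not values from the paper.

    ```python
    # Slave-side insertion force and rotating torque are scaled to one
    # tenth and clamped before being reflected at the master device.
    # Saturation limits are hypothetical.

    REFLECTION_SCALE = 0.1   # one-tenth scaling reported in the study
    MAX_FORCE_N = 5.0        # assumed haptic-device force limit
    MAX_TORQUE_NM = 0.5      # assumed haptic-device torque limit

    def reflect(insertion_force_n, rotation_torque_nm):
        f = max(-MAX_FORCE_N, min(MAX_FORCE_N, REFLECTION_SCALE * insertion_force_n))
        t = max(-MAX_TORQUE_NM, min(MAX_TORQUE_NM, REFLECTION_SCALE * rotation_torque_nm))
        return f, t

    print(reflect(12.0, 2.0))
    ```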

  19. Evaluating the administration costs of biologic drugs: development of a cost algorithm.

    Science.gov (United States)

    Tetteh, Ebenezer K; Morris, Stephen

    2014-12-01

    Biologic drugs, as with all other medical technologies, are subject to a number of regulatory, marketing, reimbursement (financing) and other demand-restricting hurdles applied by healthcare payers. One example is the routine use of cost-effectiveness analyses or health technology assessments to determine which medical technologies offer value-for-money. The manner in which these assessments are conducted suggests that, holding all else equal, the economic value of biologic drugs may be determined by how much is spent on administering these drugs or trade-offs between drug acquisition and administration costs. Yet, on the supply-side, it seems very little attention is given to how manufacturing and formulation choices affect healthcare delivery costs. This paper evaluates variations in the administration costs of biologic drugs, taking care to ensure consistent inclusion of all relevant cost resources. From this, it develops a regression-based algorithm with which manufacturers could possibly predict, during process development, how their manufacturing and formulation choices may impact on the healthcare delivery costs of their products.
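
    The regression-based cost algorithm is described only in outline above. As a hedged illustration of the general approach, the sketch below fits an ordinary-least-squares model of per-dose administration cost on a few formulation attributes; the covariates, values, and units are invented for illustration and are not the paper's.

    ```python
    import numpy as np

    # Hypothetical data: per-dose delivery cost as a function of
    # formulation attributes a manufacturer controls during process
    # development. Columns: intercept, infusion time (h), doses per
    # year, cold-chain flag.
    X = np.array([
        [1, 2.0, 12, 1],
        [1, 0.5, 26, 0],
        [1, 1.0,  6, 1],
        [1, 3.0,  4, 1],
        [1, 0.0, 52, 0],   # e.g. subcutaneous self-injection: no infusion
    ], dtype=float)
    y = np.array([220.0, 80.0, 150.0, 260.0, 30.0])  # observed cost per dose

    # Ordinary least squares fit, then in-sample predictions.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    predicted = X @ beta
    print(np.round(beta, 2))
    ```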

  20. Development of a Robotic Colonoscopic Manipulation System, Using Haptic Feedback Algorithm

    Science.gov (United States)

    Woo, Jaehong; Choi, Jae Hyuk; Seo, Jong Tae

    2017-01-01

    Purpose Colonoscopy is one of the most effective diagnostic and therapeutic tools for colorectal diseases. We aim to propose a master-slave robotic colonoscopy that is controllable in remote site using conventional colonoscopy. Materials and Methods The master and slave robot were developed to use conventional flexible colonoscopy. The robotic colonoscopic procedure was performed using a colonoscope training model by one expert endoscopist and two unexperienced engineers. To provide the haptic sensation, the insertion force and the rotating torque were measured and sent to the master robot. Results A slave robot was developed to hold the colonoscopy and its knob, and perform insertion, rotation, and two tilting motions of colonoscope. A master robot was designed to teach motions of the slave robot. These measured force and torque were scaled down by one tenth to provide the operator with some reflection force and torque at the haptic device. The haptic sensation and feedback system was successful and helpful to feel the constrained force or torque in colon. The insertion time using robotic system decreased with repeated procedures. Conclusion This work proposed a robotic approach for colonoscopy using haptic feedback algorithm, and this robotic device would effectively perform colonoscopy with reduced burden and comparable safety for patients in remote site. PMID:27873506

  1. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Martin-Haugh, Stewart

    2014-01-01

    A description of the algorithms and the performance of the ATLAS Inner Detector trigger for LHC Run 1 are presented, as well as prospects for a redesign of the tracking algorithms in Run 2. The Inner Detector trigger algorithms are vital for many trigger signatures at ATLAS. The performance of the algorithms for electrons is presented. The ATLAS trigger software will be restructured from two software levels into a single stage which poses a big challenge for the trigger algorithms in terms of execution time and maintaining the physics performance. Expected future improvements in the timing and efficiencies of the Inner Detector triggers are discussed, utilising the planned merging of the current two stages of the ATLAS trigger.

  2. Development and performance analysis of a lossless data reduction algorithm for voip

    International Nuclear Information System (INIS)

    Misbahuddin, S.; Boulejfen, N.

    2014-01-01

    VoIP (Voice Over IP) is becoming an alternative way of carrying voice communications over the Internet. To better utilize voice call bandwidth, standard compression algorithms are applied in VoIP systems. However, these algorithms degrade voice quality at high compression ratios. This paper presents a lossless data reduction technique to improve the VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in digitized VFs (Voice Frames) generated by VoIP systems. The performance of the proposed data reduction algorithm is presented in terms of compression ratio. The proposed algorithm helps retain voice quality along with improving VoIP data transfer rates. (author)
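
    The abstract does not specify the redundancy-removal scheme, so the following is only a generic illustration of lossless inter-frame data reduction: XOR successive voice frames (assumed equal length) and run-length encode the runs of zero bytes that redundancy produces.

    ```python
    # Lossless inter-frame reduction sketch: the XOR of two similar
    # frames is mostly zeros, which compress well with run-length
    # encoding. A zero byte acts as a marker followed by the run length,
    # so the stream is unambiguously decodable.

    def delta_rle_encode(prev, cur):
        diff = bytes(a ^ b for a, b in zip(prev, cur))
        out, i = [], 0
        while i < len(diff):
            if diff[i] == 0:
                run = 0
                while i < len(diff) and diff[i] == 0 and run < 255:
                    run += 1
                    i += 1
                out += [0, run]        # marker byte + run length
            else:
                out.append(diff[i])    # literal non-zero delta byte
                i += 1
        return bytes(out)

    prev = bytes([10, 20, 30, 40, 50, 60, 70, 80])
    cur  = bytes([10, 20, 31, 40, 50, 60, 70, 80])
    enc = delta_rle_encode(prev, cur)
    print(len(enc), round(len(prev) / len(enc), 2))
    ```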

  3. Modelling and development of estimation and control algorithms: application to a bio process; Modelisation et elaboration d'algorithmes d'estimation et de commande: application a un bioprocede

    Energy Technology Data Exchange (ETDEWEB)

    Maher, M

    1995-02-03

    Modelling, estimation and control of an alcoholic fermentation process is the purpose of this thesis. A simple mathematical model of a fermentation process is established by using experimental results obtained on the plant. This nonlinear model is used for numerical simulation, analysis and synthesis of estimation and control algorithms. The problem of state and parameter nonlinear estimation of bio-processes is studied. Two estimation techniques are developed and proposed to bypass the lack of sensors for certain physical variables. Their performances are studied by numerical simulation. One of these estimators is validated on experimental results of batch and continuous fermentations. An adaptive control law is proposed for the regulation and tracking of the substrate concentration of the plant by acting on the dilution rate. It is a nonlinear control strategy coupled with the previously validated estimator. The performance of this control law is evaluated by a real application to a continuous flow fermentation process. (author) refs.

  4. Utilization of Ancillary Data Sets for Conceptual SMAP Mission Algorithm Development and Product Generation

    Science.gov (United States)

    O'Neill, P.; Podest, E.

    2011-01-01

    The planned Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey, Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond [1]. Scheduled to launch late in 2014, the proposed SMAP mission would provide high resolution and frequent revisit global mapping of soil moisture and freeze/thaw state, utilizing enhanced Radio Frequency Interference (RFI) mitigation approaches to collect new measurements of the hydrological condition of the Earth's surface. The SMAP instrument design incorporates an L-band radar (3 km) and an L-band radiometer (40 km) sharing a single 6-meter rotating mesh antenna to provide measurements of soil moisture and landscape freeze/thaw state [2]. These observations would (1) improve our understanding of linkages between the Earth's water, energy, and carbon cycles, (2) benefit many application areas including numerical weather and climate prediction, flood and drought monitoring, agricultural productivity, human health, and national security, (3) help to address priority questions on climate change, and (4) potentially provide continuity with brightness temperature and soil moisture measurements from ESA's SMOS (Soil Moisture Ocean Salinity) and NASA's Aquarius missions. In the planned SMAP mission prelaunch time frame, baseline algorithms are being developed for generating (1) soil moisture products both from radiometer measurements on a 36 km grid and from combined radar/radiometer measurements on a 9 km grid, and (2) freeze/thaw products from radar measurements on a 3 km grid. These retrieval algorithms need a variety of global ancillary data, both static and dynamic, to run the retrieval models, constrain the retrievals, and provide flags for indicating retrieval quality. The choice of which ancillary data set to use for a particular SMAP product would be based on a number of factors.

  5. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy.

    Science.gov (United States)

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms within 1.8% at the center of the tumor volume and within 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased to up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly above the 5% threshold for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting
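
    As a toy illustration of the dose-accumulation step described above (not the mass- and energy-congruent mapping used in the study), the sketch below maps phase doses through a displacement vector field on a 1-D voxel grid before summation. The grid, doses, and nearest-neighbour mapping are all invented for illustration.

    ```python
    import numpy as np

    # Dose delivered in a deformed phase is mapped back to reference
    # voxels via the DVF before summation. Nearest-neighbour toy model.

    def accumulate(reference_dose, phase_dose, dvf_voxels):
        total = reference_dose.copy()
        for i, dose in enumerate(phase_dose):
            j = int(round(i + dvf_voxels[i]))   # voxel i maps to reference voxel j
            if 0 <= j < len(total):
                total[j] += dose
        return total

    ref   = np.array([1.0, 1.0, 1.0, 1.0])      # dose already in reference phase
    phase = np.array([0.5, 0.5, 0.5, 0.5])      # dose delivered while deformed
    dvf   = np.array([0.0, 1.0, -1.0, 0.0])     # voxels 1 and 2 swap places
    print(accumulate(ref, phase, dvf))
    ```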

  6. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    Directory of Open Access Journals (Sweden)

    Hualiang Zhong

    2016-01-01

    Full Text Available Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms within 1.8% at the center of the tumor volume and within 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased to up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly above the 5% threshold for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung

  7. Development of imaging and reconstructions algorithms on parallel processing architectures for applications in non-destructive testing

    International Nuclear Information System (INIS)

    Pedron, Antoine

    2013-01-01

    This thesis work is situated between the scientific domain of ultrasound non-destructive testing and algorithm-architecture adequation. Ultrasound non-destructive testing includes a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. In order to characterise possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST, within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of data sets, and consequently more and more computing power is needed to maintain interactive reconstructions. General purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. The two algorithms differ in their parallelization scheme. The first one can be properly parallelized on GPP, whereas on GPU an intensive use of atomic instructions is required. Within the second algorithm, parallelism is easier to express, but loop ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary in order to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms in the CIVA software platform is proposed and different issues related to code maintenance and durability are discussed. (author) [fr]

  8. Development of algorithm for continuous generation of a computer game in terms of usability and optimization of developed code in computer science

    Directory of Open Access Journals (Sweden)

    Tibor Skala

    2018-03-01

    Full Text Available As both hardware and software have become increasingly available and are constantly being developed, they contribute globally to improvements in every field of technology and the arts. Digital tools for the creation and processing of graphical content are well developed and designed to shorten the time required for content creation, in this case animation. Since contemporary animation has experienced a surge in visual styles and visualization methods, programming is built into nearly everything currently in use. There is no doubt that a variety of algorithms and software act as the brain and moving force behind any idea created for a specific purpose and applicability in society. Art and technology combined make a direct and oriented medium for publishing and marketing in every industry, including those not closely related to industries that rely heavily on the visual aspect of work. Additionally, the quality and consistency of an algorithm will depend on its proper integration into the system it powers, as well as on the way the algorithm is designed. The development of an endless algorithm and its effective use are demonstrated through the use of a computer game. In order to present the effect of various parameters, in the final phase of the computer game's development the endless algorithm was tested with a varying number of key input parameters (achieved time, score reached, pace of the game).
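
    The abstract names elapsed time, score, and pace as key input parameters of the endless algorithm. The sketch below is purely hypothetical — the function, parameter names, and difficulty formula are invented — but it shows how such inputs could drive continuous segment generation.

    ```python
    import random

    # Hypothetical endless-generation loop: each new level segment is
    # derived from player-dependent inputs (elapsed time, score, pace)
    # plus a little randomness. Formula and names are illustrative only.

    def next_segment(elapsed_s, score, pace, rng):
        difficulty = min(1.0, 0.2 + elapsed_s / 300 + score / 5000) * pace
        gap = max(1, int(6 - 4 * difficulty + rng.randint(0, 2)))  # narrower gaps later
        obstacles = 1 + int(3 * difficulty)                        # denser obstacles later
        return {"gap": gap, "obstacles": obstacles}

    rng = random.Random(7)
    segments = [next_segment(t * 10, t * 120, 1.0, rng) for t in range(5)]
    print(segments)
    ```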

  9. Development and image quality assessment of a contrast-enhancement algorithm for display of digital chest radiographs

    International Nuclear Information System (INIS)

    Rehm, K.

    1992-01-01

    This dissertation presents a contrast-enhancement algorithm, Artifact-Suppressed Adaptive Histogram Equalization (ASAHE). This algorithm was developed as part of a larger effort to replace the film radiographs currently used in radiology departments with digital images. Among the expected benefits of digital radiology are improved image management and greater diagnostic accuracy. Film radiographs record X-ray transmission data at high spatial resolution and with a wide dynamic range of signal. Current digital radiography systems record an image at reduced spatial resolution and with coarse sampling of the available dynamic range. These reductions have a negative impact on diagnostic accuracy. The contrast-enhancement algorithm presented in this dissertation is designed to boost the diagnostic accuracy of radiologists using digital images. The ASAHE algorithm is an extension of an earlier technique called Adaptive Histogram Equalization (AHE). The AHE algorithm is unsuitable for chest radiographs because it over-enhances noise and introduces boundary artifacts. The modifications incorporated in ASAHE suppress these artifacts and allow processing of chest radiographs. This dissertation describes the psychophysical methods used to evaluate the effects of processing algorithms on human observer performance. An experiment conducted with anthropomorphic phantoms and simulated nodules showed the ASAHE algorithm to be superior for human detection of nodules when compared to a computed radiography system's algorithm in current use. An experiment conducted using clinical images demonstrating pneumothoraces (partial lung collapse) indicated no difference in human observer accuracy when ASAHE images were compared to computed radiography images, but greater ease of diagnosis when ASAHE images were used. These results provide evidence to suggest that Artifact-Suppressed Adaptive Histogram Equalization can be effective in increasing diagnostic accuracy and efficiency.
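
    ASAHE itself is not specified in detail here, but the central idea it shares with later clip-limited methods — bounding contrast gain so histogram equalization does not over-enhance noise — can be sketched with a global clip-limited equalization. The clip limit and synthetic test image below are arbitrary choices, and this is a related technique, not ASAHE.

    ```python
    import numpy as np

    # Clip-limited histogram equalization: clipping the histogram before
    # building the cumulative mapping bounds the slope of the intensity
    # transfer curve, which is what suppresses noise over-enhancement.

    def clipped_hist_equalize(img, clip_limit=0.02):
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        limit = max(1, int(clip_limit * img.size))
        excess = np.maximum(hist - limit, 0).sum()
        hist = np.minimum(hist, limit) + excess // 256   # redistribute clipped counts
        cdf = np.cumsum(hist).astype(float)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
        return (cdf[img] * 255).astype(np.uint8)

    rng = np.random.default_rng(0)
    img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)  # low-contrast image
    out = clipped_hist_equalize(img)
    print(img.min(), img.max(), "->", out.min(), out.max())
    ```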

  10. Development of tight-binding based GW algorithm and its computational implementation for graphene

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, Muhammad Aziz [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore); Naradipa, Muhammad Avicenna, E-mail: muhammad.avicenna11@ui.ac.id; Phan, Wileam Yonatan; Syahroni, Ahmad [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); Rusydi, Andrivo [NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore)

    2016-04-19

    Graphene has been a hot subject of research in the last decade as it holds promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as the optical properties may change upon several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because they eventually play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We perform this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is to be constructed through a tight-binding or similar model. This study includes the theoretical formulation of the Green’s function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self-energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.
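
    The non-interacting starting point of such a TB-based scheme — the nearest-neighbour tight-binding bands of graphene and their density of states — can be sketched numerically. The hopping t = 2.7 eV and the rectangular k-grid are common illustrative choices, not values from the paper, and the G, W, and self-energy stages are beyond a short example.

    ```python
    import numpy as np

    # Nearest-neighbour TB dispersion of graphene (C-C distance set to 1):
    # E(k) = ±t * sqrt(3 + 2cos(√3 ky) + 4cos(√3 ky/2)cos(3 kx/2)).
    # Sampling a rectangular k-region is adequate for a qualitative DOS.

    t = 2.7  # eV, commonly quoted hopping energy (assumption)
    n = 200
    kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, n), np.linspace(-np.pi, np.pi, n))
    arg = 3 + 2 * np.cos(np.sqrt(3) * ky) + 4 * np.cos(np.sqrt(3) * ky / 2) * np.cos(3 * kx / 2)
    E = t * np.sqrt(np.maximum(arg, 0.0))      # clamp tiny negative rounding at Dirac points
    energies = np.concatenate([E.ravel(), -E.ravel()])  # ± bands
    dos, edges = np.histogram(energies, bins=120, density=True)
    print(round(float(energies.max()), 2))     # bandwidth edge near 3t
    ```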

  11. Development of tight-binding based GW algorithm and its computational implementation for graphene

    International Nuclear Information System (INIS)

    Majidi, Muhammad Aziz; Naradipa, Muhammad Avicenna; Phan, Wileam Yonatan; Syahroni, Ahmad; Rusydi, Andrivo

    2016-01-01

    Graphene has been a hot subject of research in the last decade as it holds promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as the optical properties may change upon several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because they eventually play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We perform this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is to be constructed through a tight-binding or similar model. This study includes the theoretical formulation of the Green’s function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self-energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.

  12. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    Science.gov (United States)

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Summary Background Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  13. Development of Variational Guiding Center Algorithms for Parallel Calculations in Experimental Magnetic Equilibria

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, C. Leland [PPPL; Finn, J. M. [LANL; Qin, H. [PPPL; Tang, William M. [PPPL

    2014-10-01

    Structure-preserving algorithms obtained via discrete variational principles exhibit strong promise for the calculation of guiding center test particle trajectories. The non-canonical Hamiltonian structure of the guiding center equations forms a novel and challenging context for geometric integration. To demonstrate the practical relevance of these methods, a prototypical variational midpoint algorithm is applied to an experimental magnetic equilibrium. The stability characteristics, conservation properties, and implementation requirements associated with the variational algorithms are addressed. Furthermore, computational run time is reduced for large numbers of particles by parallelizing the calculation on GPU hardware.
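
    The guiding-center equations are non-canonical and need the variational machinery of the paper, but the long-time behaviour that motivates structure-preserving integration can be illustrated on a canonical toy system: the implicit midpoint rule (a symplectic one-step method) keeps the pendulum's energy error bounded over many steps. Step size, iteration count, and initial condition are arbitrary illustrative choices.

    ```python
    import numpy as np

    # Implicit midpoint step z_{n+1} = z_n + h f((z_n + z_{n+1})/2) for the
    # pendulum H = p^2/2 - cos(q), solved by fixed-point iteration on the
    # midpoint state (qm, pm).

    def midpoint_step(q, p, h):
        qm, pm = q, p
        for _ in range(50):                 # fixed-point iteration converges fast for small h
            qm = q + 0.5 * h * pm           # dq/dt = p
            pm = p - 0.5 * h * np.sin(qm)   # dp/dt = -sin(q)
        return 2 * qm - q, 2 * pm - p

    def energy(q, p):
        return 0.5 * p * p - np.cos(q)

    q, p = 1.0, 0.0
    e0 = energy(q, p)
    for _ in range(10000):                  # long integration: 10k steps of h = 0.05
        q, p = midpoint_step(q, p, 0.05)
    drift = abs(energy(q, p) - e0)
    print(round(float(drift), 6))           # bounded, O(h^2) energy error
    ```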

  14. Development of an algorithm for heartbeats detection and classification in Holter records based on temporal and morphological features

    International Nuclear Information System (INIS)

    García, A; Romano, H; Laciar, E; Correa, R

    2011-01-01

    In this work, a detection and classification algorithm for heartbeat analysis in Holter records was developed. First, a QRS complex detector was implemented and the temporal and morphological characteristics of each beat were extracted. A vector was built with these features; this vector is the input of the classification module, based on discriminant analysis. The beats were classified into three groups: Premature Ventricular Contraction beat (PVC), Atrial Premature Contraction beat (APC) and Normal Beat (NB). These beat categories represent the most important groups in commercial Holter systems. The developed algorithms were evaluated on 76 ECG records from two validated open-access databases, the 'MIT-BIH Arrhythmia Database' and the 'MIT-BIH Supraventricular Arrhythmia Database'. A total of 166343 beats were detected and analyzed; the QRS detection algorithm provides a sensitivity of 99.69% and a positive predictive value of 99.84%. The classification stage gives sensitivities of 97.17% for NB, 97.67% for PVC and 92.78% for APC.
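
    The paper's QRS detector is not specified here, but the common slope-based idea behind such detectors can be sketched: a squared derivative emphasises the steep QRS edges, and peaks above an adaptive threshold are kept subject to a refractory period. The synthetic ECG, sampling rate, and thresholds below are all invented for illustration.

    ```python
    import numpy as np

    # Toy QRS detector: squared first difference + adaptive threshold +
    # 200 ms refractory lockout, run on a synthetic ECG with one narrow
    # "R spike" per second over slow baseline wander.

    fs = 250                                    # Hz (assumed sampling rate)
    tt = np.arange(0, 10, 1 / fs)
    ecg = 0.1 * np.sin(2 * np.pi * 1 * tt)      # baseline wander
    beat_samples = np.arange(125, len(tt), 250) # one beat per second
    for b in beat_samples:
        ecg[b] += 1.0                           # R spike

    feature = np.diff(ecg) ** 2                 # squared slope
    threshold = 0.5 * feature.max()
    refractory = int(0.2 * fs)                  # 200 ms lockout
    detections, last = [], -refractory
    for i, v in enumerate(feature):
        if v > threshold and i - last > refractory:
            detections.append(i)
            last = i
    print(len(beat_samples), len(detections))
    ```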

  15. Cognitive Development Optimization Algorithm Based Support Vector Machines for Determining Diabetes

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2016-03-01

    Full Text Available The definition, diagnosis and classification of Diabetes Mellitus and its complications are very important. The World Health Organization (WHO) and other societies, as well as individual scientists, have conducted many studies on this subject. One of the most important research interests within it is computer-supported decision systems for diagnosing diabetes. In such systems, Artificial Intelligence techniques are often used for several disease diagnostics to streamline the diagnostic process in daily routine and avoid misdiagnosis. In this study, a diabetes diagnosis system formed from both Support Vector Machines (SVM) and the Cognitive Development Optimization Algorithm (CoDOA) is proposed. During the training of the SVM, CoDOA was used for determining the sigma parameter of the Gaussian (RBF) kernel function, and eventually a classification process was carried out on the Pima Indians diabetes data set. The proposed approach offers an alternative solution in the field of Artificial Intelligence-based diabetes diagnosis, and contributes to the related literature on diagnosis processes.
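
    The role the sigma parameter plays for an RBF kernel can be illustrated without CoDOA or a full SVM: below, a plain grid search (standing in for CoDOA) picks the sigma that maximises validation accuracy of a simple Parzen-window kernel classifier on synthetic two-class data. Everything here — data, classifier, and search — is a hedged stand-in, not the paper's method or the Pima Indians data.

    ```python
    import numpy as np

    # Synthetic two-class problem and an RBF (Gaussian) kernel whose
    # width sigma is chosen by validation accuracy.

    rng = np.random.default_rng(1)
    x0 = rng.normal(0.0, 1.0, size=(60, 2))   # class 0
    x1 = rng.normal(3.0, 1.0, size=(60, 2))   # class 1
    X = np.vstack([x0, x1])
    y = np.array([0] * 60 + [1] * 60)
    perm = rng.permutation(120)
    train, val = perm[:80], perm[80:]

    def rbf(a, b, sigma):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def accuracy(sigma):
        # Parzen-window rule: assign the class with higher mean kernel value.
        K = rbf(X[val], X[train], sigma)
        score1 = K[:, y[train] == 1].mean(1)
        score0 = K[:, y[train] == 0].mean(1)
        return float(((score1 > score0) == y[val]).mean())

    sigmas = [0.01, 0.1, 0.5, 1.0, 2.0, 5.0]
    best_sigma = max(sigmas, key=accuracy)     # grid search in place of CoDOA
    print(best_sigma, round(accuracy(best_sigma), 2))
    ```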

  16. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    Science.gov (United States)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in an autopilot's on-board trajectory planning. Spline interpolation allows for the connection of Three-Dimensional Euclidean Space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
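
    The curve primitive behind such spline trajectories can be evaluated with De Casteljau's algorithm: given a set of control points, repeated linear interpolation yields the curve point at parameter u in [0, 1]. The cubic control points below are illustrative 3-D waypoints, not values from the paper.

    ```python
    # De Casteljau evaluation of a Bezier curve: repeatedly interpolate
    # between adjacent control points until one point remains.

    def de_casteljau(points, u):
        pts = [tuple(p) for p in points]
        while len(pts) > 1:
            pts = [tuple((1 - u) * a + u * b for a, b in zip(p, q))
                   for p, q in zip(pts, pts[1:])]
        return pts[0]

    # Illustrative cubic control points (x, y, altitude).
    ctrl = [(0.0, 0.0, 10.0), (1.0, 2.0, 10.0), (3.0, 2.0, 12.0), (4.0, 0.0, 12.0)]
    print(de_casteljau(ctrl, 0.0))   # curve starts at the first control point
    print(de_casteljau(ctrl, 0.5))   # interior point pulled toward the middle controls
    print(de_casteljau(ctrl, 1.0))   # curve ends at the last control point
    ```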

  17. Development of algorithms for detecting citrus canker based on hyperspectral reflectance imaging.

    Science.gov (United States)

    Li, Jiangbo; Rao, Xiuqin; Ying, Yibin

    2012-01-15

    Automated discrimination of fruits with canker from fruits with normal surfaces or other types of peel defects has become a helpful task for enhancing the competitiveness and profitability of the citrus industry. Over the last several years, hyperspectral imaging technology has received increasing attention in the agricultural products inspection field. This paper studied the feasibility of classifying citrus canker against other peel conditions, including normal surface and nine peel defects, by hyperspectral imaging. A combination algorithm based on principal component analysis and the two-band ratio (Q(687/630)) method was proposed. Since fewer wavelengths are desired in order to develop a rapid multispectral imaging system, the canker classification performance of the two-band ratio (Q(687/630)) method alone was also evaluated. The proposed combination approach achieved overall classification accuracies of 99.5% for training set samples and 84.5% for test set samples, versus 98.2% and 82.9% for the two-band ratio method alone. The proposed combination approach was thus more efficient for classifying canker against various peel conditions under reflectance hyperspectral imagery. However, the two-band ratio (Q(687/630)) method alone also proved effective in discriminating citrus canker from normal fruit and other peel diseases, except for copper burn and anthracnose. Copyright © 2011 Society of Chemical Industry.
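    The two-band ratio idea reduces to dividing per-pixel reflectance at 687 nm by that at 630 nm and thresholding the result. The reflectance values and decision threshold below are invented for the sketch, not the values fitted in the study.

```python
import numpy as np

# Hypothetical per-pixel reflectances at the two bands of interest.
r687 = np.array([0.42, 0.55, 0.31, 0.60])
r630 = np.array([0.40, 0.35, 0.30, 0.33])

Q = r687 / r630                 # the Q(687/630) feature
canker_mask = Q > 1.4           # assumed decision threshold, for illustration
```

    In the proposed combination, the same ratio feature would be applied after a principal component analysis step rather than on raw reflectance alone.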

  18. Development of tomographic reconstruction algorithms for the PIXE analysis of biological samples

    International Nuclear Information System (INIS)

    Nguyen, D.T.

    2008-05-01

    The development of 3-dimensional microscopy techniques offering a spatial resolution of 1 μm or less has opened a large field of investigation in Cell Biology. Amongst them, an interesting advantage of ion beam micro-tomography is its ability to give quantitative results in terms of local concentrations in a direct way, using Particle Induced X-ray Emission Tomography (PIXET) combined with Scanning Transmission Ion Microscopy Tomography (STIMT). After a brief introduction to existing reconstruction techniques, we present the principle of the DISRA code, the most complete code written so far, which is the basis of the present work. We have modified and extended the DISRA algorithm to account for the specific aspects of biological specimens. Moreover, correction procedures were added to the code to reduce noise in the tomograms. For portability, a Windows graphic interface was designed to easily enter and modify the experimental parameters used in the reconstruction, and to control the successive steps of data reduction. Results of STIMT and PIXET experiments on reference specimens and on human cancer cells are also presented. (author)

  19. Development of Ray Tracing Algorithms for Scanning Plane and Transverse Plane Analysis for Satellite Multibeam Application

    Directory of Open Access Journals (Sweden)

    N. H. Abd Rahman

    2014-01-01

    Full Text Available Reflector antennas have been widely used in many areas. In the implementation of parabolic reflector antennas for broadcasting satellite applications, it is essential for the spacecraft antenna to provide a precise contoured beam to effectively serve the required region. For this purpose, combinations of more than one beam are required. Therefore, a tool utilizing the ray tracing method is developed to calculate precise off-axis beams for a multibeam antenna system. In the multibeam system, each beam is fed from a different feed position to allow the main beam to be radiated in the exact direction on the coverage area. Thus, a detailed study of the caustics of a parabolic reflector antenna is performed and presented in this paper, to investigate the behaviour of the rays and their relation to various antenna parameters. In order to produce accurate data for the analysis, the caustic behaviours are investigated in two distinct planes: the scanning plane and the transverse plane. This paper presents detailed discussions on the derivation of the ray tracing algorithms, the establishment of the equations of the caustic loci, and the verification of the method through calculation of the radiation pattern.
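    Ray tracing on a parabolic reflector builds on its basic reflection property. The minimal sketch below (axis-parallel incoming rays only; the geometry and focal length are invented) checks numerically that reflected rays pass through the focus, the degenerate on-axis case of the caustics studied in the paper.

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d about the (unnormalized) surface normal n."""
    n = n / np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n

def trace_to_focus(x0, y0, f):
    """Trace an axis-parallel ray onto the paraboloid z = (x^2 + y^2)/(4f)
    and return where the reflected ray crosses the focal plane z = f."""
    p = np.array([x0, y0, (x0**2 + y0**2) / (4 * f)])   # hit point on surface
    n = np.array([-x0 / (2 * f), -y0 / (2 * f), 1.0])   # surface normal
    r = reflect(np.array([0.0, 0.0, -1.0]), n)          # reflected direction
    t = (f - p[2]) / r[2]
    return p + t * r

print(trace_to_focus(1.0, 0.5, f=2.0))   # crosses the focal plane at (0, 0, f)
```

    For off-axis (scanned) beams the incoming rays are tilted, the reflected rays no longer meet in a single point, and their envelope forms the caustic loci analyzed in the scanning and transverse planes.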

  20. Development and evaluation of a new contoured cushion system with an optimized normalization algorithm.

    Science.gov (United States)

    Li, Sujiao; Zhang, Zhengxiang; Wang, Jue

    2014-01-01

    Prevention of pressure sores remains a significant problem confronting spinal cord injury patients and the elderly with limited mobility. One vital aspect of this subject concerns the development of cushions to reduce pressure ulcers in seated patients, particularly those bound to wheelchairs. Here, we present a novel cushion system that employs the interface pressure distribution between the cushion and the buttocks to design custom contoured foam cushions. An optimized normalization algorithm was proposed, with which the interface pressure distribution was transformed into the carving depth of the foam cushion according to the biomechanical characteristics of the foam. The shape and pressure-relief performance of the custom contoured foam cushions were investigated. The outcomes showed that the contoured shape of the personalized cushion matched the buttock contour very well. Moreover, the custom contoured cushion alleviated pressure under the buttocks and significantly increased subjective comfort and stability. Furthermore, the fabrication method not only decreased the unit production cost but also simplified the manufacturing procedure. All in all, this prototype seat cushion offers an effective and economical way to prevent pressure ulcers.
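    A minimal sketch of the normalization idea, under the assumption (ours, not the paper's exact algorithm) that measured interface pressures are rescaled to [0, 1] and mapped linearly to carving depth, so high-pressure regions are carved deeper:

```python
import numpy as np

# Hypothetical interface pressure map (mmHg) and a foam-dependent depth limit.
pressure = np.array([[30.0, 60.0, 90.0],
                     [45.0, 120.0, 75.0]])
max_depth = 40.0  # mm, illustrative carving limit

# Min-max normalization, then a linear map to carving depth per cell.
norm = (pressure - pressure.min()) / (pressure.max() - pressure.min())
depth = max_depth * norm
```

    The paper's optimized version additionally folds in the foam's biomechanical (load-deflection) characteristics, so the pressure-to-depth map need not be linear.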

  1. Probabilistic analysis algorithm for UA slope software program.

    Science.gov (United States)

    2013-12-01

    A reliability-based computational algorithm for using a single row and equally spaced drilled shafts to : stabilize an unstable slope has been developed in this research. The Monte-Carlo simulation (MCS) : technique was used in the previously develop...

  2. Performance and development for the Inner Detector Trigger Algorithms at ATLAS

    CERN Document Server

    Penc, Ondrej; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for Run 2 starting in spring 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  3. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Penc, Ondrej

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. (paper)

  4. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  5. Development of an operationally efficient PTC braking enforcement algorithm for freight trains.

    Science.gov (United States)

    2013-08-01

    Software algorithms used in positive train control (PTC) systems designed to predict freight train stopping distance and enforce a penalty brake application have been shown to be overly conservative, which can lead to operational inefficiencies by in...

  6. Recent Development of Multigrid Algorithms for Mixed and Nonconforming Methods for Second Order Elliptic Problems

    Science.gov (United States)

    Chen, Zhangxin; Ewing, Richard E.

    1996-01-01

    Multigrid algorithms for nonconforming and mixed finite element methods for second order elliptic problems on triangular and rectangular finite elements are considered. The construction of several coarse-to-fine intergrid transfer operators for nonconforming multigrid algorithms is discussed. The equivalence between the nonconforming and mixed finite element methods with and without projection of the coefficient of the differential problems into finite element spaces is described.
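    For the coarse-to-fine intergrid transfer operators discussed above, a minimal 1D analogue (linear-interpolation prolongation and full-weighting restriction on interior nodes, with homogeneous Dirichlet boundaries assumed) can be sketched as:

```python
import numpy as np

def prolongate(vc):
    """Coarse-to-fine transfer by linear interpolation (1D interior nodes).

    Coarse node i maps to fine node 2i+1; in-between fine nodes are averages
    of their coarse neighbours, with zero values assumed beyond the boundary.
    """
    nf = 2 * len(vc) + 1
    vf = np.zeros(nf)
    vf[1::2] = vc                                # coincident coarse nodes
    vf[2:-1:2] = 0.5 * (vc[:-1] + vc[1:])        # interpolated fine nodes
    vf[0], vf[-1] = 0.5 * vc[0], 0.5 * vc[-1]    # boundary neighbours
    return vf

def restrict(vf):
    """Fine-to-coarse transfer by full weighting (1/4, 1/2, 1/4 stencil)."""
    return 0.25 * vf[:-2:2] + 0.5 * vf[1:-1:2] + 0.25 * vf[2::2]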

  7. The Development of Several Electromagnetic Monitoring Strategies and Algorithms for Validating Pre-Earthquake Electromagnetic Signals

    Science.gov (United States)

    Bleier, T. E.; Dunson, J. C.; Roth, S.; Mueller, S.; Lindholm, C.; Heraud, J. A.

    2012-12-01

    QuakeFinder, a private research group in California, reports on the development of a 100+ station network consisting of 3-axis induction magnetometers and air conductivity sensors to collect and characterize pre-seismic electromagnetic (EM) signals. These signals are combined with daily infrared (IR) signals collected from the GOES weather satellite IR instrument to compare and correlate with the ground EM signals, both from actual earthquakes and from boulder stressing experiments. This presentation describes the efforts QuakeFinder has undertaken to automatically detect these pulse patterns using their historical data as a reference, and to develop other discriminative algorithms that can be used with the air conductivity sensors and the IR instruments on the GOES satellites. The overall results of the QuakeFinder experiment are presented. In 2007, QuakeFinder discovered the occurrence of strong uni-polar pulses in their magnetometer coil data that increased in tempo dramatically prior to the M5.1 earthquake at Alum Rock, California. Suggestions that these pulses might have been lightning or power-line arcing did not fit the data actually recorded, as reported in Bleier [2009]. A second earthquake then occurred near the same site on January 7, 2010, as reported in Dunson [2011], and the pulse counts again increased before the earthquake, similarly to the 2007 event. There were fewer pulses and their magnitude was smaller, both consistent with the fact that the earthquake was smaller (M4.0 vs M5.4) and farther away (7 km vs 2 km). Similar effects were observed at the QuakeFinder Tacna, Peru site before the May 5th, 2010 M6.2 earthquake and a cluster of several M4-5 earthquakes.

  8. Development of Turbulent Diffusion Transfer Algorithms to Estimate Lake Tahoe Water Budget

    Science.gov (United States)

    Sahoo, G. B.; Schladow, S. G.; Reuter, J. E.

    2012-12-01

    Evaporative loss is a dominant component of the Lake Tahoe hydrologic budget because the watershed area (813 km2) is very small compared to the lake surface area (501 km2). The 5.5 m high dam built at the lake's only outlet, the Truckee River at Tahoe City, can increase the lake's capacity by approximately 0.9185 km3. The lake provides flood protection for downstream areas and is a source of water supply for downstream cities, irrigation, hydropower, and instream environmental requirements. When the lake water level falls below the natural rim, cessation of flows from the lake causes problems for water supply, irrigation, and fishing. Therefore, it is important to develop algorithms that correctly estimate the lake hydrologic budget. We developed a turbulent diffusion transfer model and coupled it to the dynamic lake model (DLM-WQ). We generated the stream flows and pollutant loadings of the streams using the US Environmental Protection Agency (USEPA) supported watershed model, Loading Simulation Program in C++ (LSPC). The bulk transfer coefficients were calibrated using the correlation coefficient (R2) as the objective function. Sensitivity analysis was conducted for the meteorological inputs and model parameters. The DLM-WQ estimates of lake water level and water temperature were in agreement with measured records, with R2 equal to 0.96 and 0.99, respectively, for the period 1994 to 2008. The estimated average evaporation from the lake, stream inflow, precipitation over the lake, groundwater fluxes, and outflow from the lake during 1994 to 2008 were found to be 32.0%, 25.0%, 19.0%, 0.3%, and 11.7%, respectively.
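    The budget components quoted above tie together through a simple storage equation: the change in lake storage is inflows minus outflows, and dividing by surface area converts it to a water-level change. The flux values below are invented for the sketch, not the calibrated DLM-WQ estimates.

```python
# Annual water-balance sketch for a lake (illustrative numbers).
surface_area_km2 = 501.0

# Hypothetical annual fluxes in km^3/yr.
stream_inflow = 0.40
precipitation = 0.30
groundwater = 0.005
evaporation = 0.51
outflow = 0.19

# Storage change = sum of inflows - sum of outflows (km^3/yr).
storage_change = (stream_inflow + precipitation + groundwater
                  - evaporation - outflow)

# Equivalent change in lake level (metres), spread over the surface area.
level_change_m = storage_change * 1e9 / (surface_area_km2 * 1e6)
```

    With evaporation dominating the outflows, even modest errors in the bulk transfer coefficients shift the computed level, which is why they are calibrated against measured lake levels.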

  9. TIA: algorithms for development of identity-linked SNP islands for analysis by massively parallel DNA sequencing.

    Science.gov (United States)

    Farris, M Heath; Scott, Andrew R; Texter, Pamela A; Bartlett, Marta; Coleman, Patricia; Masters, David

    2018-04-11

    Single nucleotide polymorphisms (SNPs) located within the human genome have been shown to have utility as markers of identity in the differentiation of DNA from individual contributors. Massively parallel DNA sequencing (MPS) technologies and human genome SNP databases allow for the design of suites of identity-linked target regions, amenable to sequencing in a multiplexed and massively parallel manner. Therefore, tools are needed for leveraging the genotypic information found within SNP databases for the discovery of genomic targets that can be evaluated on MPS platforms. The SNP island target identification algorithm (TIA) was developed as a user-tunable system to leverage SNP information within databases. Using data within the 1000 Genomes Project SNP database, human genome regions were identified that contain globally ubiquitous identity-linked SNPs and that were responsive to targeted resequencing on MPS platforms. Algorithmic filters were used to exclude target regions that did not conform to user-tunable SNP island target characteristics. To validate the accuracy of TIA for discovering these identity-linked SNP islands within the human genome, SNP island target regions were amplified from 70 contributor genomic DNA samples using the polymerase chain reaction. Multiplexed amplicons were sequenced using the Illumina MiSeq platform, and the resulting sequences were analyzed for SNP variations. 166 putative identity-linked SNPs were targeted in the identified genomic regions. Of the 309 SNPs that provided discerning power across individual SNP profiles, 74 previously undefined SNPs were identified during evaluation of targets from individual genomes. Overall, DNA samples of 70 individuals were uniquely identified using a subset of the suite of identity-linked SNP islands. 
    TIA offers a tunable genome search tool for the discovery of targeted genomic regions that are scalable in population frequency and in the number of SNPs contained within the SNP island regions.
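    In the spirit of the tunable filtering TIA performs, the toy sketch below groups informative SNPs into candidate islands by position and applies user-tunable thresholds; the window size, SNP count, and allele-frequency bound are hypothetical parameters, not the published ones.

```python
# Toy SNP-island filter: (position, minor-allele frequency) pairs.
snps = [
    (105, 0.42), (130, 0.38), (160, 0.45),
    (900, 0.02),
    (1500, 0.50), (1530, 0.47),
]

WINDOW = 120      # max island span in bp (assumed)
MIN_SNPS = 2      # SNPs required per island (assumed)
MIN_MAF = 0.30    # keep only globally informative SNPs (assumed)

# Keep informative SNPs, then greedily group them into position windows.
informative = [s for s in snps if s[1] >= MIN_MAF]
islands, current = [], [informative[0]]
for snp in informative[1:]:
    if snp[0] - current[0][0] <= WINDOW:
        current.append(snp)
    else:
        islands.append(current)
        current = [snp]
islands.append(current)

# Drop islands with too few SNPs to be discriminating.
islands = [i for i in islands if len(i) >= MIN_SNPS]
```

    Each surviving island is a compact region rich in identity-linked SNPs, i.e., a candidate target for multiplexed amplification and massively parallel sequencing.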

  10. Development and validation of a novel algorithm based on the ECG magnet response for rapid identification of any unknown pacemaker.

    Science.gov (United States)

    Squara, Fabien; Chik, William W; Benhayon, Daniel; Maeda, Shingo; Latcu, Decebal Gabriel; Lacaze-Gadonneix, Jonathan; Tibi, Thierry; Thomas, Olivier; Cooper, Joshua M; Duthoit, Guillaume

    2014-08-01

    Pacemaker (PM) interrogation requires correct manufacturer identification. However, an unidentified PM is a frequent occurrence, requiring time-consuming steps to identify the device. The purpose of this study was to develop and validate a novel algorithm for PM manufacturer identification, using the ECG response to magnet application. Data on the magnet responses of all recent PM models (≤15 years) from the 5 major manufacturers were collected. An algorithm based on the ECG response to magnet application to identify the PM manufacturer was subsequently developed. Patients undergoing ECG during magnet application in various clinical situations were prospectively recruited in 7 centers. The algorithm was applied in the analysis of every ECG by a cardiologist blinded to PM information. A second blinded cardiologist analyzed a sample of randomly selected ECGs in order to assess the reproducibility of the results. A total of 250 ECGs were analyzed during magnet application. The algorithm led to the correct single manufacturer choice in 242 ECGs (96.8%), whereas 7 (2.8%) could only be narrowed to either 1 of 2 manufacturer possibilities. Only 2 (0.4%) incorrect manufacturer identifications occurred. The algorithm identified Medtronic and Sorin Group PMs with 100% sensitivity and specificity, Biotronik PMs with 100% sensitivity and 99.5% specificity, and St. Jude and Boston Scientific PMs with 92% sensitivity and 100% specificity. The results were reproducible between the 2 blinded cardiologists with 92% concordant findings. Unknown PM manufacturers can be accurately identified by analyzing the ECG magnet response using this newly developed algorithm. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
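    At its core, such an algorithm maps an observed ECG magnet response to the manufacturers consistent with it, sometimes narrowing to two candidates as in the study. The table below is purely illustrative (invented labels and rates); the validated algorithm uses the full response pattern, not this toy lookup.

```python
# Illustrative magnet-response lookup; keys and manufacturer names are
# hypothetical placeholders, not the study's actual decision rules.
MAGNET_RESPONSES = {
    "asynchronous pacing at 85 bpm": ["Manufacturer A"],
    "asynchronous pacing at 100 bpm": ["Manufacturer B", "Manufacturer C"],
    "asynchronous pacing at 96 bpm": ["Manufacturer D"],
}

def candidates(observed):
    """Return manufacturers consistent with the observed magnet response."""
    return MAGNET_RESPONSES.get(
        observed, ["unknown - interrogate with each programmer"])
```

    A response matching a single entry identifies the manufacturer outright; an ambiguous response narrows the field to the listed candidates, mirroring the 2.8% of ECGs in the study that resolved to one of two manufacturers.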

  11. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    Science.gov (United States)

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), whereas root mean square error of prediction (RMSEP) was used as a fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, whereas GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set out of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
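    The GA-based descriptor selection can be sketched end to end on synthetic data: binary masks encode descriptor subsets, and validation RMSE of a linear fit serves as the fitness function (standing in for the PLS/RMSEP pipeline of the study; population size, rates, and data are all assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the peptide data: 60 samples, 12 descriptors,
# only three of which truly drive the response.
X = rng.normal(size=(60, 12))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=60)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]

def rmse(mask):
    """Fitness: validation RMSE of a least-squares fit on the masked subset."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return np.inf
    coef, *_ = np.linalg.lstsq(Xtr[:, cols], ytr, rcond=None)
    return np.sqrt(np.mean((Xva[:, cols] @ coef - yva) ** 2))

pop = rng.integers(0, 2, size=(30, 12))     # population of binary masks
for _ in range(40):
    fit = np.array([rmse(m) for m in pop])
    parents = pop[np.argsort(fit)[:10]]     # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 12)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(12) < 0.05                 # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents] + children)

best = pop[np.argmin([rmse(m) for m in pop])]
```

    The other nature-inspired optimizers compared in the study (PSO, ABC, FA, FPA) plug into the same fitness function; only the way candidate masks are proposed differs.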

  12. Developing the science product algorithm testbed for Chinese next-generation geostationary meteorological satellites: Fengyun-4 series

    Science.gov (United States)

    Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin

    2017-08-01

    Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing the prototypes of the FY-4 science algorithms, two science product algorithm testbeds, for imagers and sounders, have been developed by the scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in the FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully by using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully using a proxy imager, the Himawari-8/Advanced Himawari Imager (AHI), and sounder data obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016, the FY-4 AWG developed, based on the imager testbed, a near real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, these robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.

  13. Development and validation of algorithms to differentiate ductal carcinoma in situ from invasive breast cancer within administrative claims data.

    Science.gov (United States)

    Hirth, Jacqueline M; Hatch, Sandra S; Lin, Yu-Li; Giordano, Sharon H; Silva, H Colleen; Kuo, Yong-Fang

    2018-04-18

    Overtreatment is a common concern for patients with ductal carcinoma in situ (DCIS), but this entity is difficult to distinguish from invasive breast cancers in administrative claims data sets because DCIS often is coded as invasive breast cancer. Therefore, the authors developed and validated algorithms to select DCIS cases from administrative claims data to enable outcomes research in this type of data. This retrospective cohort using invasive breast cancer and DCIS cases included women aged 66 to 70 years in the 2004 through 2011 Texas Cancer Registry (TCR) data linked to Medicare administrative claims data. TCR records were used as "gold" standards to evaluate the sensitivity, specificity, and positive predictive value (PPV) of 2 algorithms. Women with a biopsy enrolled in Medicare parts A and B at 12 months before and 6 months after their first biopsy without a second incident diagnosis of DCIS or invasive breast cancer within 12 months in the TCR were included. Women in 2010 Medicare data were selected to test the algorithms in a general sample. In the TCR data set, a total of 6907 cases met inclusion criteria, with 1244 DCIS cases. The first algorithm had a sensitivity of 79%, a specificity of 89%, and a PPV of 62%. The second algorithm had a sensitivity of 50%, a specificity of 97%, and a PPV of 77%. Among women in the general sample, the specificity was high and the sensitivity was similar for both algorithms. However, the PPV was approximately 6% to 7% lower. DCIS frequently is miscoded as invasive breast cancer, and thus the proposed algorithms are useful for examining DCIS outcomes using data sets not linked to cancer registries. Cancer 2018. © 2018 American Cancer Society.
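    The three validation metrics reported above come directly from a confusion table against the registry "gold" standard. The counts below are invented to reproduce the first algorithm's pattern, not the study's actual tabulation.

```python
# Illustrative confusion-table counts (hypothetical, chosen to mimic the
# first algorithm's reported sensitivity/specificity/PPV pattern).
tp, fn = 790, 210      # DCIS cases found / missed by the algorithm
tn, fp = 5100, 480     # invasive cases correctly excluded / misflagged

sensitivity = tp / (tp + fn)   # fraction of true DCIS detected
specificity = tn / (tn + fp)   # fraction of invasive cases excluded
ppv = tp / (tp + fp)           # fraction of flagged cases that are DCIS
```

    PPV, unlike sensitivity and specificity, depends on how common DCIS is in the sample, which is why it dropped by 6% to 7% in the general Medicare sample even though the other two metrics held.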

  14. The development of a 3D mesoscopic model of metallic foam based on an improved watershed algorithm

    Science.gov (United States)

    Zhang, Jinhua; Zhang, Yadong; Wang, Guikun; Fang, Qin

    2018-06-01

    The watershed algorithm has been used widely in x-ray computed tomography (XCT) image segmentation. It provides a transformation defined on a grayscale image and finds the lines that separate adjacent regions. However, distortion occurs when developing a mesoscopic model of metallic foam based on XCT image data: cells are oversegmented in some cases when the traditional watershed algorithm is used. The improved watershed algorithm presented in this paper avoids oversegmentation and is composed of three steps. Firstly, it finds all of the connected cells and identifies the junctions of the corresponding cell walls. Secondly, image segmentation is conducted to separate the adjacent cells, generating the lost cell walls between them; optimization is then performed on the segmented image. Thirdly, the improved algorithm is validated by comparison with images of the metallic foam, which shows that it avoids the segmentation distortion. A mesoscopic model of metallic foam is thus formed based on the improved algorithm, and the mesoscopic characteristics of the metallic foam, such as cell size, volume, and shape, are identified and analyzed.

  15. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    Energy Technology Data Exchange (ETDEWEB)

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    1985-05-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed descent schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
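    The core planning question is where to start the descent. A back-of-the-envelope version of that calculation (simplified to a constant rate of descent and ground speed; the flight-tested algorithm additionally models the Mach/airspeed transition, gross weight, wind gradient, and temperature) is:

```python
def descent_distance_nm(alt_cruise_ft, alt_fix_ft, gs_kt, rod_fpm):
    """Ground distance (nm) needed to descend at a constant rate of descent."""
    time_min = (alt_cruise_ft - alt_fix_ft) / rod_fpm   # minutes in descent
    return gs_kt * time_min / 60.0                      # nm covered meanwhile

# Example: 35,000 ft down to a 10,000 ft metering fix at 2,500 ft/min
# and 420 kt ground speed (all values illustrative).
d = descent_distance_nm(35_000, 10_000, gs_kt=420, rod_fpm=2_500)
# Begin the descent when the metering fix lies d nautical miles ahead.
```

    In the time mode, the same geometry is solved in reverse: the required crossing time at the fix fixes the ground speed profile, from which the top-of-descent point follows.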

  16. Development and validation of a risk prediction algorithm for the recurrence of suicidal ideation among general population with low mood.

    Science.gov (United States)

    Liu, Y; Sareen, J; Bolton, J M; Wang, J L

    2016-03-15

    Suicidal ideation is one of the strongest predictors of recent and future suicide attempts. This study aimed to develop and validate a risk prediction algorithm for the recurrence of suicidal ideation among a population with low mood. 3035 participants from the U.S. National Epidemiologic Survey on Alcohol and Related Conditions who reported suicidal ideation at their lowest mood at baseline were included. The Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria, was used. Logistic regression modeling was conducted to derive the algorithm. Discrimination and calibration were assessed in the development and validation cohorts. In the development data, the proportion with recurrent suicidal ideation over 3 years was 19.5% (95% CI: 17.7, 21.5). The developed algorithm consisted of 6 predictors: age, feelings of emptiness, sudden mood changes, self-harm history, depressed mood in the past 4 weeks, and interference with social activities in the past 4 weeks because of physical health or emotional problems; emptiness was the most important risk factor. The model had good discriminative power (C statistic = 0.8273, 95% CI: 0.8027, 0.8520). The C statistic was 0.8091 (95% CI: 0.7786, 0.8395) in the external validation dataset and 0.8193 (95% CI: 0.8001, 0.8385) in the combined dataset. This study does not apply to people with suicidal ideation who are not depressed. The developed risk algorithm for predicting the recurrence of suicidal ideation has good discrimination and excellent calibration. Clinicians can use this algorithm to stratify the risk of recurrence in patients and thus improve personalized treatment approaches, offer advice, and arrange further intensive monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.
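    A logistic-regression risk algorithm of this kind turns predictor values into a recurrence probability via the logistic function. The coefficients below are invented for the sketch (and age is omitted for brevity); they are not the published model.

```python
import math

# Hypothetical coefficients on binary predictors; illustrative only.
COEFS = {
    "intercept": -3.2,
    "emptiness": 1.1,            # most important predictor in the study
    "sudden_mood_changes": 0.6,
    "self_harm_history": 0.9,
    "depressed_4wk": 0.7,
    "social_interference": 0.5,
}

def risk(predictors):
    """Recurrence probability from a linear score passed through a logistic."""
    z = COEFS["intercept"] + sum(COEFS[k] * v for k, v in predictors.items())
    return 1.0 / (1.0 + math.exp(-z))

p = risk({"emptiness": 1, "sudden_mood_changes": 0,
          "self_harm_history": 1, "depressed_4wk": 1, "social_interference": 0})
```

    Stratification then follows by binning the probability, e.g. flagging patients above a chosen threshold for closer monitoring.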

  17. Diagnosis and treatment of acute ankle injuries: development of an evidence-based algorithm

    Directory of Open Access Journals (Sweden)

    Hans Polzer

    2012-01-01

    Full Text Available Acute ankle injuries are among the most common injuries in emergency departments. However, a standardized examination and an evidence-based treatment are lacking. Therefore, the aim of this study was to systematically search the current literature, classify the evidence, and develop an algorithm for the diagnosis and treatment of acute ankle injuries. We systematically searched PubMed and the Cochrane Database for randomized controlled trials, meta-analyses, systematic reviews or, if applicable, observational studies and classified them according to their level of evidence. According to the currently available literature, the following recommendations are given. The Ottawa Ankle/Foot Rule should be applied in order to rule out fractures. Physical examination is sufficient for diagnosing injuries to the lateral ligament complex. Classification into stable and unstable injuries is applicable and of clinical importance. The squeeze, crossed leg, and external rotation tests are indicative of injuries of the syndesmosis. Magnetic resonance imaging is recommended to verify such injuries. Stable ankle sprains have a good prognosis, while for unstable ankle sprains conservative treatment is at least as effective as operative treatment without carrying the possible complications. Early functional treatment leads to the fastest recovery and the lowest rate of re-injury. Supervised rehabilitation reduces residual symptoms and re-injuries. Taking these recommendations into account, we here present an applicable and evidence-based step-by-step decision pathway for the diagnosis and treatment of acute ankle injuries, which can be implemented in any emergency department or doctor's practice. It provides quality assurance for the patient and confidence for the attending physician.

  18. Prepatellar and olecranon bursitis: literature review and development of a treatment algorithm.

    Science.gov (United States)

    Baumbach, Sebastian F; Lobo, Christopher M; Badyine, Ilias; Mutschler, Wolf; Kanz, Karl-Georg

    2014-03-01

    Olecranon bursitis and prepatellar bursitis are common entities, with a minimum annual incidence of 10/100,000, predominantly affecting male patients (80 %) aged 40-60 years. Approximately 1/3 of cases are septic (SB) and 2/3 of cases are non-septic (NSB), with substantial variations in treatment regimens internationally. The aim of the study was the development of a literature review-based treatment algorithm for prepatellar and olecranon bursitis. Following a systematic review of PubMed, the Cochrane Library, textbooks of emergency medicine and surgery, and a manual reference search, 52 relevant papers were identified. The initial differentiation between SB and NSB was based on clinical presentation, bursal aspirate, and blood sampling analysis. Physical findings suggesting SB were fever >37.8 °C, a prebursal temperature difference greater than 2.2 °C, and skin lesions. Relevant findings for the bursal aspirate were purulent aspirate, a reduced fluid-to-serum glucose ratio, cell count >3,000 cells/μl, polymorphonuclear cells >50 %, positive Gram staining, and positive culture. General treatment measures for SB and NSB consist of bursal aspiration, NSAIDs, and PRICE. For patients with confirmed NSB and high athletic or occupational demands, intrabursal steroid injection may be performed. In the case of SB, antibiotic therapy should be initiated. Surgical treatment, i.e., incision, drainage, or bursectomy, should be restricted to severe, refractory, or chronic/recurrent cases. The available evidence did not support the central European concept of immediate bursectomy in cases of SB. A conservative treatment regimen should be pursued, following bursal aspirate-based differentiation between SB and NSB.

  19. DEVELOPMENT OF THE SOCIAL TENSION RISK PREDICTING ALGORITHM IN THE POPULATION OF CERTAIN REGIONS OF RUSSIA

    Directory of Open Access Journals (Sweden)

    A. B. Mulik

    2017-01-01

    Full Text Available Aim. The aim of the study was to develop approaches to predicting the risk of social tension in the population of regions of the Russian Federation. Methods. Theoretical studies were based on the analysis of cartographic material from the National Atlas of Russia. Geo-information technologies were used to model the environmental load in the territories of selected regions of Russia. Experimental studies were performed using standard methods of psychophysiological testing, involving 336 persons of both sexes aged 18-23 years. Results. Total solar radiation was identified as the fundamental, biologically significant environmental factor differentiating the territory of the Russian Federation into areas with distinct actual physical effects. Model regions (Republic of Crimea, Rostov and Saratov regions) were then selected on the principle of minimizing regional differences in the associated environmental pressure per person. Experimental studies revealed persistent systemic relationships between phenotypic characteristics and a person's tendency to neuropsychic tension. A risk of social tension in the population of a study area is predicted when more than two thirds of the sample falls within the high level of general non-specific reactivity of the organism. Main conclusions. The use of northern latitude as an integral index for differentiating areas by the severity of the physical environmental factors affecting human activity is justified. The level of general non-specific reactivity of the organism can serve as a phenotypic marker of social tension risk. An algorithm is designed for predicting the risk of social tension among populations compactly living in certain territories of the Russian Federation.

  20. Developing algorithms for healthcare insurers to systematically monitor surgical site infection rates

    Directory of Open Access Journals (Sweden)

    Livingston James M

    2007-06-01

    Full Text Available Abstract Background: Claims data provide rapid indicators of SSIs for coronary artery bypass surgery and have been shown to successfully rank hospitals by SSI rates. We now operationalize this method for use by payers without transfer of protected health information, or any insurer data, to external analytic centers. Results: We performed a descriptive study testing the operationalization of software for payers to routinely assess surgical infection rates among hospitals where enrollees receive cardiac procedures. We developed five SAS programs and a user manual for direct use by health plans and payers. The manual and programs were refined following provision to two national insurers, who applied the programs to claims databases following instructions on data preparation, data validation, analysis, and verification and interpretation of program output. A final set of programs and user manual successfully guided health plan programmer analysts in applying the SSI algorithms to claims databases. Validation steps identified common problems such as incomplete preparation of data, missing data, insufficient sample size, and other issues that might result in program failure. Several user prompts enabled health plans to select time windows, strata such as insurance type, and the threshold number of procedures performed by a hospital before inclusion in regression models assessing relative SSI rates among hospitals. No health plan data were transferred to outside entities. The programs, on default settings, provided descriptive tables of SSI indicators stratified by hospital, insurer type, SSI indicator (inpatient, outpatient, antibiotic), and six-month period. Regression models provided rankings of hospital SSI indicator rates by quartiles, adjusted for comorbidities. The programs are publicly available without charge. Conclusion: We describe a free, user-friendly software package that enables payers to routinely assess and identify hospitals with potentially high SSI
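    The quartile ranking of hospital SSI-indicator rates can be illustrated with a small sketch. The published tools are SAS programs with comorbidity adjustment; this Python fragment computes only crude per-hospital rates and quartile ranks, and the data layout and names are assumptions.

```python
from collections import defaultdict

def hospital_quartiles(claims):
    """claims: iterable of (hospital_id, had_ssi_indicator) pairs."""
    counts = defaultdict(lambda: [0, 0])          # hospital -> [ssi, procedures]
    for hospital, ssi in claims:
        counts[hospital][0] += int(ssi)
        counts[hospital][1] += 1
    rates = {h: s / n for h, (s, n) in counts.items()}
    ordered = sorted(rates, key=rates.get)        # lowest rate first
    quartile = {h: 1 + (4 * i) // len(ordered) for i, h in enumerate(ordered)}
    return rates, quartile

claims = [("A", True), ("A", False), ("B", False), ("B", False),
          ("C", True), ("C", True), ("D", False), ("D", True)]
rates, quartile = hospital_quartiles(claims)
print(rates["C"], quartile["C"])   # 1.0 4
```

    The real programs additionally adjust for comorbidities via regression and apply a minimum-procedure threshold before ranking.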

  1. Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms

    Directory of Open Access Journals (Sweden)

    Jose R. Celaya

    2013-01-01

    Full Text Available As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator.

  2. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
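    The idea of a progressive algorithm, one that yields intermediate solutions approximating the final answer increasingly well, can be illustrated on a toy geometric problem (this example is not from the paper): computing the diameter of a point set while emitting a monotonically improving lower bound after each point.

```python
from math import dist

def progressive_diameter(points):
    """Yield after each point the diameter of the points seen so far:
    a non-decreasing sequence of intermediate solutions ending at the exact answer."""
    seen = []
    best = 0.0
    for p in points:
        for q in seen:
            best = max(best, dist(p, q))
        seen.append(p)
        yield best

pts = [(0, 0), (3, 4), (1, 1), (6, 8)]
print(list(progressive_diameter(pts)))  # [0.0, 5.0, 5.0, 10.0]
```

    A consumer can cut the computation off at any time and still hold a valid approximation, which is the defining property of the framework.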

  3. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  4. Algorithms Development in Detection of the Gelatinization Process during Enzymatic ‘Dodol’ Processing

    Directory of Open Access Journals (Sweden)

    Azman Hamzah

    2013-09-01

    Full Text Available Computer vision systems have found wide application in the food processing industry to perform quality evaluation. Such systems make it possible to replace human inspectors for the evaluation of a variety of quality attributes. This paper describes the implementation of the Fast Fourier Transform and Kalman filtering algorithms to detect the glutinous rice flour slurry (GRFS) gelatinization in an enzymatic 'dodol' processing. The onset of the GRFS gelatinization is critical in determining the quality of an enzymatic 'dodol'. Combinations of these two algorithms were able to detect the gelatinization of the GRFS. The result shows that the gelatinization of the GRFS was at the time range of 11.75 minutes to 14.75 minutes for 24 batches of processing. This paper will highlight the capability of computer vision using our proposed algorithms in monitoring and controlling of an enzymatic 'dodol' processing via image processing technology.
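    The filtering stage of such an onset detector can be sketched as a scalar Kalman filter applied to a noisy image-derived feature, with the onset flagged when the smoothed signal first crosses a threshold. The noise constants, the threshold and the feature values below are illustrative assumptions, not the authors' parameters.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Scalar Kalman filter with a random-walk state model."""
    x, p = measurements[0], 1.0
    smoothed = []
    for z in measurements:
        p += q                       # predict: state variance grows by q
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # correct with measurement z
        p *= 1 - k
        smoothed.append(x)
    return smoothed

def onset_index(values, threshold):
    """First index where the smoothed feature exceeds the threshold."""
    for i, v in enumerate(values):
        if v > threshold:
            return i
    return None

feature = [0.1, 0.0, 0.2, 0.1, 1.0, 1.1, 0.9, 1.0]   # noisy step around sample 4
print(onset_index(kalman_1d(feature), 0.3))
```

    The smoothing introduces a small detection lag, the usual trade-off against false triggers from frame-to-frame noise.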

  5. Algorithms Development in Detection of the Gelatinization Process during Enzymatic ‘Dodol’ Processing

    Directory of Open Access Journals (Sweden)

    Azman Hamzah

    2007-11-01

    Full Text Available Computer vision systems have found wide application in the food processing industry to perform quality evaluation. Such systems make it possible to replace human inspectors for the evaluation of a variety of quality attributes. This paper describes the implementation of the Fast Fourier Transform and Kalman filtering algorithms to detect the glutinous rice flour slurry (GRFS) gelatinization in an enzymatic ‘dodol’ processing. The onset of the GRFS gelatinization is critical in determining the quality of an enzymatic ‘dodol’. Combinations of these two algorithms were able to detect the gelatinization of the GRFS. The result shows that the gelatinization of the GRFS was at the time range of 11.75 minutes to 15.33 minutes for 20 batches of processing. This paper will highlight the capability of computer vision using our proposed algorithms in monitoring and controlling of an enzymatic ‘dodol’ processing via image processing technology.

  6. The Performance and Development of the Inner Detector Trigger Algorithms at ATLAS for LHC Run 2

    CERN Document Server

    Sowden, Benjamin Charles; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly reimplemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is provided. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2 for the HLT. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 but with no significant reduction in efficiency. The performance and timing of the algorithms for numerous physics signatures in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performan...

  7. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
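    The multiplicative EM (MLEM) update underlying the EM, OS-EM and MAP-EM variants compared above can be sketched in a few lines. The toy system matrix below stands in for the ray-traced fan-beam projector used in the study, and the noiseless data and iteration count are illustrative.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Basic MLEM: x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        x *= (A.T @ (y / proj)) / sens    # multiplicative EM update (no prior)
    return x

# Toy 3-projection, 2-pixel system with known activity [2, 3]:
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
y = A @ np.array([2.0, 3.0])
print(mlem(A, y))
```

    OS-EM applies the same update to subsets of projections per sub-iteration, and MAP-EM OSL adds a prior-gradient term to the denominator.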

  8. Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis

    Science.gov (United States)

    Whorton, M.; Buschek, H.; Calise, A. J.

    1994-01-01

    A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.
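    The fixed-order synthesis itself is involved, but the underlying homotopy idea can be illustrated on a scalar root-finding problem (an illustration of the continuation principle only, not the paper's algorithm): deform an easy problem g(x) = x - x0 into the target f(x) and track the solution as the deformation parameter goes from 0 to 1.

```python
def homotopy_root(f, df, x0, steps=100, newton_iters=5):
    """Track the root of H(x, t) = (1 - t)(x - x0) + t f(x) from t = 0 to t = 1."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):          # Newton correction at this t
            h = (1 - t) * (x - x0) + t * f(x)
            dh = (1 - t) + t * df(x)
            x -= h / dh
    return x

# Deform the trivial problem x - 1 = 0 into x**3 - 2 = 0:
root = homotopy_root(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, x0=1.0)
print(root)  # close to 2 ** (1/3)
```

    In the paper the same principle is applied in matrix form, with the compensator parameters tracked along the homotopy path.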

  9. Development of a meta-algorithm for guiding primary care encounters for patients with multimorbidity using evidence-based and case-based guideline development methodology.

    Science.gov (United States)

    Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin

    2017-06-22

    The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps: 1. Designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop. 2. Based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines was prepared. The recommendations were prioritised according to the clinical and psychosocial characteristics of the case vignettes. 3. Case vignettes along with the respective guideline recommendations were validated and specifically commented on by an external panel of practicing general practitioners (GPs). 4. Guideline recommendations and experts' opinions were summarised as case specific management recommendations (N-of-one guidelines). 5. Healthcare preferences of patients with multimorbidity were elicited from a systematic literature review and supplemented with information from qualitative interviews. 6. All N-of-one guidelines were analysed using pattern recognition to identify common decision nodes and care elements. These elements were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter of a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and hereby offer guidance to the practitioner.
Contrary to simple, symptom-oriented algorithms, the meta-algorithm

  10. Developing a Random Forest Algorithm for MODIS Global Burned Area Classification

    Directory of Open Access Journals (Sweden)

    Rubén Ramo

    2017-11-01

    Full Text Available This paper aims to develop a global burned area (BA) algorithm for MODIS BRDF-corrected images based on the Random Forest (RF) classifier. Two RF models were generated, including: (1) all MODIS reflective bands; and (2) only the red (R) and near infrared (NIR) bands. Active fire information, vegetation indices and auxiliary variables were taken into account as well. Both RF models were trained using a statistically designed sample of 130 reference sites, which took into account the global diversity of fire conditions. For each site, fire perimeters were obtained from multitemporal pairs of Landsat TM/ETM+ images acquired in 2008. Those fire perimeters were used to extract burned and unburned areas to train the RF models. Using the standard MCD43A4 resolution (500 × 500 m), the training dataset included 48,365 burned pixels and 6,293,205 unburned pixels. Different combinations of number of trees and number of parameters were tested. The final RF models included 600 trees and 5 attributes. The RF full model (considering all bands) provided a balanced accuracy of 0.94, while the RF RNIR model had 0.93. As a first assessment of these RF models, they were used to classify daily MCD43A4 images in three test sites for three consecutive years (2006–2008). The selected sites covered tropical (Australia), boreal (Canada) and temperate (California) ecosystems, with extended coverage (totaling more than 2,500,000 km2). Results from both RF models for those sites were compared with national fire perimeters, as well as with two existing BA MODIS products, the MCD45 and MCD64. Considering all three years and three sites, the commission error for the RF full model was 0.16, with an omission error of 0.23. For the RF RNIR model, these errors were 0.19 and 0.21, respectively. The existing MODIS BA products had lower commission errors, but higher omission errors (0.09 and 0.33 for the MCD45 and 0.10 and 0.29 for the MCD64) than those obtained with the RF models, and
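    Given the extreme class imbalance in the training data (48,365 burned vs. 6,293,205 unburned pixels), balanced accuracy, the mean of sensitivity and specificity, is the natural headline metric. A minimal implementation with made-up labels shows why plain accuracy would mislead here:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (burned recall) and specificity (unburned recall)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]          # 20% burned, 80% unburned

# Predicting "unburned" everywhere gets 80% plain accuracy but only 0.5 here:
print(balanced_accuracy(y_true, [0] * 10))                               # 0.5
print(balanced_accuracy(y_true, [1, 1, 0, 0, 0, 0, 0, 0, 0, 1]))         # 0.9375
```

    The paper's commission/omission errors are the complementary per-class view of the same confusion matrix.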

  11. Development and validation of a prediction algorithm for the onset of common mental disorders in a working population.

    Science.gov (United States)

    Fernandez, Ana; Salvador-Carulla, Luis; Choi, Isabella; Calvo, Rafael; Harvey, Samuel B; Glozier, Nicholas

    2018-01-01

    Common mental disorders are the most common reason for long-term sickness absence in most developed countries. Prediction algorithms for the onset of common mental disorders may help target indicated work-based prevention interventions. We aimed to develop and validate a risk algorithm to predict the onset of common mental disorders at 12 months in a working population. We conducted a secondary analysis of the Household, Income and Labour Dynamics in Australia Survey, a longitudinal, nationally representative household panel in Australia. Data from the 6189 working participants who did not meet the criteria for a common mental disorder at baseline were non-randomly split into training and validation databases, based on state of residence. Common mental disorders were assessed with the mental component score of the 36-Item Short Form Health Survey questionnaire (score ⩽45). Risk algorithms were constructed following recommendations made by the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Different risk factors were identified among women and men for the final risk algorithms. In the training data, the model for women had a C-index of 0.73 and an effect size (Hedges' g) of 0.91. In men, the C-index was 0.76 and the effect size was 1.06. In the validation data, the C-index was 0.66 for women and 0.73 for men, with positive predictive values of 0.28 and 0.26, respectively. Conclusion: It is possible to develop an algorithm with good discrimination for the onset of common mental disorders, identifying overall and modifiable risks, among working men. Such models have the potential to change the way that prevention of common mental disorders at the workplace is conducted, but different models may be required for women.
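    For a binary outcome such as 12-month onset, the C-index reported above reduces to the probability that a randomly chosen case is assigned a higher risk score than a randomly chosen non-case, with ties counting one half. A direct pairwise implementation with made-up scores:

```python
def c_index(scores, outcomes):
    """Concordance index for binary outcomes (equivalent to the ROC AUC)."""
    pairs = concordant = 0.0
    for i, (s_i, y_i) in enumerate(zip(scores, outcomes)):
        for s_j, y_j in zip(scores[i + 1:], outcomes[i + 1:]):
            if y_i == y_j:
                continue                       # only case/non-case pairs count
            pairs += 1
            case, noncase = (s_i, s_j) if y_i == 1 else (s_j, s_i)
            if case > noncase:
                concordant += 1                # case correctly ranked higher
            elif case == noncase:
                concordant += 0.5              # tie counts one half
    return concordant / pairs

print(c_index([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # 0.75
```

    A C-index of 0.5 is chance-level ranking, which is why the validation values of 0.66 (women) and 0.73 (men) are read as modest-to-good discrimination.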

  12. Development and Verification of the Tire/Road Friction Estimation Algorithm for Antilock Braking System

    Directory of Open Access Journals (Sweden)

    Jian Zhao

    2014-01-01

    Full Text Available Road friction information is very important for vehicle active braking control systems such as ABS, ASR, or ESP. It is not easy to estimate the tire/road friction forces and coefficient accurately because of system nonlinearity, parameter uncertainties, and signal noise. In this paper, a robust and effective tire/road friction estimation algorithm for ABS is proposed, and its performance is further discussed by simulation and experiment. The tire forces were observed by a discrete Kalman filter, and the road friction coefficient was then estimated by the recursive least squares method. The proposed algorithm was analysed and verified by simulation and road tests. A sliding-mode-based ABS with smooth wheel slip ratio control and a threshold-based ABS with pulse pressure control, which exhibits significant fluctuations, were used for the simulation. Finally, road tests were carried out in both winter and summer with a car equipped with the same threshold-based ABS, and the algorithm was evaluated on different road surfaces. The results show that the proposed algorithm can identify the variation of road conditions with considerable accuracy and response speed.
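    The second stage, recursive least squares estimation of the friction coefficient, can be sketched for the scalar model F ≈ μ·Fz, where Fz is the normal load and F the friction force supplied by the first (Kalman observer) stage. The forgetting factor, initial covariance and simulated data are illustrative assumptions.

```python
def rls_mu(loads, forces, lam=0.98):
    """Recursive least squares for F = mu * Fz with forgetting factor lam."""
    mu, p = 0.0, 1000.0                      # initial estimate and covariance
    for fz, f in zip(loads, forces):
        k = p * fz / (lam + fz * p * fz)     # gain
        mu += k * (f - fz * mu)              # update with the innovation
        p = (p - k * fz * p) / lam           # covariance update
    return mu

loads = [4000.0, 4100.0, 3900.0, 4050.0]           # normal loads in N
forces = [0.7 * fz for fz in loads]                # simulated data, true mu = 0.7
print(round(rls_mu(loads, forces), 4))  # 0.7
```

    The forgetting factor below 1 lets the estimate track a change of road surface, e.g. a transition from dry asphalt toward a lower-μ winter surface.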

  13. Data-driven Development of ROTEM and TEG Algorithms for the Management of Trauma Hemorrhage

    DEFF Research Database (Denmark)

    Baksaas-Aasen, Kjersti; Van Dieren, Susan; Balvers, Kirsten

    2018-01-01

    for ROTEM, TEG, and CCTs to be used in addition to ratio driven transfusion and tranexamic acid. CONCLUSIONS: We describe a systematic approach to define threshold parameters for ROTEM and TEG. These parameters were incorporated into algorithms to support data-driven adjustments of resuscitation...

  14. Development of a New Fractal Algorithm to Predict Quality Traits of MRI Loins

    DEFF Research Database (Denmark)

    Caballero, Daniel; Caro, Andrés; Amigo, José Manuel

    2017-01-01

    Traditionally, the quality traits of meat products have been estimated by means of physico-chemical methods. Computer vision algorithms on MRI have also been presented as an alternative to these destructive methods since MRI is non-destructive, non-ionizing and innocuous. The use of fractals...

  15. DEVELOPMENT OF THE UNIVERSAL ALGORITHM FOR THE AIRSPACE CONFLICTS RESOLUTION WITH AN AIRPLANE EN-ROUTE

    Directory of Open Access Journals (Sweden)

    N. A. Petrov

    2014-01-01

    Full Text Available The paper outlines the formulation and solution of the problem of airplane trajectory control under dynamically changing flight conditions. The physical and mathematical formulation of the problem is justified, and algorithms are proposed to solve it using modern automated technologies.

  16. Development and validation of an algorithm for identifying urinary retention in a cohort of patients with epilepsy in a large US administrative claims database.

    Science.gov (United States)

    Quinlan, Scott C; Cheng, Wendy Y; Ishihara, Lianna; Irizarry, Michael C; Holick, Crystal N; Duh, Mei Sheng

    2016-04-01

    The aim of this study was to develop and validate an insurance claims-based algorithm for identifying urinary retention (UR) in epilepsy patients receiving antiepileptic drugs to facilitate safety monitoring. Data from the HealthCore Integrated Research Database(SM) in 2008-2011 (retrospective) and 2012-2013 (prospective) were used to identify epilepsy patients with UR. During the retrospective phase, three algorithms identified potential UR: (i) UR diagnosis code with a catheterization procedure code; (ii) UR diagnosis code alone; or (iii) diagnosis with UR-related symptoms. Medical records for 50 randomly selected patients satisfying ≥1 algorithm were reviewed by urologists to ascertain UR status. Positive predictive value (PPV) and 95% confidence intervals (CI) were calculated for the three component algorithms and the overall algorithm (defined as satisfying ≥1 component algorithms). Algorithms were refined using urologist review notes. In the prospective phase, the UR algorithm was refined using medical records for an additional 150 cases. In the retrospective phase, the PPV of the overall algorithm was 72.0% (95%CI: 57.5-83.8%). Algorithm 3 performed poorly and was dropped. Algorithm 1 was unchanged; urinary incontinence and cystitis were added as exclusionary diagnoses to Algorithm 2. The PPV for the modified overall algorithm was 89.2% (74.6-97.0%). In the prospective phase, the PPV for the modified overall algorithm was 76.0% (68.4-82.6%). Upon adding overactive bladder, nocturia and urinary frequency as exclusionary diagnoses, the PPV for the final overall algorithm was 81.9% (73.7-88.4%). The current UR algorithm yielded a PPV > 80% and could be used for more accurate identification of UR among epilepsy patients in a large claims database. Copyright © 2016 John Wiley & Sons, Ltd.
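    The PPV figures above are confirmed cases divided by algorithm-flagged cases, with a confidence interval around that proportion. The sketch below uses a Wilson score 95% interval; the record does not state its interval method (the reported 57.5-83.8% looks like an exact binomial interval), so this choice is an assumption. The 36-of-50 example reproduces the retrospective-phase point estimate of 72.0%.

```python
from math import sqrt

def ppv_wilson(confirmed, flagged, z=1.96):
    """PPV with a Wilson score interval for a binomial proportion."""
    p = confirmed / flagged
    denom = 1 + z ** 2 / flagged
    centre = (p + z ** 2 / (2 * flagged)) / denom
    half = z * sqrt(p * (1 - p) / flagged + z ** 2 / (4 * flagged ** 2)) / denom
    return p, centre - half, centre + half

p, lo, hi = ppv_wilson(36, 50)   # 36 of 50 chart-reviewed cases confirmed
print(round(p, 3), round(lo, 3), round(hi, 3))
```

    The Wilson bounds land close to, but not exactly on, the reported interval, as expected when the interval methods differ.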

  17. Column Reduction of Polynomial Matrices; Some Remarks on the Algorithm of Wolovich

    NARCIS (Netherlands)

    Praagman, C.

    1996-01-01

    Recently an algorithm has been developed for column reduction of polynomial matrices. In a previous report the authors described a Fortran implementation of this algorithm. In this paper we compare the results of that implementation with an implementation of the algorithm originally developed by

  18. Development and Evaluation of Algorithms to Improve Small- and Medium-Size Commercial Building Operations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Woohyun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lutes, Robert G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Underhill, Ronald M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    Small- and medium-sized (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability and no monitoring or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically utilize packaged rooftop units (RTUs) that are controlled by individual thermostats. There is increased urgency to improve the operating efficiency of the existing commercial building stock in the U.S. for many reasons, chief among them mitigating climate change impacts. Studies have shown that managing set points and schedules of the RTUs will result in up to 20% energy and cost savings. Another problem associated with RTUs is short-cycling, where an RTU goes through ON and OFF cycles too frequently. Excessive cycling can cause excessive wear and premature failure of the compressor or its components. Short-cycling can result in significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Also, SMBs use time-of-day scheduling to start the RTUs before the building is occupied and shut them off when it is unoccupied. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby making building operations persistent, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. This report describes three algorithms, deployable on such low-cost infrastructure, for detecting the zone set point temperature, the RTU cycling rate, and the occupancy schedule. These algorithms require only zone temperature data for detection.
The algorithms have been tested and validated using
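    Because the algorithms see only the zone temperature, cycling must be inferred from the temperature trace itself. The sketch below (an assumption about the approach, not PNNL's published method) counts cooling-to-warming direction reversals, each of which approximates one compressor OFF event, and normalizes to cycles per hour; the deadband and sample data are illustrative.

```python
def cycles_per_hour(temps, minutes_per_sample, deadband=0.1):
    """Estimate RTU cycling rate from a zone-temperature series alone."""
    reversals = 0
    direction = 0                        # +1 warming, -1 cooling, 0 unknown
    for prev, cur in zip(temps, temps[1:]):
        delta = cur - prev
        if abs(delta) < deadband:        # ignore sensor noise / flat spells
            continue
        d = 1 if delta > 0 else -1
        if direction == -1 and d == 1:   # cooling ended -> compressor cycled off
            reversals += 1
        direction = d
    hours = len(temps) * minutes_per_sample / 60
    return reversals / hours

# 1-minute samples; sawtooth between 71 and 73 F gives four cooling runs in 20 min:
temps = [73, 72, 71, 72, 73, 72, 71, 72, 73, 72,
         71, 72, 73, 72, 71, 72, 73, 73, 73, 73]
print(cycles_per_hour(temps, minutes_per_sample=1))
```

    A rate well above a few cycles per hour would flag the short-cycling condition described above for investigation.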

  19. Development and Evaluation of an Automated Machine Learning Algorithm for In-Hospital Mortality Risk Adjustment Among Critical Care Patients.

    Science.gov (United States)

    Delahanty, Ryan J; Kaufman, David; Jones, Spencer S

    2018-06-01

    Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals) and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set, while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under the receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs.
The Risk of Inpatient Death
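    The "adjusted Brier score" above is commonly the scaled Brier score, 1 - Brier / Brier_max, where Brier_max is the score of always predicting the base mortality rate; that reading of the record is an assumption. A minimal implementation with made-up predictions:

```python
def scaled_brier(probs, outcomes):
    """Scaled (adjusted) Brier score: 1 at perfect, 0 at base-rate prediction."""
    n = len(outcomes)
    brier = sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / n
    base = sum(outcomes) / n
    brier_max = base * (1 - base)     # Brier of predicting the base rate everywhere
    return 1 - brier / brier_max

probs = [0.9, 0.8, 0.2, 0.1, 0.3, 0.05]
outcomes = [1, 1, 0, 0, 1, 0]
print(round(scaled_brier(probs, outcomes), 3))  # 0.605
```

    Unlike the raw Brier score, the scaled version is comparable across cohorts with different base mortality rates, which matters for multi-hospital validation sets like this one.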

  20. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3).
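    The accept/reject stage common to these algorithms is the basic Metropolis step. A toy sketch for a single scalar "field" with action S(x) = x²/2 (illustrative only; lattice gauge simulations update link variables with a gauge action, and the hybrid schemes mentioned above propose whole-field moves via Langevin or molecular-dynamics evolution):

```python
import math, random

def metropolis_step(x, step, rng):
    """One Metropolis accept/reject update for the toy action S(x) = x**2 / 2."""
    x_new = x + rng.uniform(-step, step)
    dS = 0.5 * x_new ** 2 - 0.5 * x ** 2       # change in the action
    if dS <= 0 or rng.random() < math.exp(-dS):
        return x_new                            # accept the proposal
    return x                                    # reject, keep the old value

rng = random.Random(1)
x, total = 0.0, 0.0
n = 20000
for _ in range(n):
    x = metropolis_step(x, 1.5, rng)
    total += x * x
print(total / n)   # should approach <x^2> = 1 for this Gaussian action
```

    The same accept/reject logic, applied globally after an approximate trajectory, is what removes the step-size extrapolation error discussed in the abstract.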

  1. Development of an algorithm for X-ray exposures using the Panasonic UD-802A thermoluminescent dosemeter

    International Nuclear Information System (INIS)

    McKittrick, Leo; Currivan, Lorraine; Pollard, David; Nicholls, Colyn; Romero, A.M.; Palethorpe, Jeffrey

    2008-01-01

    Full text: As part of its continuous quality improvement, the Dosimetry Service of the Radiological Protection Institute of Ireland (RPII), in conjunction with Panasonic Industrial Europe (UK), has investigated further the use of the standard Panasonic algorithm for X-ray exposures using the Panasonic UD-802A TL dosemeter. Originally developed to satisfy the obsolete standard ANSI 13.11-1983, the standard Panasonic dose algorithm has undergone several revisions, such as for HPS N13.11-2001. This paper presents a dose algorithm that corrects the dose response at low energies, such as X-ray radiation, by exploiting the behaviour of the two different independent phosphors in a four-element TL dosemeter. A series of irradiations over a range of energies, from N-20 up to Co-60, was carried out, our particular interest being in the responses to X-ray irradiations. Irradiations were performed at RRPPS, University Hospital Birmingham NHS Foundation Trust, U.K.; HPA, U.K.; and CIEMAT, Madrid, Spain. Different irradiation conditions were employed, including X-rays from the narrow and wide spectra described by ISO 4037-1 (1996), on ISO water slab and PMMA slab phantoms respectively. Using the UD-802A TLD and UD-854AT hanger combination, the response data from the series of irradiations were utilised to validate and, if necessary, modify the photon/beta branches of the algorithm to: 1. best estimate Hp(10) and Hp(0.07); 2. provide information on irradiation energies; 3. allow verification by performance tests. This work further advances the algorithm developed at CIEMAT, whereby a best-fit polynomial trend is used with the dose response variations between the independent phosphors. (author)
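    The best-fit polynomial idea mentioned at the end can be sketched as follows: the ratio of the responses of the two phosphor types varies with photon energy, so a polynomial fitted to calibration data maps that ratio to a dose-correction factor. All numbers below are hypothetical, not the RPII/CIEMAT calibration data.

```python
import numpy as np

# (phosphor response ratio, required dose-correction factor) pairs from
# hypothetical X-ray irradiations at known energies
ratio = np.array([0.8, 1.0, 1.2, 1.4, 1.6])
factor = np.array([1.30, 1.10, 1.00, 0.97, 0.95])

coeffs = np.polyfit(ratio, factor, 2)          # best-fit quadratic trend

def corrected_dose(raw_dose, element_ratio):
    """Scale the raw reading by the ratio-dependent correction factor."""
    return raw_dose * np.polyval(coeffs, element_ratio)

print(corrected_dose(1.0, 0.9))   # correction interpolated between calibration points
```

    Because the ratio also encodes the beam quality, the same fit can support the second goal above, reporting an estimate of the irradiation energy.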

  2. Development of an algorithm for assessing the risk to food safety posed by a new animal disease.

    Science.gov (United States)

    Parker, E M; Jenson, I; Jordan, D; Ward, M P

    2012-05-01

    An algorithm was developed as a tool to rapidly assess the potential for a new or emerging disease of livestock to adversely affect humans via consumption or handling of meat product, so that the risks and uncertainties can be understood and appropriate risk management and communication implemented. An algorithm describing the sequence of events from occurrence of the disease in livestock, release of the causative agent from an infected animal, contamination of fresh meat and then possible adverse effects in humans following meat handling and consumption was created. A list of questions complements the algorithm to help the assessors address the issues of concern at each step of the decision pathway. The algorithm was refined and validated through consultation with a panel of experts and a review group of animal health and food safety policy advisors via five case studies of potential emerging diseases of cattle. Tasks for model validation included describing the path taken in the algorithm and stating an outcome. Twenty-nine per cent of the 62 experts commented on the model, and one-third of those responding also completed the tasks required for model validation. The feedback from the panel of experts and the review group was used to further develop the tool and remove redundancies and ambiguities. There was agreement in the pathways and assessments for diseases in which the causative agent was well understood (for example, bovine pneumonia due to Mycoplasma bovis). The stated pathways and assessments of other diseases (for example, bovine Johne's disease) were not as consistent. The framework helps to promote objectivity by requiring questions to be answered sequentially and providing the opportunity to record consensus or differences of opinion. Areas for discussion and future investigation are highlighted by the points of diversion on the pathway taken by different assessors. © 2011 Blackwell Verlag GmbH.

  3. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    Energy Technology Data Exchange (ETDEWEB)

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real-time performance? This report presents the results of a three-year algorithm and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) processes for identifying contamination events from sparse observations, (3) characterization of uncertainty through accurate demand forecasts and investigation of uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.

  4. DOE Research and Development Accomplishments Previous Highlights

    Science.gov (United States)

    …through his Nobel Lecture in 1961, about unraveling the secrets of photosynthesis -- the process by which… March 10, 2015: Twenty years ago, the top quark was first observed in experiments at the Tevatron proton… sophisticated detectors, the top was hard to find. After a top is made from a proton-antiproton collision, a…

  5. Development and clinical implementation of an enhanced display algorithm for use in networked electronic portal imaging

    International Nuclear Information System (INIS)

    Heuvel, Frank van den; Han, Ihn; Chungbin, Suzanne; Strowbridge, Amy; Tekyi-Mensah, Sam; Ragan, Don P.

    1999-01-01

    Purpose: To introduce and clinically validate a preprocessing algorithm that allows clinical images from an electronic portal imaging device (EPID) to be displayed on any computer monitor without loss of clinical usability. Such a system frees EPI systems from the constraints of fixed viewing workstations and increases the mobility of images within a department. Methods and Materials: The preprocessing algorithm, together with its variable parameters, is introduced. Clinically, the algorithm was tested in an observer study of 316 EPID images of the pelvic region acquired during treatment of carcinoma of the cervix and endometrium. Both anterior-posterior (AP/PA) and latero-lateral (LAT) images were used. The images scored were taken from six different patients, five of whom were obese, female, and postmenopausal. The result is tentatively compared with results from other groups. A scoring system, based on the number of visible landmarks in the port, is proposed and validated. Validation was performed by having the observer panel score images with artificially induced noise levels. A comparative study was undertaken against a standard automatic window-and-level display technique. Finally, some case studies using different imaging sites and EPI detectors are presented. Results: The image quality of all images in this study was deemed clinically useful (mean score > 1). Most images received the second-highest score (AP/PA landmarks ≥ 6 and LAT landmarks ≥ 5). Obesity, which has previously been an important factor determining image quality, was not a factor here. Compared with standard techniques, a highly significant improvement in clinical usefulness was found. The algorithm runs quickly (under 9 seconds) and needs no additional user interaction in most cases. The algorithm works well on both direct-detection portal imagers and camera-based imagers, whether with analog or digital cameras

  6. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  7. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances...... is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP...... is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP- algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples....

  8. Algorithm Development and Validation of CDOM Properties for Estuarine and Continental Shelf Waters Along the Northeastern U.S. Coast

    Science.gov (United States)

    Mannino, Antonio; Novak, Michael G.; Hooker, Stanford B.; Hyde, Kimberly; Aurin, Dick

    2014-01-01

    An extensive set of field measurements has been collected throughout the continental margin of the northeastern U.S. from 2004 to 2011 to develop and validate ocean color satellite algorithms for the retrieval of the absorption coefficient of chromophoric dissolved organic matter (aCDOM) and CDOM spectral slopes for the 275:295 nm and 300:600 nm spectral ranges (S275:295 and S300:600). Remote sensing reflectance (Rrs) measurements computed from in-water radiometry profiles, along with aCDOM(λ) data, are applied to develop several types of algorithms for the SeaWiFS and MODIS-Aqua ocean color satellite sensors, which involve least squares linear regression of aCDOM(λ) with (1) Rrs band ratios, (2) quasi-analytical algorithm-based (QAA-based) products of total absorption coefficients, (3) multiple Rrs bands within a multiple linear regression (MLR) analysis, and (4) the diffuse attenuation coefficient (Kd). The relative error (mean absolute percent difference; MAPD) for the MLR retrievals of aCDOM(275), aCDOM(355), aCDOM(380), aCDOM(412) and aCDOM(443) for our study region ranges from 20.4% to 23.9% for MODIS-Aqua and from 27.3% to 30% for SeaWiFS. Because of the narrower range of CDOM spectral slope values, the MAPD values for the MLR S275:295 and QAA-based S300:600 algorithms are much lower: 9.9% and 8.3% for SeaWiFS and 8.7% and 6.3% for MODIS, respectively. Seasonal and spatial MODIS-Aqua and SeaWiFS distributions of aCDOM, S275:295 and S300:600 processed with these algorithms are consistent with field measurements and the processes that impact CDOM levels along the continental shelf of the northeastern U.S. Several satellite data processing factors correlate with higher uncertainty in satellite retrievals of aCDOM, S275:295 and S300:600 within the coastal ocean, including solar zenith angle, sensor viewing angle, and atmospheric products applied for atmospheric corrections. Algorithms that include ultraviolet Rrs bands provide a better fit to field measurements than
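    The MLR variant described above (regressing aCDOM on several Rrs bands) can be sketched on synthetic data. The three bands, coefficients, noise level, and intercept below are invented for illustration only; they are not the paper's fitted values.

```python
import numpy as np

# Synthetic "field" data: aCDOM(412) generated from three Rrs bands.
rng = np.random.default_rng(0)
n = 200
rrs = rng.uniform(0.001, 0.01, size=(n, 3))              # Rrs at 3 bands (sr^-1)
true_coef = np.array([120.0, 60.0, 40.0])                # invented coefficients
acdom = rrs @ true_coef + 0.1 + rng.normal(0, 0.01, n)   # aCDOM(412), m^-1

# Multiple linear regression: least squares with an intercept column.
X = np.column_stack([np.ones(n), rrs])
coef, *_ = np.linalg.lstsq(X, acdom, rcond=None)
pred = X @ coef

# Mean absolute percent difference (MAPD), the error metric quoted above.
mapd = 100 * np.mean(np.abs(pred - acdom) / acdom)
```

On this noise level the regression recovers the band coefficients closely and the MAPD stays in the low single digits, illustrating how the metric behaves for a well-fit MLR.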

  9. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    Science.gov (United States)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
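    The core of the multispectral rule above is a per-pixel ratio of fluorescence intensities at 666 nm and 688 nm. A minimal sketch follows; the threshold value and the toy intensities are illustrative assumptions, not the decision rule derived in the paper.

```python
import numpy as np

def detect_feces(img_666, img_688, threshold=1.0):
    """Return a boolean mask of pixels whose fluorescence ratio
    I(666 nm) / I(688 nm) exceeds the threshold (threshold is illustrative)."""
    ratio = img_666 / np.clip(img_688, 1e-6, None)  # guard against divide-by-zero
    return ratio > threshold

# Toy 2x2 "image": one contaminated pixel with elevated 666 nm fluorescence.
i666 = np.array([[5.0, 1.0], [1.2, 0.9]])
i688 = np.array([[2.0, 2.0], [2.0, 2.0]])
mask = detect_feces(i666, i688)
```

Only the top-left pixel exceeds the ratio threshold, so it is the single flagged pixel in the mask.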

  10. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

    Science.gov (United States)

    Gulshan, Varun; Peng, Lily; Coram, Marc; Stumpe, Martin C; Wu, Derek; Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Widner, Kasumi; Madams, Tom; Cuadros, Jorge; Kim, Ramasamy; Raman, Rajiv; Nelson, Philip C; Mega, Jessica L; Webster, Dale R

    2016-12-13

    Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Deep learning-trained algorithm. The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0

  11. Development of fuzzy algorithm with learning function for nuclear steam generator level control

    International Nuclear Information System (INIS)

    Park, Gee Yong; Seong, Poong Hyun

    1993-01-01

    A fuzzy algorithm with learning function is applied to the steam generator level control of nuclear power plant. This algorithm can make its rule base and membership functions suited for steam generator level control by use of the data obtained from the control actions of a skilled operator or of other controllers (i.e., PID controller). The rule base of fuzzy controller with learning function is divided into two parts. One part of the rule base is provided to level control of steam generator at low power level (0 % - 30 % of full power) and the other to level control at high power level (30 % - 100 % of full power). Response time of steam generator level control at low power range with this rule base is shown to be shorter than that of fuzzy controller with direct inference. (Author)
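    The structure described above (a fuzzy rule base split by power range, driving a level-control action) can be sketched as follows. The membership functions, gains, and rule outputs are invented placeholders for illustration; the paper's controller learns its rule base and membership functions from operator or PID data.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def feedwater_correction(level_error, power_pct):
    """Fuzzy correction to the feedwater valve for a given level error (m)
    and reactor power (%). Two rule bases, selected by power range, are
    emulated here with a simple gain switch (an illustrative assumption)."""
    gain = 1.5 if power_pct < 30 else 0.8   # low-power range acts more strongly
    low  = tri(level_error, -2.0, -1.0, 0.0)   # "level too low"
    ok   = tri(level_error, -1.0,  0.0, 1.0)   # "level near setpoint"
    high = tri(level_error,  0.0,  1.0, 2.0)   # "level too high"
    # Weighted-average (centre-of-gravity style) defuzzification.
    num = low * (+1.0) + ok * 0.0 + high * (-1.0)
    den = low + ok + high
    return gain * (num / den if den else 0.0)
```

A negative level error (level below setpoint) yields a positive valve correction, a positive error yields a negative one, and a zero error yields no action.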

  12. Development of a computationally efficient algorithm for attitude estimation of a remote sensing satellite

    Science.gov (United States)

    Labibian, Amir; Bahrami, Amir Hossein; Haghshenas, Javad

    2017-09-01

    This paper presents a computationally efficient algorithm for attitude estimation of a remote sensing satellite. In this study, a gyro, a magnetometer, a sun sensor and a star tracker are used in an Extended Kalman Filter (EKF) structure for the purpose of Attitude Determination (AD). However, utilizing all of the measurement data simultaneously in the EKF structure increases the computational burden. Specifically, assuming n observation vectors, the inverse of a 3n×3n matrix is required for gain calculation. In order to solve this problem, an efficient version of the EKF, namely Murrell's version, is employed. This method utilizes the measurements separately at each sampling time for gain computation. Therefore, the inverse of a 3n×3n matrix is replaced by the inverse of a 3×3 matrix for each measurement vector. Moreover, gyro drift over time can reduce pointing accuracy; therefore, a calibration algorithm is utilized to estimate the main gyro parameters.
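    The sequential (Murrell-style) measurement update can be sketched as follows: each 3-D observation is processed on its own, so only a 3×3 innovation matrix is inverted per sensor instead of one 3n×3n inverse for n sensors. The toy state vector, observation matrices, and noise covariance below are illustrative placeholders, not the satellite's actual models.

```python
import numpy as np

def sequential_update(x, P, measurements):
    """Sequential EKF measurement update.
    measurements: list of (z, H, R) with z (3,), H (3,n), R (3,3)."""
    for z, H, R in measurements:
        S = H @ P @ H.T + R                 # 3x3 innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # gain: only a 3x3 inverse needed
        x = x + K @ (z - H @ x)             # state correction
        P = (np.eye(len(x)) - K @ H) @ P    # covariance update
    return x, P

# Toy example: a 6-state vector observed by two 3-axis sensors.
n = 6
x = np.zeros(n)
P = np.eye(n)
H1 = np.hstack([np.eye(3), np.zeros((3, 3))])   # sensor 1 sees states 0-2
H2 = np.hstack([np.zeros((3, 3)), np.eye(3)])   # sensor 2 sees states 3-5
R = 0.1 * np.eye(3)
z1, z2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])
x_new, P_new = sequential_update(x, P, [(z1, H1, R), (z2, H2, R)])
```

Each observed state is pulled toward its measurement by the factor 1/1.1 (prior variance 1 against noise 0.1), and the covariance shrinks accordingly.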

  13. Development and Beam Tests of an Automatic Algorithm for Alignment of LHC Collimators with Embedded BPMs

    CERN Document Server

    Valentino, G; Gasior, M; Mirarchi, D; Nosych, A A; Redaelli, S; Salvachua, B; Assmann, R W; Sammut, N

    2013-01-01

    Collimators with embedded Beam Position Monitor (BPM) buttons will be installed in the LHC during the upcoming long shutdown period. During the subsequent operation, the BPMs will allow the collimator jaws to be kept centered around the beam trajectory. In this manner, the best possible beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation, as the BPM measurements are affected by non-linearities, which vary with the distance between opposite buttons, as well as the difference between the beam and the jaw centers. The successful test results, as well as some considerations for eventual operation in the LHC are also presented.

  14. Development and testing of an algorithm to detect implantable cardioverter-defibrillator lead failure.

    Science.gov (United States)

    Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A

    2006-02-01

    Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats fell below a threshold, the episode was classified as noise due to lead failure. Stored episodes of lead failure noise and ventricular tachyarrhythmias were used to test the ability of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than that of the 135 ventricular tachycardia episodes (40.8 +/- 43.0 mV-mV, P <.0001) and the 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity to detect lead failure noise compared with ventricular tachycardia or fibrillation.

  15. Development of a Response Planner using the UCT Algorithm for Cyber Defense

    Science.gov (United States)

    2013-03-01

    President Obama as “one of the most serious economic and national security challenges we face as a nation, but one that we as a government, or as a country...an expensive, exhaustive search. Some other data sets, similar to KDD 99, used with the SFS feature selection algorithm include the Auckland-VI, NZIX-II...complex task, but there have been advancements that help guide the searching of these state-based planners to make them faster. Some state-based

  16. Development of GPS Receiver Kalman Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications

    Science.gov (United States)

    2016-06-01

    Filter Algorithms for Stationary, Low-Dynamics, and High-Dynamics Applications Executive Summary The Global Positioning System (GPS) is the primary...software that may need to be developed for performance prediction of current or future systems that incorporate GPS. The ultimate aim is to help inform...Defence Science and Technology Organisation in 1986. His major areas of work were adaptive tracking, signal processing, and radar systems engineering

  17. Selection and collection of multi parameter physiological data for cardiac rhythm diagnostic algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Bostock, J.; Weller, P. [School of Informatics, City University London, London EC1V 0HB (United Kingdom); Cooklin, M., E-mail: jbostock1@msn.co [Cardiovascular Directorate, Guy's and St. Thomas' NHS Foundation Trust, London, SE1 7EH (United Kingdom)

    2010-07-01

    Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. These algorithms can misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case note review analysed arrhythmic events stored in patients' ICD memory; 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00 and specificity 0.69 (p<0.001 versus the gold standard). A subset of data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer and 1 output had sensitivity 1.00 and specificity 0.71 (p<0.001). A prospective study was performed using KE to list arrhythmias, factors and indicators, for which measurable parameters were evaluated and the results reviewed by a domain expert. Waveforms from electrodes in the heart and thoracic bio-impedance, temperature and motion data were collected from 65 patients during cardiac electrophysiological studies. Five incomplete datasets were due to technical failures. We concluded that KE successfully guided the selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.

  18. Selection and collection of multi parameter physiological data for cardiac rhythm diagnostic algorithm development

    International Nuclear Information System (INIS)

    Bostock, J.; Weller, P.; Cooklin, M.

    2010-01-01

    Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. These algorithms can misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case note review analysed arrhythmic events stored in patients' ICD memory; 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00 and specificity 0.69 (p<0.001 versus the gold standard). A subset of data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer and 1 output had sensitivity 1.00 and specificity 0.71 (p<0.001). A prospective study was performed using KE to list arrhythmias, factors and indicators, for which measurable parameters were evaluated and the results reviewed by a domain expert. Waveforms from electrodes in the heart and thoracic bio-impedance, temperature and motion data were collected from 65 patients during cardiac electrophysiological studies. Five incomplete datasets were due to technical failures. We concluded that KE successfully guided the selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.
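    The 7-4-1 network topology reported above can be sketched as a forward pass. The weights below are random placeholders (the study trained them on stored ICD episode data, which is not reproduced here), and the sigmoid activations are an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Feed-forward net: 7 inputs -> 4 hidden nodes -> 1 output.
W1 = rng.normal(size=(4, 7)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(1, 4)); b2 = np.zeros(1)   # hidden -> output

def forward(features_7):
    """Map a 7-element feature vector to a probability-like score in (0, 1)."""
    h = sigmoid(W1 @ features_7 + b1)
    return sigmoid(W2 @ h + b2)[0]

score = forward(np.ones(7))   # toy input: all features set to 1
```

In practice the output would be thresholded to declare a rhythm class; training by back-propagation (as in the study) adjusts W1, W2, b1, b2 to maximise sensitivity and specificity.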

  19. Development of an Algorithm for Heart Rate Measurement Using a Mobile Phone Camera

    Directory of Open Access Journals (Sweden)

    D. A. Laure

    2014-01-01

    Full Text Available Nowadays there exist many different ways to measure a person's heart rate. One of them uses a mobile phone's built-in camera. This method is easy to use and does not require any additional skills or special devices for heart rate measurement; it requires only a mobile phone with a built-in camera and a flash. The main idea of the method is to detect changes in finger skin color that occur due to blood pulsation. The measurement process is simple: the user covers the camera lens with a finger, and the application on the mobile phone starts capturing and analyzing frames from the camera. The heart rate can be calculated by analyzing the average red-component values of the frames taken by the camera, which contain images of an area of the skin. In this paper the authors review the existing algorithms for heart rate measurement with the help of a mobile phone camera and propose their own algorithm, which is more efficient than the reviewed algorithms.
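    The pipeline described above (average red-channel value per frame, then extract the pulsation frequency) can be sketched with a spectral peak estimate. The FFT-based frequency pick, the frame rate, and the synthetic "frames" below are illustrative assumptions; the authors' own algorithm may differ.

```python
import numpy as np

def heart_rate_bpm(mean_red_per_frame, fps):
    """Estimate heart rate from the per-frame mean red values by taking the
    dominant frequency in a plausible pulse band (0.7-3.5 Hz = 42-210 bpm)."""
    signal = np.asarray(mean_red_per_frame, dtype=float)
    signal = signal - signal.mean()                  # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# Synthetic recording: a 1.2 Hz pulsation (72 bpm) sampled at 30 fps for 10 s.
fps = 30
t = np.arange(0, 10, 1.0 / fps)
red = 128 + 2.0 * np.sin(2 * np.pi * 1.2 * t)        # mean red value per frame
bpm = heart_rate_bpm(red, fps)
```

With 300 frames at 30 fps the frequency resolution is 0.1 Hz, so the 1.2 Hz pulsation lands exactly on a bin and the estimate comes out at 72 bpm.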

  20. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    CERN Document Server

    Martin-haugh, Stewart; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 and with better efficiency. The performance and timing of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The o...

  1. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    CERN Document Server

    Martin-haugh, Stewart; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a FastTrackFinder algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 and with better efficiency. The timings of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The online deployment and co...

  2. Development of reversible jump Markov Chain Monte Carlo algorithm in the Bayesian mixture modeling for microarray data in Indonesia

    Science.gov (United States)

    Astuti, Ani Budi; Iriawan, Nur; Irhamah, Kuswanto, Heri

    2017-12-01

    Bayesian mixture modeling requires a stage that identifies the most appropriate number of mixture components, so that the resulting mixture model fits the data following a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with the Markov Chain Monte Carlo (MCMC) concept and has been used by researchers to identify the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split-merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be developed according to the observed case. The purpose of this study is to assess the performance of the developed RJMCMC algorithm in identifying the unknown number of mixture components in Bayesian mixture modeling for microarray data in Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in the Bayesian normal mixture model, where the number of mixture components for the Indonesian microarray data is not known with certainty.

  3. Development of a hybrid energy storage sizing algorithm associated with the evaluation of power management in different driving cycles

    International Nuclear Information System (INIS)

    Masoud, Masih Tehrani; Mohammad Reza, Ha'iri Yazdi; Esfahanian, Vahid; Sagha, Hossein

    2012-01-01

    In this paper, a hybrid energy storage sizing algorithm for electric vehicles is developed to achieve a semi-optimal, cost-effective design. Using the developed algorithm, a driving cycle is divided into its micro-trips, and the power and energy demands in each micro-trip are determined. The battery is sized so that it fulfills the power demands, while the ultra-capacitor (UC) energy (or the number of UC modules) is sized so that the UC delivers the maximum energy demands across the micro-trips of a driving cycle. Finally, a design factor, which reflects the capability of the hybrid energy storage control strategy, is used to evaluate the newly designed control strategies. Using the developed algorithm, energy saving loss, driver satisfaction criteria, and battery life criteria are calculated using a feed-forward dynamic modeling software program and are used for comparison among different energy storage candidates. This procedure is applied to the hybrid energy storage sizing of a series hybrid electric city bus for the Manhattan and Tehran driving cycles. Results show that a more aggressive driving cycle (Manhattan) requires a more expensive energy storage system and a more sophisticated energy management strategy.
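    The micro-trip split described above (cutting the driving cycle at zero-speed samples and summarising each segment's power and energy demand) can be sketched as follows. The inertial-only power model and the bus mass below are crude placeholders, not the paper's vehicle model.

```python
import numpy as np

def split_micro_trips(speed):
    """Cut a speed trace (m/s) into micro-trips at zero-speed samples."""
    trips, current = [], []
    for v in speed:
        if v > 0:
            current.append(v)
        elif current:
            trips.append(current)
            current = []
    if current:
        trips.append(current)
    return trips

def demand(trip, dt=1.0, mass=15000.0):
    """Peak power (W) and traction energy (J) of one micro-trip, using an
    inertial-only power model P = m * a * v (a placeholder assumption)."""
    v = np.asarray(trip, dtype=float)
    accel = np.diff(v, prepend=v[0]) / dt
    power = mass * accel * v
    energy = float(np.sum(np.maximum(power, 0.0)) * dt)   # traction only
    return power.max(), energy

cycle = [0, 2, 4, 6, 4, 2, 0, 0, 3, 6, 3, 0]   # m/s, 1 Hz samples
trips = split_micro_trips(cycle)
peak, energy = demand(trips[0])
```

The per-micro-trip peaks would size the battery power while the largest micro-trip energy would size the UC bank, mirroring the sizing logic above.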

  4. Development of Reinforcement Learning Algorithm for Automation of Slide Gate Check Structure in Canals

    Directory of Open Access Journals (Sweden)

    K. Shahverdi

    2016-02-01

    Full Text Available Introduction: Nowadays considering water shortage and weak management in agricultural water sector and for optimal uses of water, irrigation networks performance need to be improveed. Recently, intelligent management of water conveyance and delivery, and better control technologies have been considered for improving the performance of irrigation networks and their operation. For this affair, providing of mathematical model of automatic control system and related structures, which connected with hydrodynamic models, is necessary. The main objective of this research, is development of mathematical model of RL upstream control algorithm inside ICSS hydrodynamic model as a subroutine. Materials and Methods: In the learning systems, a set of state-action rules called classifiers compete to control the system based on the system's receipt from the environment. One could be identified five main elements of the RL: an agent, an environment, a policy, a reward function, and a simulator. The learner (decision-maker is called the agent. The thing it interacts with, comprising everything outside the agent, is called the environment. The agent selects an action based on existing state in the environment. When the agent takes an action and performs on environment, the environment goes new state and reward is assigned based on it. The agent and the environment continually interact to maximize the reward. The policy is a set of state-action pair, which have higher rewards. It defines the agent's behavior and says which action must be taken in which state. The reward function defines the goal in a RL problem. The reward function defines what the good and bad events are for the agent. The higher the reward, the better the action. The simulator provides environment information. In irrigation canals, the agent is the check structures. The action and state are the check structures adjustment and the water depth, respectively. 
The environment comprises the hydraulic
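
A toy sketch of the five RL elements named above (not the paper's model): the agent is the check structure, the state is a discretized water depth, the action is a gate adjustment, and the reward favors depths near the delivery target. Tabular Q-learning stands in for the learning scheme; all bins and parameters are illustrative assumptions.

```python
import random

N_STATES = 5                 # water-depth bins (the state)
ACTIONS = (-1, 0, 1)         # lower gate, hold, raise gate (the action)
TARGET = 2                   # target depth bin for delivery

def step(state, action):
    # Toy environment: a gate adjustment shifts the depth by one bin.
    return max(0, min(N_STATES - 1, state + action))

def reward(state):
    # Higher reward the closer the depth is to the delivery target.
    return -abs(state - TARGET)

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = random.randrange(N_STATES)
        for _ in range(20):
            if random.random() < eps:            # explore
                a = random.choice(ACTIONS)
            else:                                # exploit current estimates
                a = max(ACTIONS, key=lambda a_: q[(s, a_)])
            s2 = step(s, a)
            td_target = reward(s2) + gamma * max(q[(s2, a_)] for a_ in ACTIONS)
            q[(s, a)] += alpha * (td_target - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy raises the gate below the target depth and lowers it above.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
```

The `policy` dictionary is the "set of state-action pairs with higher rewards" described in the abstract.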

  5. Watershed model calibration framework developed using an influence coefficient algorithm and a genetic algorithm and analysis of pollutant discharge characteristics and load reduction in a TMDL planning area.

    Science.gov (United States)

    Cho, Jae Heon; Lee, Jong Ho

    2015-11-01

    Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to calibrate the parameters individually and repeatedly by hand. Automatic calibration has relative merit regarding time efficiency and objectivity, but shortcomings regarding understanding of indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and a genetic algorithm (WMCIG) to automatically calibrate distributed models. The optimization problem of minimizing the sum of squares of the normalized residuals of the observed and predicted values was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in a total maximum daily load (TMDL) planning area in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because of the concentration of rainfall that occurs during the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall of the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin. Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction
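
The calibration loop described above can be sketched with a toy genetic algorithm: candidate parameter sets evolve to minimize the sum of squares of the normalized residuals between observed and predicted values. The two-parameter linear "model" and data below are illustrative stand-ins, not HSPF.

```python
import random

observed = [3.0, 4.5, 6.0, 5.25]   # e.g. observed runoff (synthetic)
inputs = [1.0, 2.0, 3.0, 2.5]      # e.g. rainfall driving the toy model

def predict(params, x):
    a, b = params
    return a * x + b

def objective(params):
    # Sum of squares of the normalized residuals (the paper's criterion).
    return sum(((o - predict(params, x)) / o) ** 2
               for o, x in zip(observed, inputs))

def calibrate(pop_size=30, gens=80, lo=0.0, hi=5.0, seed=2):
    random.seed(seed)
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        elite = pop[: pop_size // 2]                   # selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = [(g1 + g2) / 2 for g1, g2 in zip(p1, p2)]  # crossover
            if random.random() < 0.3:                          # mutation
                i = random.randrange(len(child))
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.2)))
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

best = calibrate()   # approaches (a, b) = (1.5, 1.5) for this synthetic data
```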

  6. Development of Multi-perspective Diagnostics and Analysis Algorithms with Applications to Subsonic and Supersonic Combustors

    Science.gov (United States)

    Wickersham, Andrew Joseph

    There are two critical research needs for the study of hydrocarbon combustion in high speed flows: 1) combustion diagnostics with adequate temporal and spatial resolution, and 2) mathematical techniques that can extract key information from large datasets. The goal of this work is to address these needs, respectively, by the use of high speed and multi-perspective chemiluminescence and advanced mathematical algorithms. To obtain the measurements, this work explored the application of high speed chemiluminescence diagnostics and the use of fiber-based endoscopes (FBEs) for non-intrusive and multi-perspective chemiluminescence imaging up to 20 kHz. Non-intrusive and full-field imaging measurements provide a wealth of information for model validation and design optimization of propulsion systems. However, it is challenging to obtain such measurements due to various implementation difficulties such as optical access, thermal management, and equipment cost. This work therefore explores the application of FBEs for non-intrusive imaging to supersonic propulsion systems. The FBEs used in this work are demonstrated to overcome many of the aforementioned difficulties and provided datasets from multiple angular positions up to 20 kHz in a supersonic combustor. The combustor operated on ethylene fuel at Mach 2 with an inlet stagnation temperature and pressure of approximately 640 degrees Fahrenheit and 70 psia, respectively. The imaging measurements were obtained from eight perspectives simultaneously, providing full-field datasets under such flow conditions for the first time, allowing the possibility of inferring multi-dimensional measurements. Due to the high speed and multi-perspective nature, such new diagnostic capability generates a large volume of data and calls for analysis algorithms that can process the data and extract key physics effectively. To extract the key combustion dynamics from the measurements, three mathematical methods were investigated in this work

  7. Development of double dosimetry algorithm for assessment of effective dose to staff in interventional radiology

    International Nuclear Information System (INIS)

    Kim, Ji Young

    2011-02-01

    Medical staff involved in interventional radiology (IR) procedures are significantly exposed to scatter radiation because they stand in close proximity to the patient. Since modern IR techniques are often very complicated and require extended operation time, doses to IR workers tend to increase considerably. In general, the personal dose equivalent at 10 mm depth, H p (10), read from one dosimeter worn on the trunk of a radiation worker is assumed to be a good estimate of the effective dose and is compared to the dose limits for regulatory compliance. This assumption rests on exposure conditions in which the radiation field is broad and rather homogeneous. However, IR workers usually wear protective clothing such as lead aprons and thyroid shields, which leave parts of the body exposed to much higher doses. To solve this problem, i.e. to adequately estimate the effective doses of IR workers, the use of double dosimeters, one under the apron and one over the apron at an unshielded part of the body, has been recommended. Several algorithms for the interpretation of the two dosimeter readings have been proposed. However, the dosimeter weighting factors applied in these algorithms differ significantly, which raises questions about their reliability. Moreover, there are some changes in the process of calculating the effective dose in the 2007 recommendations of the International Commission on Radiological Protection (ICRP): changes in the radiation weighting factors, tissue weighting factors and the computational reference phantoms. Therefore, this study proposes a new algorithm for interpreting two dosimeter readings to provide a proper estimate of the effective dose for IR workers, incorporating those changes in the definition of effective dose. The effective doses were estimated using Monte Carlo simulations for various practical conditions based on the voxel reference phantom and the new tissue weighting factors. A quasi-effective dose, which is
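
Double-dosimetry algorithms of the kind discussed above estimate the effective dose as a weighted sum of the under-apron and over-apron readings, E ≈ w_u·H_u(10) + w_o·H_o(10). The weighting factors in this sketch follow one published proposal (NCRP Report No. 122); the study itself re-derives such factors for the ICRP 2007 formulation, so these values are illustrative only.

```python
def effective_dose_estimate(h_under, h_over, w_under=0.5, w_over=0.025):
    """Estimated effective dose, in the same unit as the dosimeter readings.

    h_under: Hp(10) from the dosimeter worn under the lead apron.
    h_over:  Hp(10) from the dosimeter worn over the apron (e.g. at the collar).
    """
    return w_under * h_under + w_over * h_over

# Hypothetical monthly readings: 0.2 mSv under the apron, 4.0 mSv at the collar.
e = effective_dose_estimate(0.2, 4.0)   # 0.5*0.2 + 0.025*4.0 = 0.2 mSv
```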

  8. Development of double dosimetry algorithm for assessment of effective dose to staff in interventional radiology

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Young

    2011-02-15

    Medical staff involved in interventional radiology (IR) procedures are significantly exposed to scatter radiation because they stand in close proximity to the patient. Since modern IR techniques are often very complicated and require extended operation time, doses to IR workers tend to increase considerably. In general, the personal dose equivalent at 10 mm depth, H{sub p}(10), read from one dosimeter worn on the trunk of a radiation worker is assumed to be a good estimate of the effective dose and is compared to the dose limits for regulatory compliance. This assumption rests on exposure conditions in which the radiation field is broad and rather homogeneous. However, IR workers usually wear protective clothing such as lead aprons and thyroid shields, which leave parts of the body exposed to much higher doses. To solve this problem, i.e. to adequately estimate the effective doses of IR workers, the use of double dosimeters, one under the apron and one over the apron at an unshielded part of the body, has been recommended. Several algorithms for the interpretation of the two dosimeter readings have been proposed. However, the dosimeter weighting factors applied in these algorithms differ significantly, which raises questions about their reliability. Moreover, there are some changes in the process of calculating the effective dose in the 2007 recommendations of the International Commission on Radiological Protection (ICRP): changes in the radiation weighting factors, tissue weighting factors and the computational reference phantoms. Therefore, this study proposes a new algorithm for interpreting two dosimeter readings to provide a proper estimate of the effective dose for IR workers, incorporating those changes in the definition of effective dose. The effective doses were estimated using Monte Carlo simulations for various practical conditions based on the voxel reference phantom and the new tissue weighting factors. A quasi-effective dose, which is

  9. Algorithms to identify colonic ischemia, complications of constipation and irritable bowel syndrome in medical claims data: development and validation.

    Science.gov (United States)

    Sands, Bruce E; Duh, Mei-Sheng; Cali, Clorinda; Ajene, Anuli; Bohn, Rhonda L; Miller, David; Cole, J Alexander; Cook, Suzanne F; Walker, Alexander M

    2006-01-01

    A challenge in the use of insurance claims databases for epidemiologic research is accurate identification and verification of medical conditions. This report describes the development and validation of claims-based algorithms to identify colonic ischemia, hospitalized complications of constipation, and irritable bowel syndrome (IBS). From the research claims databases of a large healthcare company, we selected at random 120 potential cases of IBS and 59 potential cases each of colonic ischemia and hospitalized complications of constipation. We sought the written medical records and were able to abstract 107, 57, and 51 records, respectively. We established a 'true' case status for each subject by applying standard clinical criteria to the available chart data. Comparing the insurance claims histories to the assigned case status, we iteratively developed, tested, and refined claims-based algorithms that would capture the diagnoses obtained from the medical records. We set goals of high specificity for colonic ischemia and hospitalized complications of constipation, and high sensitivity for IBS. The resulting algorithms substantially improved on the accuracy achievable from a naïve acceptance of the diagnostic codes attached to insurance claims. The specificities for colonic ischemia and serious complications of constipation were 87.2 and 92.7%, respectively, and the sensitivity for IBS was 98.9%. U.S. commercial insurance claims data appear to be usable for the study of colonic ischemia, IBS, and serious complications of constipation. (c) 2005 John Wiley & Sons, Ltd.
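
The validation arithmetic described above can be made concrete: claims-algorithm flags are compared with the chart-review ("true") case status, and sensitivity and specificity follow from the confusion counts. The ten records below are hypothetical, not the study's data.

```python
def sens_spec(flagged, truth):
    """Sensitivity and specificity of algorithm flags vs. chart-review status."""
    tp = sum(1 for f, t in zip(flagged, truth) if f and t)
    tn = sum(1 for f, t in zip(flagged, truth) if not f and not t)
    fp = sum(1 for f, t in zip(flagged, truth) if f and not t)
    fn = sum(1 for f, t in zip(flagged, truth) if not f and t)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 10 abstracted records (1 = case / flagged).
flagged = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
truth   = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
sensitivity, specificity = sens_spec(flagged, truth)   # 0.75 and 5/6
```

Iterating the algorithm definition to push one of these two numbers up, as the authors did per condition, is then a matter of re-running this comparison.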

  10. Research on magnetorheological damper suspension with permanent magnet and magnetic valve based on developed FOA-optimal control algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Ping; Gao, Hong [Anhui Polytechnic University, Wuhu (China); Niu, Limin [Anhui University of Technology, Maanshan (China)

    2017-07-15

    Due to the fail-safe problem, it has been difficult for the existing Magnetorheological damper (MD) to be widely applied in automotive suspensions. Therefore, permanent magnets and magnetic valves were introduced into existing MDs so that the fail-safe problem could be solved by the magnets and the damping force could be adjusted easily by the magnetic valve. Thus, a new Magnetorheological damper with permanent magnet and magnetic valve (MDPMMV) was developed and the MDPMMV suspension was studied. First of all, the mechanical structure of the existing magnetorheological damper applied in automobile suspensions was redesigned to comprise a permanent magnet and a magnetic valve. In addition, a prediction model of damping force was built based on electromagnetics theory and the Bingham model. Experimental research was conducted on the newly designed damper, and the goodness of fit between experimental results and model simulations was high. On this basis, a quarter suspension model was built. Then, a Fruit fly optimization algorithm (FOA)-optimal control algorithm suitable for automobile suspensions was designed by extending the standard FOA. Finally, simulation experiments and bench tests with pulse-road and B-road input surfaces were carried out, and the results indicated that the working performance of the MDPMMV suspension based on the FOA-optimal control algorithm was good.

  11. Developing a NIR multispectral imaging for prediction and visualization of peanut protein content using variable selection algorithms

    Science.gov (United States)

    Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei

    2018-01-01

    The feasibility of developing a multispectral imaging method using important wavelengths from hyperspectral images, selected by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods, for modeling and predicting protein content in peanut kernels was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, root mean square error of prediction (RMSEP) of 0.108, and residual predictive deviation (RPD) of 2.32. Based on the best model and image processing algorithms, distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible and promising for determination of the protein content in peanut kernels.
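
The three figures of merit quoted above (R2P, RMSEP, RPD) have standard definitions that can be sketched directly; the reference and predicted protein contents below are hypothetical.

```python
import math

def prediction_metrics(y_true, y_pred):
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot             # coefficient of determination (R2P)
    rmsep = math.sqrt(ss_res / n)          # root mean square error of prediction
    sd = math.sqrt(ss_tot / (n - 1))       # sample standard deviation
    rpd = sd / rmsep                       # residual predictive deviation
    return r2, rmsep, rpd

# Hypothetical protein contents in % (reference vs. model prediction).
r2p, rmsep, rpd = prediction_metrics([24.0, 25.0, 26.0, 27.0, 28.0],
                                     [24.2, 24.9, 26.1, 26.8, 28.2])
```

An RPD above about 2, as reported in the abstract, is commonly taken to indicate a model usable for screening-level prediction.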

  12. Development of computational algorithms for quantification of pulmonary structures; Desenvolvimento de algoritmos computacionais para quantificacao de estruturas pulmonares

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A., E-mail: marceladeoliveira@ig.com.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Hospital das Clinicas. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2012-12-15

    High-resolution computed tomography (HRCT) has become the diagnostic imaging exam most commonly used for evaluating the sequelae of paracoccidioidomycosis (PCM). Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists of selecting a region of interest (ROI) and, through the use of masks, density filters and morphological operators, obtaining the ratio of the injured area to the area of healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching agreement of 80% for emphysema and 58% for fibrosis. (author)
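
The density-filtering step can be sketched as counting ROI pixels whose density falls in a pathological band relative to the lung area. The HU thresholds below are illustrative (emphysema is often scored below about -950 HU on HRCT); the actual algorithm also applies masks and morphological operators, which this sketch omits.

```python
def injured_fraction(roi_hu, lo=-1024, hi=-950):
    """Fraction of lung pixels in the ROI whose HU value lies in [lo, hi]."""
    lung = [v for row in roi_hu for v in row if v < -200]  # crude lung mask
    injured = [v for v in lung if lo <= v <= hi]
    return len(injured) / len(lung) if lung else 0.0

# Hypothetical 3x3 ROI of HU values; soft-tissue values are excluded by the mask.
roi = [[-980, -960, -700],
       [-940, -955, -650],
       [-300, -100, 40]]
frac = injured_fraction(roi)   # 3 of 7 lung pixels fall in the emphysema band
```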

  13. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    International Nuclear Information System (INIS)

    Woodruff, S.B.

    1992-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, poor load balancing will degrade efficiency on either vector or data-parallel architectures when the data are organized according to spatial location. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. This document discusses why developers should consider alternative algorithms, such as a neural net representation, that do not exhibit load-balancing problems

  14. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    International Nuclear Information System (INIS)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-01-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  15. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-06-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  16. Algorithm Development and Validation for Satellite-Derived Distributions of DOC and CDOM in the US Middle Atlantic Bight

    Science.gov (United States)

    Mannino, Antonio; Russ, Mary E.; Hooker, Stanford B.

    2007-01-01

    In coastal ocean waters, distributions of dissolved organic carbon (DOC) and chromophoric dissolved organic matter (CDOM) vary seasonally and interannually due to multiple source inputs and removal processes. We conducted several oceanographic cruises within the continental margin of the U.S. Middle Atlantic Bight (MAB) to collect field measurements in order to develop algorithms to retrieve CDOM and DOC from NASA's MODIS-Aqua and SeaWiFS satellite sensors. In order to develop empirical algorithms for CDOM and DOC, we correlated the CDOM absorption coefficient (a(sub cdom)) with in situ radiometry (remote sensing reflectance, Rrs, band ratios) and then correlated DOC to Rrs band ratios through the CDOM to DOC relationships. Our validation analyses demonstrate successful retrieval of DOC and CDOM from coastal ocean waters using the MODIS-Aqua and SeaWiFS satellite sensors, with mean absolute percent differences from field measurements of cdom)(355)1,6 % for a(sub cdom)(443), and 12% for the CDOM spectral slope. To our knowledge, the algorithms presented here represent the first validated algorithms for satellite retrieval of a(sub cdom), DOC, and CDOM spectral slope in the coastal ocean. The satellite-derived DOC and a(sub cdom) products demonstrate the seasonal net ecosystem production of DOC and photooxidation of CDOM from spring to fall. With accurate satellite retrievals of CDOM and DOC, we will be able to apply satellite observations to investigate interannual and decadal-scale variability in surface CDOM and DOC within continental margins and monitor impacts of climate change and anthropogenic activities on coastal ecosystems.
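
An empirical band-ratio algorithm of the kind described above is typically a power law fitted in log-log space, e.g. a_cdom = A * (Rrs(λ1)/Rrs(λ2))**B. The band pair, coefficients and data in this sketch are illustrative assumptions, not the paper's fitted values.

```python
import math

def fit_power_law(ratios, acdom):
    """Least-squares fit of a_cdom = A * ratio**B in log-log space."""
    xs = [math.log(r) for r in ratios]
    ys = [math.log(a) for a in acdom]
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    b = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
         / sum((x - xm) ** 2 for x in xs))
    a = math.exp(ym - b * xm)
    return a, b

# Synthetic Rrs band ratios and matching a_cdom drawn from 0.2 * ratio**-1.5.
ratios = [0.5, 0.8, 1.0, 1.5, 2.0]
acdom = [0.2 * r ** -1.5 for r in ratios]
A, B = fit_power_law(ratios, acdom)   # recovers A = 0.2, B = -1.5
```

DOC would then be retrieved indirectly by chaining this fit with the field-derived CDOM-to-DOC relationship, as the abstract describes.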

  17. JOURNAL CLUB: Plagiarism in Manuscripts Submitted to the AJR: Development of an Optimal Screening Algorithm and Management Pathways.

    Science.gov (United States)

    Taylor, Donna B

    2017-04-01

    The objective of this study was to investigate the incidence of plagiarism in a sample of manuscripts submitted to the AJR using CrossCheck, develop an algorithm to identify significant plagiarism, and formulate management pathways. A sample of 110 of 1610 (6.8%) manuscripts submitted to AJR in 2014 in the categories of Original Research or Review were analyzed using CrossCheck and manual assessment. The overall similarity index (OSI), highest similarity score from a single source, whether duplication was from single or multiple origins, journal section, and presence or absence of referencing the source were recorded. The criteria outlined by the International Committee of Medical Journal Editors were the reference standard for identifying manuscripts containing plagiarism. Statistical analysis was used to develop a screening algorithm to maximize sensitivity and specificity for the detection of plagiarism. Criteria for defining the severity of plagiarism and management pathways based on the severity of the plagiarism were determined. Twelve manuscripts (10.9%) contained plagiarism. Nine had an OSI excluding quotations and references of less than 20%. In seven, the highest similarity score from a single source was less than 10%. The highest similarity score from a single source was the work of the same author or authors in nine. Common sections for duplication were the Materials and Methods, Discussion, and abstract. Referencing the original source was lacking in 11. Plagiarism was undetected at submission in five of these 12 articles; two had been accepted for publication. The most effective screening algorithm was to average the OSI including quotations and references and the highest similarity score from a single source and to submit manuscripts with an average value of more than 12% for further review. The current methods for detecting plagiarism are suboptimal. A new screening algorithm is proposed.
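
The screening rule proposed above reduces to a one-line function: average the OSI (including quotations and references) with the highest single-source similarity score, and refer the manuscript for further review when that average exceeds 12%.

```python
def needs_review(osi_incl_quotes, highest_single_source, threshold=12.0):
    """Flag a manuscript for editorial review per the proposed screening rule.

    Both scores are CrossCheck percentages; the 12% threshold is the value
    reported in the abstract as maximizing sensitivity and specificity.
    """
    return (osi_incl_quotes + highest_single_source) / 2 > threshold

needs_review(30.0, 8.0)    # average 19% -> flag for review (True)
needs_review(14.0, 6.0)    # average 10% -> pass (False)
```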

  18. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.

  19. Development of a B-flavor tagging algorithm for the Belle II experiment

    Energy Technology Data Exchange (ETDEWEB)

    Abudinen, Fernando; Li Gioi, Luigi [Max-Planck-Institut fuer Physik Muenchen (Germany); Gelb, Moritz [Karlsruher Institut fuer Technologie (Germany)

    2015-07-01

    The high-luminosity super B factory SuperKEKB will allow a precision measurement of the time-dependent CP violation parameters in the B-meson system. The analysis requires the reconstruction of one of the two exclusively produced neutral B mesons into a CP eigenstate and the determination of the flavor of the other one. Because of the large number of possible decay modes, full reconstruction of the tagging B is not feasible. Consequently, inclusive methods that utilize flavor-specific signatures of B decays are employed. The algorithm is based on multivariate methods and follows the approach adopted by BaBar. It proceeds in three steps: the track level, where the most probable target track is selected for each decay category; the event level, where the flavor-specific signatures of the selected targets are analyzed; and the combiner, where the results of all categories are combined into the final output. The framework has been completed, reaching a tagging efficiency of ca. 25%. A comprehensive optimization is being launched in order to increase the efficiency. This includes studies on the categories, the method-specific parameters and the kinematic variables. An overview of the algorithm is presented together with the results at the current status.
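
The combiner step can be sketched as follows: each category yields a flavor guess q_i (+1 or -1) and a quality/dilution r_i in [0, 1], and the per-category results are merged as independent likelihoods. This is a standard combination scheme for flavor taggers, used here as an illustration rather than Belle II's exact formula.

```python
def combine(tags):
    """Combine per-category (q, r) pairs into a signed overall q*r value.

    tags: iterable of (q, r) with q in {+1, -1} and r in [0, 1].
    Returns a value in [-1, 1]; its sign is the combined flavor guess and
    its magnitude the combined tag quality.
    """
    p_b, p_bbar = 1.0, 1.0
    for q, r in tags:
        p_b *= (1 + q * r) / 2       # likelihood the tag-side B is a B0
        p_bbar *= (1 - q * r) / 2    # likelihood it is a B0bar
    return (p_b - p_bbar) / (p_b + p_bbar)

combine([(1, 0.5), (1, 0.5)])    # two agreeing categories reinforce each other
combine([(1, 0.5), (-1, 0.5)])   # two equal, opposite categories cancel
```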

  20. Development of Future Rule Curves for Multipurpose Reservoir Operation Using Conditional Genetic and Tabu Search Algorithms

    Directory of Open Access Journals (Sweden)

    Anongrit Kangrang

    2018-01-01

    Full Text Available Optimal rule curves are necessary guidelines in reservoir operation and are used to assess the performance of a reservoir in satisfying water supply, irrigation, industrial, hydropower, and environmental conservation requirements. This study applied the conditional genetic algorithm (CGA) and the conditional tabu search algorithm (CTSA) technique, connected to a reservoir simulation model, to search for optimal reservoir rule curves. The Ubolrat Reservoir, located in the northeast region of Thailand, was the illustrative application, including historic monthly inflow, future inflow generated by the SWAT hydrological model using 50 years of future climate data from the PRECIS regional climate model under the IPCC SRES B2 emission scenario, water demand, hydrologic data, and physical reservoir data. The future and synthetic inflow data of the reservoir were used to simulate the reservoir system for evaluating the water situation. Situations of water shortage and excess water were expressed in terms of frequency, magnitude, and duration. The results show that the optimal rule curves from the CGA and CTSA connected with the simulation model can mitigate drought and flood situations better than the existing rule curves. The optimal future rule curves were more suitable for future situations than the other rule curves.
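
The simulation step that scores a candidate pair of rule curves can be sketched as a monthly mass balance that logs how often demand is not met (shortage) and how often water must be spilled (excess). All quantities and the balance itself are an illustrative simplification; the search layer (CGA or CTSA) would then tune `lower` and `upper` to minimize these frequencies.

```python
def simulate(inflow, demand, lower, upper, storage, capacity):
    """Count shortage and spill months for given monthly rule curves.

    lower/upper: 12-entry rule curves (storage volumes); inflow/demand:
    per-month series; storage/capacity: initial and maximum storage.
    """
    shortages = spills = 0
    for m, q in enumerate(inflow):
        storage += q
        release = demand[m]
        # Hold water back when storage would fall below the lower rule curve.
        if storage - release < lower[m % 12]:
            release = max(0.0, storage - lower[m % 12])
            shortages += 1
        storage -= release
        # Spill when storage exceeds the upper rule curve (or capacity).
        limit = min(upper[m % 12], capacity)
        if storage > limit:
            spills += 1
            storage = limit
    return shortages, spills
```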

  1. Development of a novel diagnostic algorithm to predict NASH in HCV-positive patients.

    Science.gov (United States)

    Gallotta, Andrea; Paneghetti, Laura; Mrázová, Viera; Bednárová, Adriana; Kružlicová, Dáša; Frecer, Vladimir; Miertus, Stanislav; Biasiolo, Alessandra; Martini, Andrea; Pontisso, Patrizia; Fassina, Giorgio

    2018-05-01

    Non-alcoholic steatohepatitis (NASH) is a severe disease characterised by liver inflammation and progressive hepatic fibrosis, which may progress to cirrhosis and hepatocellular carcinoma. Clinical evidence suggests that in hepatitis C virus (HCV) patients, steatosis and NASH are associated with faster fibrosis progression and hepatocellular carcinoma. A safe and reliable non-invasive diagnostic method to detect NASH at its early stages is still needed to prevent progression of the disease. We prospectively enrolled 91 HCV-positive patients with histologically proven chronic liver disease; 77 patients were included in our study, and of these, 10 had NASH. For each patient, various clinical and serological variables were collected. Different algorithms combining squamous cell carcinoma antigen-immunoglobulin-M (SCCA-IgM) levels with other common clinical data were created to provide the probability of having NASH. Our analysis revealed a statistically significant correlation between the histological presence of NASH and SCCA-IgM, insulin, homeostasis model assessment, haemoglobin, high-density lipoprotein and ferritin levels, and smoking. Compared to the use of a single marker, algorithms that combined four, six or seven variables identified NASH with higher accuracy. The best diagnostic performance was obtained with the logistic regression combination, which included all seven variables correlated with NASH. The combination of SCCA-IgM with common clinical data shows promising diagnostic performance for the detection of NASH in HCV patients.
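
A logistic-regression combination of the kind described above maps the seven correlated variables to a NASH probability through the logistic function. The coefficients and patient values below are placeholders, not the fitted values from the study.

```python
import math

def nash_probability(values, coeffs, intercept):
    """Logistic-regression probability: sigmoid of the linear combination."""
    z = intercept + sum(c * v for c, v in zip(coeffs, values))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical standardized inputs, in the order: SCCA-IgM, insulin, HOMA,
# haemoglobin, HDL, ferritin, smoking (placeholder coefficients likewise).
p = nash_probability([1.2, 0.8, 1.1, -0.3, -0.9, 0.7, 1.0],
                     [0.9, 0.6, 0.7, 0.2, -0.5, 0.4, 0.3],
                     intercept=-2.0)
```

A clinical cut-off on `p` (chosen from the ROC curve during validation) would then decide whether the patient is flagged as likely NASH.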

  2. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    Science.gov (United States)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and growth in calculation time caused by the increasing size of continuous optimization problems remain the major issues to be solved in applying the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting closer to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatch in electric power supply scheduling) are also described as a practical industrial application.
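
The fix-and-release idea can be illustrated on a toy box-constrained QP: during the descent, a variable sitting on a bound with the gradient pushing outward is treated as fixed (estimated active), and is released as soon as the gradient points back into the box. Projected gradient descent stands in here for the paper's interior-point framework; the problem data are illustrative.

```python
def solve_box_qp(Q, c, lo, hi, iters=500, step=0.1):
    """Minimize 0.5*x'Qx + c'x subject to lo <= x <= hi (toy sketch)."""
    n = len(c)
    x = [(l + h) / 2 for l, h in zip(lo, hi)]
    for _ in range(iters):
        g = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        for i in range(n):
            # Estimated-active test: on a bound with an outward-pointing step.
            fixed_lo = x[i] <= lo[i] + 1e-9 and g[i] > 0
            fixed_hi = x[i] >= hi[i] - 1e-9 and g[i] < 0
            if fixed_lo or fixed_hi:
                continue   # variable stays fixed on its bound this iteration
            x[i] = min(hi[i], max(lo[i], x[i] - step * g[i]))
    return x

# Unconstrained minimum of this QP is (1, 4); the box clips it to (1, 2).
x = solve_box_qp([[2.0, 0.0], [0.0, 2.0]], [-2.0, -8.0],
                 [0.0, 0.0], [1.0, 2.0])
```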

  3. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  4. SeaWiFS Technical Report Series. Volume 42; Satellite Primary Productivity Data and Algorithm Development: A Science Plan for Mission to Planet Earth

    Science.gov (United States)

    Falkowski, Paul G.; Behrenfeld, Michael J.; Esaias, Wayne E.; Balch, William; Campbell, Janet W.; Iverson, Richard L.; Kiefer, Dale A.; Morel, Andre; Yoder, James A.; Hooker, Stanford B. (Editor); hide

    1998-01-01

    Two issues regarding primary productivity, as it pertains to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Program and the National Aeronautics and Space Administration (NASA) Mission to Planet Earth (MTPE), are presented in this volume. Chapter 1 describes the development of a science plan for deriving primary production for the world ocean using satellite measurements, by the Ocean Primary Productivity Working Group (OPPWG). Chapter 2 presents discussions by the same group of algorithm classification, algorithm parameterization and data availability, algorithm testing and validation, and the benefits of a consensus primary productivity algorithm.

  5. SU-F-T-352: Development of a Knowledge Based Automatic Lung IMRT Planning Algorithm with Non-Coplanar Beams

    International Nuclear Information System (INIS)

    Zhu, W; Wu, Q; Yuan, L

    2016-01-01

    Purpose: To improve the robustness of a knowledge-based automatic lung IMRT planning method and to further validate the reliability of this algorithm by utilizing it for the planning of clinical cases with non-coplanar beams. Methods: A lung IMRT planning method which automatically determines both plan optimization objectives and beam configurations with non-coplanar beams has been reported previously. A beam efficiency index map is constructed to guide beam angle selection in this algorithm. This index takes into account both the dose contributions from individual beams and the combined effect of multiple beams, which is represented by a beam separation score. We studied the effect of this beam separation score on plan quality and determined the optimal weight for this score. 14 clinical plans were re-planned with the knowledge-based algorithm. Significant dosimetric metrics for the PTV and OARs in the automatic plans were compared with those in the clinical plans by the two-sample t-test. In addition, a composite dosimetric quality index was defined to obtain the relationship between the plan quality and the beam separation score. Results: On average, we observed more than 15% reduction in the conformity index and homogeneity index for the PTV and in V40 and V60 for the heart, with 8% and 3% increases in V5 and V20 for the lungs, respectively. The variation curve of the composite index as a function of the angle spread score shows that 0.6 is the best value for the weight of the beam separation score. Conclusion: The optimal value for the beam angle spread score in automatic lung IMRT planning is obtained. With this value, the model can produce statistically the “best” achievable plans. This method can potentially improve the quality and planning efficiency of IMRT plans with non-coplanar angles.
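    A minimal sketch of how a per-beam term and a separation term might be combined under a weight of 0.6, as reported above. The scoring functions and per-angle efficiency values are hypothetical, not the paper's efficiency-index map.

```python
W_SEP = 0.6  # weight of the beam separation term (value reported above)

def separation_score(angle, chosen, span=360.0):
    """Reward candidate angles far from already-chosen beams (0..1)."""
    if not chosen:
        return 1.0
    gaps = [min(abs(angle - a) % span, span - abs(angle - a) % span)
            for a in chosen]
    return min(gaps) / (span / 2)

def pick_beams(efficiency, n_beams):
    """Greedily pick angles maximising efficiency + W_SEP * separation."""
    chosen = []
    for _ in range(n_beams):
        best = max((a for a in efficiency if a not in chosen),
                   key=lambda a: efficiency[a] + W_SEP * separation_score(a, chosen))
        chosen.append(best)
    return chosen

# Hypothetical per-angle dose-efficiency values (higher is better).
eff = {0: 0.9, 40: 0.85, 80: 0.3, 180: 0.6, 220: 0.55}
beams = pick_beams(eff, 3)
```

The separation term steers the second pick toward the opposite side of the patient even when a nearby angle has slightly higher raw efficiency.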

  6. Developing an eye-tracking algorithm as a potential tool for early diagnosis of autism spectrum disorder in children.

    Directory of Open Access Journals (Sweden)

    Natalia I Vargas-Cuentas

    Full Text Available Autism spectrum disorder (ASD) currently affects nearly 1 in 160 children worldwide. In over two-thirds of evaluations, no validated diagnostics are used, and gold standard diagnostic tools are used in less than 5% of evaluations. Currently, the diagnosis of ASD requires lengthy and expensive tests, in addition to clinical confirmation. Therefore, fast, cheap, portable, and easy-to-administer screening instruments for ASD are required. Several studies have shown that children with ASD have a lower preference for social scenes compared with children without ASD. Based on this, eye-tracking and measurement of gaze preference for social scenes has been used as a screening tool for ASD. Currently available eye-tracking software requires intensive calibration, training, or holding of the head to prevent interference with gaze recognition, limiting its use in children with ASD. In this study, we designed a simple eye-tracking algorithm that does not require calibration or head holding, as a platform for future validation of a cost-effective ASD potential screening instrument. This system operates on a portable and inexpensive tablet to measure the gaze preference of children for social compared to abstract scenes. A child watches a one-minute stimulus video composed of a social scene projected on the left side and an abstract scene projected on the right side of the tablet's screen. We designed five stimulus videos by changing the social/abstract scenes. Every child observed all five videos in random order. We developed an eye-tracking algorithm that calculates the child's gaze preference for the social and abstract scenes, estimated as the percentage of the accumulated time that the child observes the left or right side of the screen, respectively. Twenty-three children without a prior history of ASD and 8 children with a clinical diagnosis of ASD were evaluated. The recorded video of the child's eye movement was analyzed both manually by an observer
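    The gaze-preference estimate reduces to counting which half of the screen the gaze falls on in each video frame. A minimal sketch, assuming a per-frame left/right/undetected classification as input:

```python
def gaze_preference(frames):
    """Percentage of tracked frames spent on each half of the screen.

    `frames` is a per-frame classification of the estimated gaze:
    'L' (social scene, left), 'R' (abstract scene, right), or None
    (gaze not detected, e.g. during a blink).
    """
    tracked = [f for f in frames if f in ("L", "R")]
    if not tracked:
        return 0.0, 0.0
    left = 100.0 * tracked.count("L") / len(tracked)
    return left, 100.0 - left

# one hypothetical minute of frames: 70 left, 25 right, 5 undetected
frames = ["L"] * 70 + ["R"] * 25 + [None] * 5
social, abstract = gaze_preference(frames)
# social ≈ 73.7%, abstract ≈ 26.3%
```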

  7. Hybridization properties of long nucleic acid probes for detection of variable target sequences, and development of a hybridization prediction algorithm

    Science.gov (United States)

    Öhrmalm, Christina; Jobs, Magnus; Eriksson, Ronnie; Golbob, Sultan; Elfaitouri, Amal; Benachenhou, Farid; Strømme, Maria; Blomberg, Jonas

    2010-01-01

    One of the main problems in nucleic acid-based techniques for detection of infectious agents, such as influenza viruses, is that of nucleic acid sequence variation. DNA probes, 70-nt long, some including the nucleotide analog deoxyribose-inosine (dInosine), were analyzed for hybridization tolerance to different amounts and distributions of mismatching bases, e.g. synonymous mutations, in target DNA. Microsphere-linked 70-mer probes were hybridized in 3M TMAC buffer to biotinylated single-stranded (ss) DNA for subsequent analysis in a Luminex® system. Mismatches that interrupted contiguous matching stretches of 6 nt or longer had a strong impact on hybridization. Contiguous matching stretches are more important than the same number of matching nucleotides separated by mismatches into several regions. dInosine, but not 5-nitroindole, substitutions at mismatching positions stabilized hybridization remarkably well, comparably to N (4-fold) wobbles in the same positions. In contrast to shorter probes, 70-nt probes with judiciously placed dInosine substitutions and/or wobble positions were remarkably mismatch tolerant, with preserved specificity. An algorithm, NucZip, was constructed to model the nucleation and zipping phases of hybridization, integrating both local and distant binding contributions. It predicted hybridization more exactly than previous algorithms, and has the potential to guide the design of variation-tolerant yet specific probes. PMID:20864443
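    The finding that contiguous matching stretches matter more than the total match count suggests a simple way to summarise a probe-target alignment. A sketch assuming pre-aligned, equal-length sequences, with 'I' (dInosine) treated as matching any base:

```python
def matching_stretches(probe, target):
    """Lengths of contiguous matching stretches between aligned sequences.

    An inosine ('I') in the probe is treated as matching any target base,
    reflecting its mismatch-tolerant role described above.
    """
    stretches, run = [], 0
    for p, t in zip(probe, target):
        if p == t or p == "I":
            run += 1
        else:
            if run:
                stretches.append(run)
            run = 0
    if run:
        stretches.append(run)
    return stretches

# same number of matching bases, distributed very differently:
one_block = matching_stretches("ACGTACGTACGT", "ACGTACGTACGA")   # [11]
fragmented = matching_stretches("ACGTACGTACGT", "ACGAACGAACGA")  # [3, 3, 3]
```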

  8. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    Science.gov (United States)

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    Threats of detection by infrared (IR) signals are greater than for other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize its detection status. Recently, research on actively reducing IR signals has been conducted, aiming to control the IR signal by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method uses the repulsive particle swarm optimization statistical optimization algorithm to estimate the IR stealth surface temperature, which results in a synchronization between the IR signals from the object and the surrounding background by setting the inverse distance weighted contrast radiant intensity (CRI) equal to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse distance weighted active IR stealth technique proposed in this study is an effective method for reducing the contrast radiant intensity between the object and the background by up to 32% as compared to the previous method using the CRI determined as the simple signal difference between the object and the background.
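    As a rough illustration of the optimization step, the sketch below drives a toy contrast-radiant-intensity function to zero with a plain particle swarm optimizer; the repulsive PSO variant used in the paper, the radiometric model, and all constants are simplified or invented here.

```python
import random

def cri(T, T_bg=295.0):
    """Toy contrast radiant intensity: zero when the plate temperature
    matches the background-equivalent temperature (T^4 law, arbitrary scale)."""
    return abs(T**4 - T_bg**4) * 1e-9

def pso_min(f, lo, hi, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain 1-D particle swarm minimisation of f over [lo, hi]."""
    xs = [random.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]                       # per-particle best positions
    gbest = min(xs, key=f)              # global best position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

T_stealth = pso_min(cri, lo=270.0, hi=330.0)
# T_stealth converges near the 295 K background-equivalent temperature
```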

  9. Detection of surface algal blooms using the newly developed algorithm surface algal bloom index (SABI)

    Science.gov (United States)

    Alawadi, Fahad

    2010-10-01

    Quantifying ocean colour properties has evolved over the past two decades from being able to merely detect their biological activity to the ability to estimate chlorophyll concentration using optical satellite sensors like MODIS and MERIS. The production of chlorophyll spatial distribution maps is a good indicator of plankton biomass (primary production) and is useful for the tracing of oceanographic currents, jets and blooms, including harmful algal blooms (HABs). Depending on the type of HABs involved and the environmental conditions, if their concentration rises above a critical threshold, it can impact the flora and fauna of the aquatic habitat through the introduction of the so-called "red tide" phenomenon. The estimation of chlorophyll concentration is derived from quantifying the spectral relationship between the blue and the green bands reflected from the water column. This spectral relationship is employed in the standard ocean colour chlorophyll-a (Chlor-a) product, but is incapable of detecting certain macro-algal species that float near to or at the water surface in the form of dense filaments or mats. The ability to accurately identify algal formations that sometimes appear as oil spill look-alikes in satellite imagery contributes towards the reduction of false-positive incidents arising from oil spill monitoring operations. Such algal formations that occur in relatively high concentrations may experience, as in land vegetation, what is known as the "red-edge" effect. This phenomenon occurs at the highest reflectance slope between the maximum absorption in the red due to the surrounding ocean water and the maximum reflectance in the infra-red due to the photosynthetic pigments present in the surface algae. A new algorithm, termed the surface algal bloom index (SABI), has been proposed to delineate the spatial distributions of floating micro-algal species, for example cyanobacteria, or exposed inter-tidal vegetation such as seagrass. 
This algorithm was
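    As a pixel-level sketch, an index of this kind can be computed as a simple band ratio. The form used here, (NIR − RED)/(BLUE + GREEN), is the commonly cited expression for SABI, and the reflectance values are invented:

```python
def sabi(nir, red, blue, green):
    """Surface Algal Bloom Index for one pixel.

    Band reflectances are unitless values in [0, 1]. Positive values pick
    up the red-edge signature of surface vegetation (high NIR, low red);
    open water comes out negative.
    """
    return (nir - red) / (blue + green)

water = sabi(nir=0.02, red=0.05, blue=0.10, green=0.08)   # negative
bloom = sabi(nir=0.35, red=0.05, blue=0.06, green=0.09)   # strongly positive
```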

  10. Design and implementation of universal mathematical library supporting algorithm development for FPGA based systems in high energy physics experiments

    International Nuclear Information System (INIS)

    Jalmuzna, W.

    2006-02-01

    The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences. It could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a project for a fast, low-latency digital controller dedicated to the LLRF system of the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems at Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule in the experiment. The device can also be used as a simulator of the cavity and as a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components such as an IQ demodulator, a division block, and a library for complex and floating point operations, and it can substantially shorten the implementation time of many complicated algorithms. The library has already been tested using real accelerator signals, and the performance achieved is satisfactory. (Orig.)

  11. Design and implementation of universal mathematical library supporting algorithm development for FPGA based systems in high energy physics experiments

    Energy Technology Data Exchange (ETDEWEB)

    Jalmuzna, W.

    2006-02-15

    The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences. It could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a project for a fast, low-latency digital controller dedicated to the LLRF system of the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems at Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule in the experiment. The device can also be used as a simulator of the cavity and as a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components such as an IQ demodulator, a division block, and a library for complex and floating point operations, and it can substantially shorten the implementation time of many complicated algorithms. The library has already been tested using real accelerator signals, and the performance achieved is satisfactory. (Orig.)

  12. Analysis and development of the automated emergency algorithm to control primary to secondary LOCA for SUNPP safety upgrading

    International Nuclear Information System (INIS)

    Kim, V.; Kuznetsov, V.; Balakan, G.; Gromov, G.; Krushynsky, A.; Sholomitsky, S.; Lola, I.

    2007-01-01

    The paper presents the results of a study conducted to support the planned modernization of the South Ukraine nuclear power plant. The objective of the analysis was to develop an automated emergency control algorithm for the primary-to-secondary LOCA accident for SUNPP WWER-1000 safety upgrading. According to the analyses performed in the framework of the safety assessment report, this accident is the most complex to control and makes the largest contribution to the core damage frequency value. This is because diagnosis of the initiating event is difficult, emergency control is complicated for personnel, the time available for decision making and performing actions is limited by the coolant inventory available for make-up, and the probability of the steam dump valves on the affected steam generator failing to close after opening is high; as a consequence, containment bypass, irretrievable loss of coolant and release of radioactive materials into the environment are possible. Unit design modifications are directed at expanding the safety systems' capabilities to overcome this accident and to facilitate the personnel's emergency control actions. Safety system modification according to the developed algorithm will simplify accident control by personnel, enable control of the ECCS discharge limiting pressure below the opening pressure of the affected steam generator's steam dump valve, and decrease the probability of containment bypass sequences. The analysis of the primary-to-secondary LOCA thermal-hydraulics has been conducted with RELAP5/Mod 3.2, and involved development of a dedicated analytical model, calculations of various plant response accident scenarios, analyses of plant personnel intervention using a full-scale simulator, and development and justification of the emergency control algorithm aimed at minimizing the negative consequences of the primary-to-secondary LOCA (Authors)

  13. Javascript Library for Developing Interactive Micro-Level Animations for Teaching and Learning Algorithms on One-Dimensional Arrays

    Science.gov (United States)

    Végh, Ladislav

    2016-01-01

    The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…
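    The micro-level animation idea amounts to recording the array state after every elementary operation. A sketch (in Python rather than JavaScript, purely for illustration) for the mirroring algorithm:

```python
def mirror_steps(arr):
    """Mirror (reverse) an array in place, recording a snapshot after each
    swap -- the micro-level states a step-by-step animation would show."""
    a = list(arr)
    snapshots = [list(a)]
    for i in range(len(a) // 2):
        j = len(a) - 1 - i
        a[i], a[j] = a[j], a[i]
        snapshots.append(list(a))
    return snapshots

steps = mirror_steps([1, 2, 3, 4, 5])
# [1, 2, 3, 4, 5] -> [5, 2, 3, 4, 1] -> [5, 4, 3, 2, 1]
```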

  14. Development and evaluation of a scheduling algorithm for parallel hardware tests at CERN

    CERN Document Server

    Galetzka, Michael

    This thesis aims at describing the problem of scheduling, evaluating different scheduling algorithms and comparing them with each other as well as with the current prototype solution. The implementation of the final solution will be delineated, as will the design considerations that led to it. The CERN Large Hadron Collider (LHC) has to deal with unprecedented stored energy, both in its particle beams and its superconducting magnet circuits. This energy could result in major equipment damage and downtime if it is not properly extracted from the machine. Before commissioning the machine with the particle beam, several thousands of tests have to be executed, analyzed and tracked to assess the proper functioning of the equipment and protection systems. These tests access the accelerator's equipment in order to verify the correct behavior of all systems, such as magnets, power converters and interlock controllers. A test could, for example, ramp the magnet to a certain energy level and then provoke an emergency...

  15. Development of a Genetic Algorithm to Automate Clustering of a Dependency Structure Matrix

    Science.gov (United States)

    Rogers, James L.; Korte, John J.; Bilardo, Vincent J.

    2006-01-01

    Much technology assessment and organization design data exists in Microsoft Excel spreadsheets. Tools are needed to put this data into a form that can be used by design managers to make design decisions. One need is to cluster data that is highly coupled. Tools such as the Dependency Structure Matrix (DSM) and a Genetic Algorithm (GA) can be of great benefit. However, no tool currently combines the DSM and a GA to solve the clustering problem. This paper describes a new software tool that interfaces a GA written as an Excel macro with a DSM in spreadsheet format. The results of several test cases are included to demonstrate how well this new tool works.
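    A minimal sketch of the GA-plus-DSM idea: cluster assignments form the chromosome, and fitness penalizes dependencies that cross cluster boundaries plus a cluster-size term (without the size term, putting everything in one cluster would be trivially optimal). The DSM and GA settings are toy values, not the Excel-macro implementation described above.

```python
import random

# Toy 6x6 dependency structure matrix (1 = elements interact).
DSM = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
N, K = len(DSM), 2

def cost(assign):
    """Dependencies crossing cluster boundaries, plus a size penalty that
    rules out the degenerate one-big-cluster solution."""
    crossing = sum(DSM[i][j]
                   for i in range(N) for j in range(i + 1, N)
                   if assign[i] != assign[j])
    size_penalty = 0.1 * sum(assign.count(k) ** 2 for k in range(K))
    return crossing + size_penalty

def ga_cluster(pop_size=30, gens=100, mut=0.1):
    pop = [[random.randrange(K) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=cost)]                  # elitism: keep the best
        while len(nxt) < pop_size:
            a, b = random.sample(pop, 2)            # binary tournament
            parent = min(a, b, key=cost)
            nxt.append([g if random.random() > mut else random.randrange(K)
                        for g in parent])
        pop = nxt
    return min(pop, key=cost)

best = ga_cluster()
# elements 0-2 and 3-5 end up in two separate clusters
```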

  16. Development of algorithms for real time track selection in the TOTEM experiment

    CERN Document Server

    Minafra, Nicola; Radicioni, E

    The TOTEM experiment at the LHC has been designed to measure the total proton-proton cross-section with a luminosity-independent method and to study elastic and diffractive scattering at energies up to 14 TeV in the centre of mass. Elastic interactions are detected by Roman Pot stations, placed at 147 m and 220 m along the two exiting beams. At present, data acquired by these detectors are stored on disk without any data reduction by the data acquisition chain. In this thesis several tracking and selection algorithms, suitable for real-time implementation in the firmware of the back-end electronics, have been proposed and tested using real data.

  17. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm.

    Science.gov (United States)

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-04-01

    Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.
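    The c-index reported above can be computed directly as the fraction of concordant case/non-case pairs. A sketch with invented risk scores:

```python
def c_index(scores, outcomes):
    """Concordance index: probability that a randomly chosen case with the
    outcome gets a higher risk score than one without it (ties count 0.5)."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = len(pos) * len(neg)
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / pairs

# hypothetical risk scores from a predictAL-style model:
scores   = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
outcomes = [1,   1,   0,   1,   0,   0]
c = c_index(scores, outcomes)   # 8 of 9 pairs concordant
```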

  18. Data-driven Development of ROTEM and TEG Algorithms for the Management of Trauma Hemorrhage: A Prospective Observational Multicenter Study.

    Science.gov (United States)

    Baksaas-Aasen, Kjersti; Van Dieren, Susan; Balvers, Kirsten; Juffermans, Nicole P; Næss, Pål A; Rourke, Claire; Eaglestone, Simon; Ostrowski, Sisse R; Stensballe, Jakob; Stanworth, Simon; Maegele, Marc; Goslings, Carel; Johansson, Pär I; Brohi, Karim; Gaarder, Christine

    2018-05-23

    Developing pragmatic data-driven algorithms for the management of trauma-induced coagulopathy (TIC) during trauma hemorrhage for viscoelastic hemostatic assays (VHAs). Admission data from conventional coagulation tests (CCT), rotational thrombelastometry (ROTEM) and thrombelastography (TEG) were collected prospectively at 6 European trauma centers during 2008 to 2013. To identify significant VHA parameters capable of detecting TIC (defined as INR > 1.2), hypofibrinogenemia (< 2.0 g/L), and thrombocytopenia (< 100 x 10⁹/L), univariate regression models were constructed. Area under the curve (AUC) was calculated, and threshold values for TEG and ROTEM parameters with 70% sensitivity were included in the algorithms. A total of 2287 adult trauma patients (ROTEM: 2019 and TEG: 968) were enrolled. FIBTEM clot amplitude at 5 minutes (CA5) had the largest AUC, and 10 mm detected hypofibrinogenemia with 70% sensitivity. The corresponding value for functional fibrinogen (FF) TEG maximum amplitude (MA) was 19 mm. Thrombocytopenia was similarly detected using the calculated threshold EXTEM-FIBTEM CA5 30 mm. The corresponding rTEG-FF TEG MA was 46 mm. TIC was identified by EXTEM CA5 41 mm, rTEG MA 64 mm (80% sensitivity). For hyperfibrinolysis, we examined the relationship between viscoelastic lysis parameters and clinical outcomes, with resulting threshold values of 85% for EXTEM Li30 and 10% for rTEG Ly30. Based on these analyses, we constructed algorithms for ROTEM, TEG, and CCTs to be used in addition to ratio-driven transfusion and tranexamic acid. We describe a systematic approach to define threshold parameters for ROTEM and TEG. These parameters were incorporated into algorithms to support data-driven adjustments of resuscitation with therapeutics, to optimize damage control resuscitation practice in trauma.
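    The threshold-setting step above — pick the cut-off giving 70% sensitivity — can be sketched as a percentile of the positive cases' values. The data and the ≤-direction (a low clot amplitude flags hypofibrinogenaemia) are illustrative:

```python
import math

def threshold_at_sensitivity(values, has_condition, target=0.70):
    """Smallest cut-off (flagging value <= cut-off) that still detects at
    least `target` of the true positives; keeping the cut-off small
    preserves specificity. Here low values indicate the condition."""
    positives = sorted(v for v, y in zip(values, has_condition) if y)
    # ceil with a small guard against floating-point error in target * n
    k = max(1, math.ceil(target * len(positives) - 1e-9))
    return positives[k - 1]

# hypothetical FIBTEM CA5 values (mm) for hypofibrinogenaemic patients
ca5 = [4, 6, 7, 8, 9, 10, 11, 12, 13, 20]
cut = threshold_at_sensitivity(ca5, [1] * len(ca5))
# cut == 11: flagging CA5 <= 11 mm detects 7 of the 10 cases (70%)
```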

  19. Development and Validation of the Pediatric Medical Complexity Algorithm (PMCA) Version 3.0.

    Science.gov (United States)

    Simon, Tamara D; Haaland, Wren; Hawley, Katherine; Lambka, Karen; Mangione-Smith, Rita

    2018-02-26

    To modify the Pediatric Medical Complexity Algorithm (PMCA) to include both International Classification of Diseases, Ninth and Tenth Revisions, Clinical Modification (ICD-9/10-CM) codes for classifying children with chronic disease (CD) by level of medical complexity and to assess the sensitivity and specificity of the new PMCA version 3.0 for correctly identifying level of medical complexity. To create version 3.0, PMCA version 2.0 was modified to include ICD-10-CM codes. We applied PMCA version 3.0 to Seattle Children's Hospital data for children with ≥1 emergency department (ED), day surgery, and/or inpatient encounter from January 1, 2016, to June 30, 2017. Starting with the encounter date, up to 3 years of retrospective discharge data were used to classify children as having complex chronic disease (C-CD), noncomplex chronic disease (NC-CD), and no CD. We then selected a random sample of 300 children (100 per CD group). Blinded medical record review was conducted to ascertain the levels of medical complexity for these 300 children. The sensitivity and specificity of PMCA version 3.0 were assessed. PMCA version 3.0 identified children with C-CD with 86% sensitivity and 86% specificity, children with NC-CD with 65% sensitivity and 84% specificity, and children without CD with 77% sensitivity and 93% specificity. PMCA version 3.0 is an updated publicly available algorithm that identifies children with C-CD, who have accessed tertiary hospital emergency department, day surgery, or inpatient care, with very good sensitivity and specificity when applied to hospital discharge data and with performance comparable to earlier versions of PMCA. Copyright © 2018 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  20. Development of a sensorimotor algorithm able to deal with unforeseen pushes and its implementation based on VHDL

    OpenAIRE

    Lezcano Giménez, Pablo Gabriel

    2015-01-01

    Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of my thesis, which concludes my Bachelor Degree in the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It encompasses the overall work I did in the Neurorobotics Research Laboratory of the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. This thesis is focused on the field of robotics, sp...

  1. DEVELOPMENT OF A PEDESTRIAN INDOOR NAVIGATION SYSTEM BASED ON MULTI-SENSOR FUSION AND FUZZY LOGIC ESTIMATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Y. C. Lai

    2015-05-01

    Full Text Available This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation, meaning that no outside signal is demanded. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its adopted sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical systems (MEMS). There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. Consequently, the estimated walking amount and strength per step are fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation, which runs on a self-developed APP. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the APP of the proposed navigation system
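    The dead-reckoning update that combines the estimated step length with the integrated heading is just a cumulative 2-D sum. A minimal sketch with invented step data:

```python
import math

def dead_reckon(x, y, steps):
    """Update a 2-D position from (step_length, heading) pairs, as in a
    pedestrian dead-reckoning scheme. Headings are in radians measured
    from the x-axis; step lengths in metres."""
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# four 0.7 m steps heading east, then four heading north
path = [(0.7, 0.0)] * 4 + [(0.7, math.pi / 2)] * 4
pos = dead_reckon(0.0, 0.0, path)
# pos ≈ (2.8, 2.8)
```

The particle filter and floor map mentioned above would then correct the drift that this open-loop summation accumulates.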

  2. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    Science.gov (United States)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation, meaning that no outside signal is demanded. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its adopted sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical systems (MEMS). There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. Consequently, the estimated walking amount and strength per step are fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation, which runs on a self-developed APP. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the APP of the proposed navigation system to extend its

  3. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 4: Preliminary nonscanner models and count conversion algorithms

    Science.gov (United States)

    Halyo, Nesim; Choi, Sang H.

    1987-01-01

    Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants necessary for the conversion algorithms. The analysis develops mathematical models for the conversion of irradiance at the sensor field-of-view (FOV) limiter into data counts, derives from this model two algorithms for the conversion of data counts to irradiance at the sensor FOV aperture, and develops measurement models which account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen since it provides sufficient accuracy using simpler computations.
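    A gain/offset count conversion of the type named above is simply the inversion of a linear sensor response. The sketch below uses made-up coefficients, not actual ERBE calibration constants:

    ```python
    def counts_to_irradiance(counts, gain, offset):
        """Gain/offset count conversion: invert the linear sensor response
        counts = gain * irradiance + offset to recover irradiance."""
        return (counts - offset) / gain

    # Illustrative coefficients only (not real ERBE calibration values).
    irr = counts_to_irradiance(counts=1523.0, gain=3.0, offset=23.0)
    ```

    The dynamic sensor model's role is to keep `gain` and `offset` current as the instrument state evolves; a Kalman filter variant would instead estimate the irradiance state recursively from the count stream.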

  4. Development of a radiation track structure clustering algorithm for the prediction of DNA DSB yields and radiation induced cell death in Eukaryotic cells.

    Science.gov (United States)

    Douglass, Michael; Bezak, Eva; Penfold, Scott

    2015-04-21

    The preliminary framework of a combined radiobiological model is developed and calibrated in the current work. The model simulates the production of individual cells forming a tumour, the spatial distribution of individual ionization events (using Geant4-DNA) and the stochastic biochemical repair of DNA double strand breaks (DSBs), leading to the prediction of survival or death of individual cells. In the current work, we expand upon a previously developed tumour generation and irradiation model to include a stochastic ionization damage clustering and DNA lesion repair model. The Geant4 code enabled the position of each ionization event in the cells to be simulated and recorded for analysis. An algorithm was developed to cluster the ionization events in each cell into simple and complex double strand breaks. The two-lesion kinetic (TLK) model was then adapted to predict DSB repair kinetics and the resultant cell survival curve. The parameters in the cell survival model were then calibrated using experimental cell survival data of V79 cells after low-energy proton irradiation. A monolayer of V79 cells was simulated using the tumour generation code developed previously. The cells were then irradiated by protons with mean energies of 0.76 MeV and 1.9 MeV using a customized version of Geant4. By replicating the experimental parameters of a low-energy proton irradiation experiment and calibrating the model with two sets of data, the model is now capable of predicting V79 cell survival after low-energy proton irradiation. The cell survival probability is calculated for each cell in the geometric tumour model developed in the current work. This model uses fundamental measurable microscopic quantities such as genome length rather than macroscopic radiobiological quantities such as alpha/beta ratios. This means that the model can theoretically be used under a wide range of conditions with a single set of input parameters once calibrated for a given cell line.

  5. Pseudocode Interpreter (Pseudocode Integrated Development Environment with Lexical Analyzer and Syntax Analyzer using Recursive Descent Parsing Algorithm)

    Directory of Open Access Journals (Sweden)

    Christian Lester D. Gimeno

    2017-11-01

    Full Text Available – This research study focused on the development of software that helps students design, write, validate and run their pseudocode in a semi-Integrated Development Environment (IDE) instead of manually writing it on a piece of paper. Specifically, the study aimed to develop a lexical analyzer or lexer, a syntax analyzer or parser using a recursive descent parsing algorithm, and an interpreter. The lexical analyzer reads the pseudocode source as a sequence of symbols or characters called lexemes. The lexemes are then analyzed by the lexer, which matches patterns for valid tokens and passes them to the syntax analyzer or parser. The syntax analyzer or parser takes those valid tokens and builds meaningful commands using the recursive descent parsing algorithm in the form of an abstract syntax tree. The generation of the abstract syntax tree is based on the grammar rules created by the researcher, expressed in Extended Backus-Naur Form. The interpreter takes the generated abstract syntax tree and starts the evaluation or interpretation to produce the pseudocode output. The software was evaluated using white-box testing by several ICT professionals and black-box testing by several computer science students, based on the International Organization for Standardization (ISO) 9126 software quality standards. The overall results of the evaluation, both for white-box and black-box testing, were described as “Excellent” in terms of functionality, reliability, usability, efficiency, maintainability and portability.
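    The lexer → recursive-descent parser → abstract syntax tree → interpreter pipeline described above can be sketched for a tiny arithmetic grammar. This toy grammar is illustrative only and is unrelated to the pseudocode language of the study:

    ```python
    import re

    TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

    def tokenize(src):
        """Lexer: split the source into number and symbol tokens (lexemes)."""
        tokens = []
        for num, sym in TOKEN.findall(src):
            tokens.append(("NUM", int(num)) if num else ("SYM", sym))
        tokens.append(("EOF", None))
        return tokens

    class Parser:
        """Recursive descent parser for the grammar (in EBNF):
             expr   = term   {("+" | "-") term}
             term   = factor {("*" | "/") factor}
             factor = NUM | "(" expr ")"
        """
        def __init__(self, tokens):
            self.tokens = tokens
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos]

        def eat(self, kind, value=None):
            k, v = self.tokens[self.pos]
            if k != kind or (value is not None and v != value):
                raise SyntaxError(f"unexpected token {v!r}")
            self.pos += 1
            return v

        def expr(self):
            node = self.term()
            while self.peek() in (("SYM", "+"), ("SYM", "-")):
                op = self.eat("SYM")
                node = (op, node, self.term())   # abstract syntax tree node
            return node

        def term(self):
            node = self.factor()
            while self.peek() in (("SYM", "*"), ("SYM", "/")):
                op = self.eat("SYM")
                node = (op, node, self.factor())
            return node

        def factor(self):
            if self.peek()[0] == "NUM":
                return self.eat("NUM")
            self.eat("SYM", "(")
            node = self.expr()
            self.eat("SYM", ")")
            return node

    def evaluate(node):
        """Interpreter: walk the abstract syntax tree and compute a value
        (integer division stands in for "/" in this toy)."""
        if isinstance(node, int):
            return node
        op, lhs, rhs = node
        l, r = evaluate(lhs), evaluate(rhs)
        return {"+": l + r, "-": l - r, "*": l * r, "/": l // r}[op]

    ast = Parser(tokenize("2 + 3 * (4 - 1)")).expr()
    result = evaluate(ast)
    ```

    Note how each grammar rule becomes one method, the defining trait of recursive descent: the call stack of the parser mirrors the nesting of the grammar.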

  6. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.

  7. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    Science.gov (United States)

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find optimal decoupling schemes. In this paper, firstly, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given to verify the model's reasonability and effectiveness.
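    The artificial bee colony search adopted above can be sketched in miniature. This is a deliberately minimal 1-D version that omits the fitness-proportional onlooker phase of the full ABC algorithm; all parameter values are illustrative:

    ```python
    import random

    def abc_minimize(f, bounds, n_bees=20, limit=10, iters=200, seed=1):
        """Minimal artificial-bee-colony-style sketch for 1-D minimization.

        Employed bees perturb food sources (candidate solutions) toward or
        away from a random partner source and keep the candidate only if it
        improves (greedy selection). Sources not improved within `limit`
        trials are abandoned and re-initialized by scout bees.
        """
        rng = random.Random(seed)
        lo, hi = bounds
        xs = [rng.uniform(lo, hi) for _ in range(n_bees)]
        trials = [0] * n_bees
        best = min(xs, key=f)
        for _ in range(iters):
            for i in range(n_bees):
                k = rng.randrange(n_bees)          # random partner source
                phi = rng.uniform(-1.0, 1.0)
                cand = min(hi, max(lo, xs[i] + phi * (xs[i] - xs[k])))
                if f(cand) < f(xs[i]):             # greedy selection
                    xs[i], trials[i] = cand, 0
                else:
                    trials[i] += 1
                if trials[i] > limit:              # scout phase: abandon source
                    xs[i], trials[i] = rng.uniform(lo, hi), 0
            best = min(best, min(xs, key=f), key=f)
        return best

    best = abc_minimize(lambda x: (x - 2.0) ** 2, (-10.0, 10.0))
    ```

    In the paper's setting the "food source" would be a candidate decoupling scheme rather than a real number, with the same employed/scout structure.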

  8. A Global algorithm for linear radiosity

    OpenAIRE

    Sbert Cassasayas, Mateu; Pueyo Sánchez, Xavier

    1993-01-01

    A linear algorithm for radiosity is presented, linear both in time and storage. The new algorithm is based on previous work by the authors and on the well known algorithms for progressive radiosity and Monte Carlo particle transport.

  9. Additional operations in algebra of structural numbers for control algorithm development

    Directory of Open Access Journals (Sweden)

    Morhun A.V.

    2016-12-01

    Full Text Available Structural numbers and the algebra of structural numbers, thanks to their simple representation, flexibility, and concise algebraic operations, are a powerful tool for a wide range of applications. In autonomous power supply systems and systems with distributed generation (Micro Grid), the mathematical apparatus of structural numbers can be effectively used to calculate the parameters of the operating modes of electric energy consumption. The purpose of the article is to present additional operations for the algebra of structural numbers. The standard algebra is extended with additional operations and modifications in order to expand its scope of use, namely to construct flexible, adaptive control system algorithms. This is achieved through the ability to consider each individual component of the system with its parameters while providing easy management of both the entire system and each individual component. Thus, structural numbers and the extended algebra are a promising line of research that warrants further study.

  10. Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru; Hoke, Andy; Asano, Marc; Ueda, Reid; Nepal, Shaili

    2017-06-15

    As advanced grid-support functions (AGFs) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS and subsequently implemented on the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model, with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the chosen feeders could be analyzed.

  11. Developing Novel Machine Learning Algorithms to Improve Sedentary Assessment for Youth Health Enhancement.

    Science.gov (United States)

    Golla, Gowtham Kumar; Carlson, Jordan A; Huan, Jun; Kerr, Jacqueline; Mitchell, Tarrah; Borner, Kelsey

    2016-10-01

    Sedentary behavior of youth is an important determinant of health. However, better measures are needed to improve understanding of this relationship and the mechanisms at play, as well as to evaluate health promotion interventions. Wearable accelerometers are considered the standard for assessing physical activity in research, but they do not perform well for assessing posture (i.e., sitting vs. standing), a critical component of sedentary behavior. The machine learning algorithms that we propose for assessing sedentary behavior will allow us to re-examine existing accelerometer data to better understand the association between sedentary time and health in various populations. We collected two datasets, a laboratory-controlled dataset and a free-living dataset. We trained machine learning classifiers separately on each dataset and compared performance across datasets. The classifiers predict five postures: sit, stand, sit-stand, stand-sit, and stand/walk. We compared a manually constructed hidden Markov model (HMM) with an automated HMM from existing software. The manually constructed HMM achieved a higher F1-Macro score on both datasets.

  12. Development of cyberblog-based intelligent tutorial system to improve students learning ability algorithm

    Science.gov (United States)

    Wahyudin; Riza, L. S.; Putro, B. L.

    2018-05-01

    E-learning, as a learning activity conducted online, is favoured by students. The use of computer media in learning provides a benefit not offered by other learning media: the ability of computers to interact individually with students. A weakness of many learning media, however, is the assumption that all students have uniform ability, when in reality this is not the case. The concept of an Intelligent Tutorial System (ITS) combined with a cyberblog application can overcome this neglect of diversity. A cyberblog-based Intelligent Tutorial System is a web-based interactive application program implementing artificial intelligence that can be used as a learning and evaluation medium in the learning process. The use of an ITS-based cyberblog in learning is an alternative learning medium that is engaging and helps students measure their understanding of the material. This research is concerned with improving students' logical thinking ability, especially in algorithm subjects.

  13. Development of ice floe tracker algorithm to measure Lagrangian statistics in the eastern Greenland coast

    Science.gov (United States)

    Lopez, Rosalinda; Wilhelmus, Monica M.; Schodlok, Michael; Klein, Patrice

    2017-11-01

    Sea ice export through Fram Strait is a key component of the Arctic climate system. The East Greenland Current (EGC) carries most of the sea ice southwards until it melts. Lagrangian methods using sea ice buoys have been used to map ice features in polar regions, but their spatial and temporal coverage is limited. Satellite data provide a better tool to map sea ice flow and its variability. Here, an automated sea ice floe detection algorithm uses ice floes as tracers for surface ocean currents. We process Moderate Resolution Imaging Spectroradiometer satellite images to track ice floes (length scale 5-10 km) in the north-eastern Greenland Sea region. Our MATLAB-based routines effectively filter out clouds and adaptively modify the images to segment and identify ice floes. Ice floes were tracked based on persistent surface features common in successive images throughout 2016. Their daily centroid locations were extracted, and the resulting trajectories are used to describe surface circulation and its variability using differential kinematic parameters. We will discuss the application of this method to a longer time series and larger spatial coverage. This enables us to derive the inter-annual variability of mesoscale features along the eastern coast of Greenland. Supported by a UCR Mechanical Engineering Departmental Fellowship.

  14. The development of small-scale mechanization means positioning algorithm using radio frequency identification technology in industrial plants

    Science.gov (United States)

    Astafiev, A.; Orlov, A.; Privezencev, D.

    2018-01-01

    The article is devoted to the development of technology and software for constructing positioning and control systems for small-scale mechanization in industrial plants based on radio frequency identification methods, which will be the basis for creating highly efficient intelligent systems for controlling product movement in industrial enterprises. The main standards applied in the field of product movement control automation and radio frequency identification are considered. The article reviews modern publications and automation systems for controlling product movement developed by domestic and foreign manufacturers. It describes the developed algorithm for positioning small-scale mechanization means in an industrial enterprise. Experimental studies in laboratory and production conditions have been conducted and are described in the article.

  15. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
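    The iterative computation the thesis describes — take a list of pages and links, then repeatedly redistribute rank until the values settle — can be sketched as follows (the fixed iteration count stands in for the convergence test that the truncated abstract refers to):

    ```python
    def pagerank(links, d=0.85, iters=50):
        """Iterative PageRank: every page starts at 1/N and repeatedly
        distributes its rank along outgoing links, damped by factor d."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iters):
            new = {p: (1.0 - d) / n for p in pages}
            for p, outs in links.items():
                if not outs:                      # dangling page: spread evenly
                    for q in pages:
                        new[q] += d * rank[p] / n
                else:
                    for q in outs:
                        new[q] += d * rank[p] / len(outs)
            rank = new
        return rank

    # A links to B and C; B and C each link back to A.
    rank = pagerank({"A": ["B", "C"], "B": ["A"], "C": ["A"]})
    ```

    Because every page's rank is shared out in full each round, the total rank stays at 1, and the page with the most incoming links ("A" here) ends up highest.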

  16. Algorithms and interface for ocean acoustic ray-tracing (Developed in MATLAB)

    Digital Repository Service at National Institute of Oceanography (India)

    Murty, T.V.R.; Rao, M.M.M.; Prakash, S.S.; Chandramouli, P.; Murthy, K.S.R.

    of refraction with respect to the appropriate coordinate. Eqns. (13) are the generalized form of Snell’s law. Let us consider the sound propagation problem in two dimensions in the x-z plane (see Fig. A1). Then along the ray path dx/ds = cos θ and dz/ds = sin θ... starting of the profile. ‘x’ indicates the range of the profile with reference to the previous profile. The file should contain two sets of profiles: (i) Range Independent – in this case the profile should be repeated twice; (ii) Range Dependent...
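    The standard ray-path relations dx/ds = cos θ, dz/ds = sin θ together with Snell's law (cos θ / c constant along a ray) suggest a simple stepping scheme. The sketch below is a generic illustration, not the MATLAB implementation of the record, and its turning-point handling in particular is simplified:

    ```python
    import math

    def trace_ray(theta0, z0, c_of_z, ds=1.0, n_steps=1000):
        """Step a 2-D ray through a depth-dependent sound-speed profile c(z).

        Along the path dx/ds = cos(theta), dz/ds = sin(theta); the grazing
        angle is updated from Snell's law, which keeps cos(theta)/c(z)
        constant along the ray.
        """
        theta, x, z = theta0, 0.0, z0
        snell = math.cos(theta0) / c_of_z(z0)     # ray invariant
        path = [(x, z)]
        for _ in range(n_steps):
            x += ds * math.cos(theta)
            z += ds * math.sin(theta)
            cos_t = snell * c_of_z(z)
            if abs(cos_t) >= 1.0:                 # turning point: reverse (crude)
                theta = -theta
            else:
                theta = math.copysign(math.acos(cos_t), theta)
            path.append((x, z))
        return path

    # Isovelocity water: the ray is a straight line at the launch angle.
    path = trace_ray(math.radians(10.0), z0=100.0,
                     c_of_z=lambda z: 1500.0, n_steps=10)
    ```

    With a depth-varying profile the same loop bends the ray toward the sound-speed minimum, which is the behaviour the range-independent/range-dependent profile files feed into.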

  17. Analysis and Classification of Stride Patterns Associated with Children Development Using Gait Signal Dynamics Parameters and Ensemble Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Meihong Wu

    2016-01-01

    Full Text Available Measuring stride variability and dynamics in children is useful for the quantitative study of gait maturation and neuromotor development in childhood and adolescence. In this paper, we computed the sample entropy (SampEn) and average stride interval (ASI) parameters to quantify the stride series of 50 gender-matched children participants in three age groups. We also normalized the SampEn and ASI values by leg length and body mass for each participant, respectively. Results show that the original and normalized SampEn values consistently decrease (Mann-Whitney U test, p<0.01) in children of 3–14 years old, which indicates that stride irregularity is significantly ameliorated with body growth. The original and normalized ASI values also change significantly when comparing any two groups of young (aged 3–5 years), middle (aged 6–8 years), and elder (aged 10–14 years) children. Such results suggest that healthy children may better modulate their gait cadence rhythm with the development of their musculoskeletal and neurological systems. In addition, the AdaBoost.M2 and Bagging algorithms were used to effectively distinguish the children’s gait patterns. These ensemble learning algorithms both provided excellent gait classification results in terms of overall accuracy (≥90%), recall (≥0.8), and precision (≥0.8077).
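    Sample entropy as used above is -ln(A/B), where B and A count matching template pairs of length m and m+1. A direct O(n²) sketch follows; note that the tolerance r is applied here as an absolute Chebyshev distance, whereas in practice it is usually taken as a fraction of the series standard deviation:

    ```python
    import math

    def sample_entropy(series, m=2, r=0.2):
        """Sample entropy SampEn(m, r) of a 1-D series: -ln(A/B), where B and A
        count pairs of length-m and length-(m+1) templates that match within
        tolerance r (Chebyshev distance), excluding self-matches."""
        def count_matches(length):
            templates = [series[i:i + length]
                         for i in range(len(series) - length + 1)]
            hits = 0
            for i in range(len(templates)):
                for j in range(i + 1, len(templates)):
                    if max(abs(a - b)
                           for a, b in zip(templates[i], templates[j])) <= r:
                        hits += 1
            return hits

        b, a = count_matches(m), count_matches(m + 1)
        return -math.log(a / b) if a > 0 and b > 0 else float("inf")

    # Perfectly alternating stride intervals are highly regular: SampEn near 0.
    se = sample_entropy([0.5, 0.6] * 20, m=2, r=0.05)
    ```

    Lower values mean more regular (more predictable) stride series, which is why SampEn decreasing with age indicates maturing gait control.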

  18. Development of a mathematical model of the heating phase of rubber mixture and development of the synthesis of the heating control algorithm using the Pontryagin maximum principle

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2017-01-01

    Full Text Available The article is devoted to the development of an algorithm for controlling the heating phase of a rubber compound for CJSC “Voronezh tyre plant”. The algorithm is designed for implementation on a Siemens S-300 controller to control the RS-270 mixer. To construct the algorithm, a systematic analysis of the heating process as a control object has been performed, and a mathematical model of the heating phase has been developed on the basis of the heat balance equation, which describes the heating of the heat-transfer agent in the heat exchanger and the further heating of the mixture in the mixer. The dynamic temperature characteristics of the heat exchanger and the rubber mixer have been obtained. Taking into account the complexity and nonlinearity of the control object, a rubber mixer, as well as the available methods and extensive experience in operating this machine in an industrial environment, the algorithm has been implemented using the Pontryagin maximum principle. The optimization problem is reduced to determining the optimal control (heating steam supply) and the optimal trajectory of the object's output coordinates (the temperature of the mixture) which ensure the least steam consumption while heating the rubber compound in a limited time. To do this, the mathematical model of the heating phase has been written in matrix form. Coefficient matrices for the state, control and disturbance vectors have been created, the Hamiltonian has been obtained, and the switching times have been found for constructing the optimal control and trajectory of the object. Analysis of the model experiments and of the practical results obtained while programming the controller showed a decrease in heating steam consumption of 24.4% during the heating phase of the rubber compound.

  19. SU-G-BRA-02: Development of a Learning Based Block Matching Algorithm for Ultrasound Tracking in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, A; Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA
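    The localized block matching step described in the Methods rests on normalized cross-correlation. A 1-D sketch of the NCC score and the best-match search follows; real implementations operate on 2-D image blocks, and all names and values here are illustrative:

    ```python
    import math

    def ncc(a, b):
        """Normalized cross-correlation between two equal-sized blocks
        (sequences of pixel intensities), in [-1, 1]."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = math.sqrt(sum((x - ma) ** 2 for x in a))
        db = math.sqrt(sum((y - mb) ** 2 for y in b))
        return num / (da * db) if da > 0 and db > 0 else 0.0

    def best_match(template, frame, block):
        """Slide a template over a frame and return the offset whose block
        maximizes NCC - the core step of localized block matching."""
        scores = [ncc(template, frame[i:i + block])
                  for i in range(len(frame) - block + 1)]
        return max(range(len(scores)), key=scores.__getitem__)

    template = [10.0, 50.0, 90.0, 50.0]   # intensity profile of the feature
    frame = [12.0, 11.0, 13.0, 22.0, 62.0, 101.0, 58.0, 14.0]
    offset = best_match(template, frame, block=4)
    ```

    In the tracking algorithm this search is run once per boundary point of the reference-frame segmentation, restricted to a small neighborhood, and the winning offsets form the updated feature boundary.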

  20. SU-G-BRA-02: Development of a Learning Based Block Matching Algorithm for Ultrasound Tracking in Radiotherapy

    International Nuclear Information System (INIS)

    Shepard, A; Bednarz, B

    2016-01-01

    Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA

  1. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  2. Applying Advances in GPM Radiometer Intercalibration and Algorithm Development to a Long-Term TRMM/GPM Global Precipitation Dataset

    Science.gov (United States)

    Berg, W. K.

    2016-12-01

    The Global Precipitation Mission (GPM) Core Observatory, which was launched in February of 2014, provides a number of advances for satellite monitoring of precipitation, including a dual-frequency radar, high-frequency channels on the GPM Microwave Imager (GMI), and coverage over middle and high latitudes. The GPM concept, however, is about producing unified precipitation retrievals from a constellation of microwave radiometers to provide approximately 3-hourly global sampling. This involves intercalibration of the input brightness temperatures from the constellation radiometers, development of an a priori precipitation database using observations from the state-of-the-art GPM radiometer and radars, and accounting for sensor differences in the retrieval algorithm in a physically consistent way. Efforts by the GPM inter-satellite calibration working group, or XCAL team, and the radiometer algorithm team to create unified precipitation retrievals from the GPM radiometer constellation were fully implemented in the current version 4 GPM precipitation products. These include precipitation estimates from a total of seven conical-scanning and six cross-track scanning radiometers as well as high spatial and temporal resolution global level 3 gridded products. Work is now underway to extend this unified constellation-based approach to the combined TRMM/GPM data record starting in late 1997. The goal is to create a long-term global precipitation dataset employing these state-of-the-art calibration and retrieval algorithm approaches. This new long-term global precipitation dataset will incorporate the physics provided by the combined GPM GMI and DPR sensors into the a priori database, extend prior TRMM constellation observations to high latitudes, and expand the available TRMM precipitation data to the full constellation of available conical and cross-track scanning radiometers. This combined TRMM/GPM precipitation data record will thus provide a high-quality high

  3. Algorithmic developments and qualification of the ERANOS nuclear code and data system for the characterization of fast neutron reactors

    International Nuclear Information System (INIS)

    Rimpault, G.

    2003-09-01

    In this report, the author discusses the algorithmic and methodological developments in the field of nuclear reactor physics, and more particularly the developments of the ERALIB1/ERANOS nuclear code and data system for the calculation of core critical mass and power of sodium-cooled fast neutron reactors (Phenix and Super Phenix), and of the CAPRA 4/94 core. After a brief recall of nuclear data and methods used to determine critical masses and powers, the author discusses the interpretation of start-up experiments performed on Super-Phenix. The methodology used to characterize the uncertainties of these parameters is then applied to the calculation of the Super-Phenix critical mass and power distribution. He presents the approach chosen to define the validity domain of the ERANOS form

  4. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  5. A New Plant Intelligent Behaviour Optimisation Algorithm for Solving Vehicle Routing Problem

    OpenAIRE

    Chagwiza, Godfrey

    2018-01-01

    A new plant intelligent behaviour optimisation algorithm is developed. The algorithm is motivated by the intelligent behaviour of plants and is implemented to solve benchmark vehicle routing problems of all sizes, with results compared to those in the literature. The results show that the new algorithm outperforms most of the algorithms it was compared to for very large and large vehicle routing problem instances. This is attributed to the ability of the plant to use previously stored memory to respo...

  6. ON THE USE OF LYTLE’S ALGORITHM FOR SOLVING TRAVELING SALESMAN PROBLEM AT DEVELOPING SUBURBAN ROUTE

    Directory of Open Access Journals (Sweden)

    S. Kantsedal

    2012-01-01

    Full Text Available Lytle’s algorithm, proposed for the exact solution of the traveling salesman problem, is described. Statistical characteristics of the solution time of Lytle’s algorithm on several problems and their modifications are specified. On the basis of the results obtained, limits for the practical application of the algorithm in the preparation of the route network are given.

  7. Automated delay estimation at signalized intersections : phase I concept and algorithm development.

    Science.gov (United States)

    2011-07-01

    Currently there are several methods to measure the performance of surface streets, but their capabilities in dynamically estimating vehicle delay are limited. The objective of this research is to develop a method to automate traffic delay estimation ...

  8. Development and validation of a MODIS colored dissolved organic matter (CDOM) algorithm in northwest Florida estuaries

    Science.gov (United States)

    Satellite remote sensing provides synoptic and frequent monitoring of water quality parameters that aids in determining the health of aquatic ecosystems and the development of effective management strategies. Northwest Florida estuaries are classified as optically-complex, or wat...

  9. High-resolution computational algorithms for simulating offshore wind turbines and farms: Model development and validation

    Energy Technology Data Exchange (ETDEWEB)

    Calderer, Antoni [Univ. of Minnesota, Minneapolis, MN (United States); Yang, Xiaolei [Stony Brook Univ., NY (United States); Angelidis, Dionysios [Univ. of Minnesota, Minneapolis, MN (United States); Feist, Chris [Univ. of Minnesota, Minneapolis, MN (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Ruehl, Kelley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guo, Xin [Univ. of Minnesota, Minneapolis, MN (United States); Boomsma, Aaron [Univ. of Minnesota, Minneapolis, MN (United States); Shen, Lian [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Stony Brook Univ., NY (United States)

    2015-10-30

    The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.

  10. Accuracy of SIAscopy for pigmented skin lesions encountered in primary care: development and validation of a new diagnostic algorithm.

    Science.gov (United States)

    Emery, Jon D; Hunter, Judith; Hall, Per N; Watson, Anthony J; Moncrieff, Marc; Walter, Fiona M

    2010-09-25

    Diagnosing pigmented skin lesions in general practice is challenging. SIAscopy has been shown to increase diagnostic accuracy for melanoma in referred populations. We aimed to develop and validate a scoring system for SIAscopic diagnosis of pigmented lesions in primary care. This study was conducted in two consecutive settings in the UK and Australia, and occurred in three stages: 1) Development of the primary care scoring algorithm (PCSA) on a sub-set of lesions from the UK sample; 2) Validation of the PCSA on a different sub-set of lesions from the same UK sample; 3) Validation of the PCSA on a new set of lesions from an Australian primary care population. Patients presenting with a pigmented lesion were recruited from 6 general practices in the UK and 2 primary care skin cancer clinics in Australia. The following data were obtained for each lesion: clinical history; SIAscan; digital photograph; and digital dermoscopy. SIAscans were interpreted by an expert and validated against histopathology where possible, or expert clinical review of all available data for each lesion. A total of 858 patients with 1,211 lesions were recruited. Most lesions were benign naevi (64.8%) or seborrhoeic keratoses (22.1%); 1.2% were melanoma. The original SIAscopic diagnostic algorithm did not perform well because of the higher prevalence of seborrhoeic keratoses and haemangiomas seen in primary care. A primary care scoring algorithm (PCSA) was developed to account for this. In the UK sample the PCSA had the following characteristics for the diagnosis of 'suspicious': sensitivity 0.50 (0.18-0.81); specificity 0.84 (0.78-0.88); PPV 0.09 (0.03-0.22); NPV 0.98 (0.95-0.99). In the Australian sample the PCSA had the following characteristics for the diagnosis of 'suspicious': sensitivity 0.44 (0.32-0.58); specificity 0.95 (0.93-0.97); PPV 0.52 (0.38-0.66); NPV 0.95 (0.92-0.96). 
In an analysis of lesions for which histological diagnosis was available (n = 111), the PCSA had a significantly
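The screening characteristics reported above follow from a standard 2x2 confusion table; the sketch below shows the computation with hypothetical counts (not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening metrics, as reported for the PCSA."""
    sensitivity = tp / (tp + fn)   # true positives among diseased
    specificity = tn / (tn + fp)   # true negatives among healthy
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# hypothetical counts, chosen only to illustrate the formulas
sens, spec, ppv, npv = diagnostic_metrics(tp=7, fp=15, fn=9, tn=280)
print(f"sens={sens:.2f} spec={spec:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```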

  11. Developing a b-tagging algorithm using soft muons at level-3 for the DO detector at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Das, Mayukh [Louisiana Tech. U.

    2005-01-01

    The current data-taking phase of the DØ detector at Fermilab, called Run II, is designed to aid the search for the Higgs boson. The neutral Higgs is postulated to have a mass of 117 GeV. One of the channels promising for the presence of this hypothetical particle is the decay of a b-quark into a muon. The process of identifying a b-quark in a jet using a muon as a reference is b-tagging with a muon tag. At the current data-taking and analysis rate, it would take too long to identify valid events. The triggering mechanism of the experiment, consisting of 3 levels of combined hardware, firmware and software, writes physics events at a rate of 50 Hz to data disks, with Level-3 alone accounting for the reduction from 1 kHz to 50 Hz. This large rejection is achieved through algorithms implemented in the search for key physics processes. The work presented in this dissertation is the development of a fast b-tagging algorithm using central-matched muons, called L3FBTagMU. Additional tools such as impact-parameter tracks and calorimeter jets have been used to tag b-jets. The dR, or differential increment in cone radius, is the most significant variable introduced. Plots within thresholds of dR for both Z → bb Monte Carlo and monitor-stream data show similar efficiency trends when checked against other parameters. The differential efficiencies saturate at dR within the 0.5 to 0.7 range. Differential bins of 0.1 intervals project an overall efficiency of tagging a b-jet in any event of 17.25 in data. This is in good agreement with theory. The algorithm is currently running online and offline through the DØ database repository. This work is primarily used by the b-id, B-Physics and Higgs Physics groups for their physics analyses, wherein the above b-tagging efficiency serves as a crucial tool. The prospect for optimizing the physics potential using this algorithm is very promising for current and future analyses.
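The dR variable mentioned above is the standard cone separation in (eta, phi) space between two objects, here a muon and a jet axis; a minimal sketch with illustrative values:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Cone separation dR = sqrt(deta^2 + dphi^2) between two detector objects."""
    deta = eta1 - eta2
    dphi = phi1 - phi2
    # wrap the azimuthal difference into (-pi, pi]
    while dphi > math.pi:
        dphi -= 2 * math.pi
    while dphi <= -math.pi:
        dphi += 2 * math.pi
    return math.hypot(deta, dphi)

# a muon lying 0.3 units in eta and 0.4 in phi from the jet axis
print(round(delta_r(1.5, 0.9, 1.2, 0.5), 3))   # -> 0.5
```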

  12. Knowing 'something is not right' is beyond intuition: development of a clinical algorithm to enhance surveillance and assist nurses to organise and communicate clinical findings.

    Science.gov (United States)

    Brier, Jessica; Carolyn, Moalem; Haverly, Marsha; Januario, Mary Ellen; Padula, Cynthia; Tal, Ahuva; Triosh, Henia

    2015-03-01

    To develop a clinical algorithm to guide nurses' critical thinking through systematic surveillance, assessment, actions required and communication strategies. To achieve this, an international, multiphase project was initiated. Patients receive hospital care postoperatively because they require the skilled surveillance of nurses. Effective assessment of postoperative patients is essential for early detection of clinical deterioration and optimal care management. Despite the significant amount of time devoted to surveillance activities, there is lack of evidence that nurses use a consistent, systematic approach in surveillance, management and communication, potentially leading to less optimal outcomes. Several explanations for the lack of consistency have been suggested in the literature. Mixed methods approach. Retrospective chart review; semi-structured interviews conducted with expert nurses (n = 10); algorithm development. Themes developed from the semi-structured interviews, including (1) complete, systematic assessment, (2) something is not right, (3) validating with others, (4) influencing factors and (5) frustration with lack of response when communicating findings, were used as the basis for development of the Surveillance Algorithm for Post-Surgical Patients. The algorithm proved beneficial based on limited use in clinical settings. Further work is needed to fully test it in education and practice. The Surveillance Algorithm for Post-Surgical Patients represents the approach of expert nurses, and serves to guide less expert nurses' observations, critical thinking, actions and communication. Based on this approach, the algorithm assists nurses to develop skills promoting early detection, intervention and communication in cases of patient deterioration. © 2014 John Wiley & Sons Ltd.

  13. Development of a signal-analysis algorithm for the ZEUS transition-radiation detector under application of a neural network

    International Nuclear Information System (INIS)

    Wollschlaeger, U.

    1992-07-01

    The aim of this thesis was to develop a procedure for analysing the data of the transition-radiation detector at ZEUS. For this purpose a neural network was applied, and it was first studied which separation power between electrons and pions can be reached by this procedure. It was shown that, within the error limits, neural nets yield results as good as standard algorithms (total charge, cluster analysis). At an electron efficiency of 90%, pion contaminations in the range 1%-2% were reached. Furthermore it could be confirmed that, for the present application field, neural networks can be considered robust and relatively insensitive to external perturbations. For application in the experiment, besides the separation power the timing behaviour is also important. The requirement to keep dead times small did not allow the application of standard methods. The time available for the signal analysis was estimated by a simulation. To test the processing time in a neural network, the corresponding algorithm was subsequently implemented in assembler code for the digital signal processor DSP56001. (orig./HSI) [de

  14. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    Science.gov (United States)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
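The intensity measure P32 used by GEOFRAC is simply the cumulative fracture area per unit volume; a minimal sketch with made-up fracture areas:

```python
def p32(fracture_areas, region_volume):
    """Fracture intensity P32: cumulative fracture area per unit volume (m^2/m^3)."""
    return sum(fracture_areas) / region_volume

# hypothetical polygonal fracture areas (m^2) inside a 10 m cube
areas = [4.2, 7.5, 1.8, 3.0, 5.5]
print(p32(areas, region_volume=10.0 ** 3))   # -> 0.022
```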

  15. The mass-action law based algorithm for cost-effective approach for cancer drug discovery and development.

    Science.gov (United States)

    Chou, Ting-Chao

    2011-01-01

    The mass-action law based system analysis via mathematical induction and deduction leads to a generalized theory and algorithm that allows computerized simulation of dose-effect dynamics with small-size experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. After using the median-effect principle as the common denominator, its applications are mechanism-independent, drug unit-independent, and dynamic order-independent, and can be used generally for single-drug analysis or for multiple drug combinations in constant or non-constant ratios. Since the "median" is the common link and universal reference point in biological systems, these general features enable computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect, to improve the conventional divergent approach and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development.
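The median-effect equation referred to above is fa/fu = (D/Dm)^m with fu = 1 - fa, where D is the dose, Dm the median-effect dose and m the sigmoidicity coefficient; solving for the fraction affected gives the small sketch below (parameter values are illustrative):

```python
def fraction_affected(dose, dm, m):
    """Median-effect equation: fa/fu = (D/Dm)^m with fu = 1 - fa,
    rearranged to fa = 1 / (1 + (Dm/D)^m)."""
    return 1.0 / (1.0 + (dm / dose) ** m)

# at the median-effect dose Dm the fraction affected is 0.5 by definition
print(fraction_affected(dose=2.0, dm=2.0, m=1.5))          # -> 0.5
print(round(fraction_affected(dose=8.0, dm=2.0, m=1.5), 3))
```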

  16. Development and validation of an algorithm for the study of sleep using a biometric shirt in young healthy adults.

    Science.gov (United States)

    Pion-Massicotte, Joëlle; Godbout, Roger; Savard, Pierre; Roy, Jean-François

    2018-02-23

    Portable polysomnography is often too complex and encumbering for recording sleep at home. We recorded sleep using a biometric shirt (electrocardiogram sensors, respiratory inductance plethysmography bands and an accelerometer) in 21 healthy young adults recorded in a sleep laboratory for two consecutive nights, together with standard polysomnography. Polysomnographic recordings were scored using standard methods. An algorithm was developed to classify the biometric shirt recordings into rapid eye movement sleep, non-rapid eye movement sleep and wake. The algorithm was based on breathing rate and heart rate variability, body movement, and included a correction for sleep onset and offset. The overall mean percentage of agreement between the two sets of recordings was 77.4%; when non-rapid eye movement and rapid eye movement sleep epochs were grouped together, it increased to 90.8%. The overall kappa coefficient was 0.53. Five of the seven sleep variables were significantly correlated. The findings of this pilot study indicate that this simple portable system could be used to estimate the general sleep pattern of young healthy adults. © 2018 European Sleep Research Society.
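The percentage of agreement and kappa coefficient reported above are computed epoch by epoch between the two scorings; a minimal sketch with hypothetical epoch labels:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of epochs on which the two scorings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected epoch-by-epoch agreement between two scorers."""
    n = len(a)
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)   # expected chance agreement
    return (po - pe) / (1 - pe)

# hypothetical epoch labels: W = wake, N = NREM, R = REM
psg   = ["W", "N", "N", "N", "R", "R", "N", "W"]
shirt = ["W", "N", "N", "R", "R", "R", "N", "N"]
print(round(percent_agreement(psg, shirt), 3), round(cohens_kappa(psg, shirt), 3))
```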

  17. A filtered backprojection algorithm with characteristics of the iterative Landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
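For reference, the Landweber iteration the note emulates is x_{k+1} = x_k + lam * A^T (b - A x_k); a minimal sketch on a toy linear system (not the paper's tomographic setting):

```python
def landweber(A, b, lam, iters):
    """Landweber iteration x_{k+1} = x_k + lam * A^T (b - A x_k) for Ax = b.
    Converges for 0 < lam < 2 / sigma_max(A)^2."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            x[j] += lam * sum(A[i][j] * r[i] for i in range(m))
    return x

# a small well-posed system with solution x = (1, 2)
A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 2.0]
x = landweber(A, b, lam=0.2, iters=200)
print([round(v, 4) for v in x])   # -> [1.0, 2.0]
```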

  18. Disaggregating reserve-to-production ratios: An algorithm for United States oil and gas reserve development

    Science.gov (United States)

    Williams, Charles William

    Reserve-to-production ratios for oil and gas development are utilized by oil and gas producing states to monitor oil and gas reserve and production dynamics. These ratios are used to determine production levels for the manipulation of oil and gas prices while maintaining adequate reserves for future development. These aggregate reserve-to-production ratios do not provide information concerning development cost and the best time necessary to develop newly discovered reserves. Oil and gas reserves are a semi-finished inventory because development of the reserves must take place in order to implement production. These reserves are considered semi-finished in that they are not counted unless it is economically profitable to produce them. The development of these reserves is encouraged by profit maximization economic variables which must consider the legal, political, and geological aspects of a project. This development is comprised of a myriad of incremental operational decisions, each of which influences profit maximization. The primary purpose of this study was to provide a model for characterizing a single product multi-period inventory/production optimization problem from an unconstrained quantity of raw material which was produced and stored as inventory reserve. This optimization was determined by evaluating dynamic changes in new additions to reserves and the subsequent depletion of these reserves with the maximization of production. A secondary purpose was to determine an equation for exponential depletion of proved reserves which presented a more comprehensive representation of reserve-to-production ratio values than an inadequate and frequently used aggregate historical method. The final purpose of this study was to determine the most accurate delay time for a proved reserve to achieve maximum production. This calculated time provided a measure of the discounted cost and calculation of net present value for developing new reserves. 
This study concluded that
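The aggregate reserve-to-production ratio and the exponential depletion model discussed above can be sketched as follows (all figures illustrative):

```python
import math

def rp_ratio(reserves, annual_production):
    """Aggregate reserve-to-production ratio: years of supply at current output."""
    return reserves / annual_production

def exponential_depletion(q0, decline_rate, years):
    """Annual production under exponential decline q(t) = q0 * exp(-a t)."""
    return [q0 * math.exp(-decline_rate * t) for t in range(years)]

print(rp_ratio(reserves=900.0, annual_production=60.0))      # -> 15.0 years
q = exponential_depletion(q0=60.0, decline_rate=1 / 15.0, years=3)
print([round(v, 2) for v in q])
```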

  19. Development of an algorithm for quantifying extremity biological tissue; Desenvolvimento de um algoritmo quantificador de tecido biologico de extremidade

    Energy Technology Data Exchange (ETDEWEB)

    Pavan, Ana L.M.; Miranda, Jose R.A., E-mail: analuiza@ibb.unesp.br, E-mail: jmiranda@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (IBB/UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Dept. de Fisica e Biofisica; Pina, Diana R. de, E-mail: drpina@frnb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (FMB/UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Dept. de Doencas Tropicas e Diagnostico por Imagem

    2013-07-01

    Computed radiography (CR) has become the most widely used device for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis obtained via CR are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used for optimization of these images are based on international protocols. It is therefore necessary to compose radiographic techniques for the CR system that provide a reliable medical diagnosis, with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was developed. Using Matlab® software, a computational algorithm was developed that is able to quantify the average thickness of soft tissue and bone present in the anatomical region under study, as well as the corresponding thickness in simulator materials (aluminium and Lucite). This was possible through the application of masking and Gaussian histogram-removal techniques. As a result, an average soft-tissue thickness of 18.97 mm and a bone-tissue thickness of 6.15 mm were obtained, with equivalents in simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agreed with the average thickness of the biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom.
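A minimal sketch of the kind of threshold-based tissue quantification described, assuming illustrative Hounsfield-unit cut-offs (the paper's actual masking and Gaussian histogram-removal steps are not reproduced):

```python
def tissue_thickness(hu_profile, voxel_mm, soft_range=(-100, 300), bone_min=300):
    """Classify a CT line profile by Hounsfield thresholds (assumed values) and
    accumulate the traversed thickness of soft tissue and bone."""
    soft = bone = 0.0
    for hu in hu_profile:
        if soft_range[0] <= hu < soft_range[1]:
            soft += voxel_mm
        elif hu >= bone_min:
            bone += voxel_mm
    return soft, bone

# hypothetical profile through a finger: air, soft tissue, bone, soft tissue, air
profile = [-1000, -950, 40, 60, 55, 700, 900, 800, 50, 45, -980]
print(tissue_thickness(profile, voxel_mm=0.5))   # -> (2.5, 1.5)
```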

  20. From Folktales to Algorithms: Developing the Teacher's Role as Principal Storyteller in the Classroom

    Science.gov (United States)

    Daniel, A. K.

    2007-01-01

    Whilst much thought has been given to the use of storytelling as a tool in the development of children's oral skills, little has been considered with regard to the role of the teacher as a model of good practice in storytelling, as principal storyteller in the classroom. In this paper I have tried to connect a pragmatic approach to classroom…

  1. Development of a Water Treatment Plant Operation Manual Using an Algorithmic Approach.

    Science.gov (United States)

    Counts, Cary A.

    This document describes the steps to be followed in the development of a prescription manual for training of water treatment plant operators. Suggestions on how to prepare both flow and narrative prescriptions are provided for a variety of water treatment systems, including: raw water, flocculation, rapid sand filter, caustic soda feed, alum feed,…

  2. Scheduling language and algorithm development study. Volume 1: Study summary and overview

    Science.gov (United States)

    1974-01-01

    A high-level computer programming language and a program library were developed to be used in writing programs for scheduling complex systems such as the space transportation system. The objectives and requirements of the study are summarized, and unique features of the specified language and program library are described and related to the rationale behind those objectives and requirements.

  3. Development Algorithm of the Technological Process of Manufacturing Gas Turbine Parts by Selective Laser Melting

    Science.gov (United States)

    Sotov, A. V.; Agapovichev, A. V.; Smelov, V. G.; Kyarimov, R. R.

    2018-01-01

    Selective laser melting (SLM) technology allows products to be made from powders of aluminium, titanium, heat-resistant alloys and stainless steels. Today SLM technology is increasingly used for the manufacture of functional parts. This in turn requires the development of a methodology for designing technological processes (TP) for manufacturing parts, including databases of standard TP. Use of such a methodology makes product quality less dependent on the qualification of the technologist, and also reduces the labor and energy consumed in developing a TP, thanks to the databases of standard TP integrated into the methodology. As a validation of the developed methodology, a study of the influence of the laser-emission modes on the surface roughness of the synthesized material is presented. It was established that the best roughness values of the specimens in the longitudinal and transverse directions are 1.98 μm and 3.59 μm, respectively. These roughness values were obtained at a specific energy density of 6.25 J/mm2, corresponding to a power of 200 W, a scanning speed of 400 mm/s and a hatch distance of 0.08 mm.

  4. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting the optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm was developed employing the two ANN models. • The algorithm showed improved thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm to improve the thermal comfort and building energy efficiency of accommodation buildings in the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures, and the other predicted the time required for restoring the current indoor temperature to the normal set-point temperature. Using numerical simulation methods, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. The test results showed that the two ANN models achieved acceptable prediction accuracy when applied in the control algorithm. In addition, the algorithm based on the two ANN models provided a more comfortable and energy-efficient indoor thermal environment than the two conventional control methods, which employed, respectively, a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period. The operating range was 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with the two predictive and adaptive ANN models can be used to design a more comfortable and energy-efficient indoor thermal environment for accommodation buildings in a comprehensive manner.
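The control decision described can be sketched as choosing the setback temperature with the lowest predicted cooling energy whose predicted restore time still meets the occupancy deadline; the stand-in linear models below merely take the place of the paper's trained ANNs and use invented coefficients:

```python
def choose_setback(setbacks, predict_energy, predict_restore_minutes, deadline_minutes):
    """Pick the setback temperature with the lowest predicted cooling energy whose
    predicted restore time still meets the occupancy deadline."""
    feasible = [s for s in setbacks if predict_restore_minutes(s) <= deadline_minutes]
    return min(feasible, key=predict_energy)

# stand-in linear models in place of the two trained ANNs (illustrative only)
def energy(setback):     # deeper setback -> less cooling energy during the unoccupied period
    return 100.0 - 8.0 * (setback - 26.0)

def restore(setback):    # deeper setback -> longer time to restore the set-point
    return 12.0 * (setback - 26.0)

print(choose_setback([26.0, 27.0, 28.0], energy, restore, deadline_minutes=15.0))  # -> 27.0
```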

  5. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains (computer graphics, geographic information systems (GIS), robotics, and others) in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  6. Development of an algorithm for phenotypic screening of carbapenemase-producing Enterobacteriaceae in the routine laboratory.

    Science.gov (United States)

    Robert, Jérôme; Pantel, Alix; Merens, Audrey; Meiller, Elodie; Lavigne, Jean-Philippe; Nicolas-Chanoine, Marie-Hélène

    2017-01-17

    Carbapenemase-producing Enterobacteriaceae (CPE) are difficult to identify among carbapenem non-susceptible Enterobacteriaceae (NSE). We designed phenotypic strategies giving priority to high sensitivity for screening putative CPE before further testing. The presence of carbapenemase-encoding genes in ertapenem NSE (MIC > 0.5 mg/l) consecutively isolated in 80 French laboratories between November 2011 and April 2012 was determined by the Check-MDR-CT103 array method. Using the Mueller-Hinton (MH) disk diffusion method, clinical diameter breakpoints of carbapenems other than ertapenem, piperacillin+tazobactam, ticarcillin+clavulanate and cefepime, as well as diameter cut-offs for these antibiotics and temocillin, were evaluated alone or combined to determine their performance (sensitivity, specificity, positive and negative likelihood ratios) for identifying putative CPE among these ertapenem-NSE isolates. To increase the screening specificity, these antibiotics were also tested on cloxacillin-containing MH when carbapenem-NSE isolates belonged to species producing chromosomal cephalosporinase (AmpC), Escherichia coli excepted. Out of the 349 ertapenem NSE, 52 (14.9%) were CPE, including 39 producing an OXA-48 group carbapenemase, eight KPC and five MBL. A screening strategy based on the following diameter cut-offs, ticarcillin+clavulanate <15 mm, temocillin <15 mm, meropenem or imipenem <22 mm, and cefepime <26 mm, showed 100% sensitivity and 68.1% specificity with the best combination of likelihood ratios. The specificity increased when a diameter cut-off <32 mm for imipenem (76.1%) or meropenem (78.8%), further tested on cloxacillin-containing MH, was added to the previous strategy for AmpC-producing isolates. The proposed strategies, which increase the likelihood of CPE among ertapenem-NSE isolates, should be considered as a surrogate for carbapenemase production before further CPE confirmatory testing.
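The positive and negative likelihood ratios used to compare the strategies follow directly from sensitivity and specificity; a minimal sketch using the figures reported for the base strategy:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios of a screening rule."""
    lr_pos = sensitivity / (1.0 - specificity)   # odds multiplier after a positive result
    lr_neg = (1.0 - sensitivity) / specificity   # odds multiplier after a negative result
    return lr_pos, lr_neg

# figures reported above for the base strategy: 100% sensitivity, 68.1% specificity
lr_pos, lr_neg = likelihood_ratios(1.00, 0.681)
print(round(lr_pos, 2), round(lr_neg, 2))   # -> 3.13 0.0
```

A negative likelihood ratio of zero reflects the strategy's design goal: a negative screen rules CPE out.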

  7. Development experience and strategy for the combined algorithm on the alarm processing and diagnosis

    International Nuclear Information System (INIS)

    Chung, Hak-Yeong

    1997-01-01

    In this paper, I presented the development experience on alarm processing and fault diagnosis achieved from early 1988 to late 1995. The scope covered includes the prototype stage, the development of an on-line operator-aid system, and an intelligent human-machine interface system. In the second part, I proposed a new method (APEXS) of multi-alarm processing that selects the causal alarm(s) among occurring alarms by using the time information of each occurred alarm and alarm-tree knowledge, together with a corresponding diagnosis method based on the selected causal alarm(s) using a prescribed qualitative model. With a larger knowledge base about the plant and some modifications suitable for the real environment, APEXS will be able to adapt to a real steam power plant. (author). 18 refs, 3 figs, 1 tab

  8. Development of a new prior knowledge based image reconstruction algorithm for the cone-beam-CT in radiation therapy

    International Nuclear Information System (INIS)

    Vaegler, Sven

    2016-01-01

    follow-up reconstructed images are not appropriately considered so far. These deviations may result from changes in anatomy, including tumour shrinkage and loss of weight, and may degrade the quality of the reconstructed images. Deformable registration methods that adapt the prior images adequately can compensate for this shortcoming of PICCS. Such registration techniques, however, suffer from limited accuracy and a much higher computation time for the overall reconstruction process. Therefore, the aim of this thesis was to develop a new knowledge-based reconstruction algorithm that additionally incorporates locally dependent reliability information about the prior images into the reconstruction algorithm. The basic idea of the new algorithm is the assumption that the prior images are composed of areas with large and areas with small deviations. Accordingly, areas of the prior image where substantial deformations due to motion or changes in structure over the time series were expected were marked as variable; these regions no longer provide valuable structural information for the anticipated result. In contrast, 'a priori' information was assigned to structurally stationary areas where no changes were expected. Based on this composition, a weighting matrix was generated that accounts for the strength of these variations during reconstruction. The new algorithm was tested in several feasibility studies against common dose-reduction strategies. These strategies include reducing the number of projections, acquiring projections with strong noise, and reducing the acquisition space. The main aim of this work was to demonstrate the gain in image quality when prior images with major variations are used, compared with standard reconstruction techniques. The studies were performed with a computer phantom and, in particular, with experimental data acquired with the clinical CBCT.
The new reconstruction

  9. Previously unknown species of Aspergillus.

    Science.gov (United States)

    Gautier, M; Normand, A-C; Ranque, S

    2016-08-01

    The use of multi-locus DNA sequence analysis has led to the description of previously unknown 'cryptic' Aspergillus species, whereas classical morphology-based identification of Aspergillus remains limited to the section or species-complex level. The current literature highlights two main features concerning these 'cryptic' Aspergillus species. First, the prevalence of such species in clinical samples is relatively high compared with emergent filamentous fungal taxa such as Mucorales, Scedosporium or Fusarium. Second, it is clearly important to identify these species in the clinical laboratory because of the high frequency of antifungal drug-resistant isolates of such Aspergillus species. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been shown to enable the identification of filamentous fungi with an accuracy similar to that of DNA sequence-based methods. As MALDI-TOF MS is well suited to the routine clinical laboratory workflow, it facilitates the identification of these 'cryptic' Aspergillus species at the routine mycology bench. The rapid establishment of enhanced filamentous fungi identification facilities will lead to a better understanding of the epidemiology and clinical importance of these emerging Aspergillus species. Based on routine MALDI-TOF MS-based identification results, we provide original insights into the key interpretation issues of a positive Aspergillus culture from a clinical sample. Which ubiquitous species that are frequently isolated from air samples are rarely involved in human invasive disease? Can both the species and the type of biological sample indicate Aspergillus carriage, colonization or infection in a patient? Highly accurate routine filamentous fungi identification is central to enhance the understanding of these previously unknown Aspergillus species, with a vital impact on further improved patient care. Copyright © 2016 European Society of Clinical Microbiology and

  10. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.

  11. Calculation methods and algorithms development of dynamic loadings on NPP secondary circuit equipment at shock and pulse actions

    International Nuclear Information System (INIS)

    Kuznetsov, D.V.; Kormilitsyn, V.M.; Proskuryakov, K.N.

    2010-01-01

    Calculation results for acoustic parameter fluctuations in the low-pressure regenerative heating system of an NPP with a WWER-1000 type reactor are presented. The spectral structure of the acoustic fluctuations is shown to depend on the configuration of the secondary-circuit equipment, its geometrical dimensions and the operating mode. Estimates of the natural oscillation frequencies of the working-medium pressure in the secondary-circuit equipment are given. The developed calculation methods and algorithms are intended for revealing and preventing the conditions under which vibration resonances arise in elements of the secondary-circuit equipment due to acoustic oscillations in the working medium, both under operating conditions and at the design stage of the secondary circuit of an NPP with a WWER-1000 type reactor. An analysis of the dependence of the pass-band on the operating mode was carried out to address this problem.

  12. Development of a multi-objective PBIL evolutionary algorithm applied to a nuclear reactor core reload optimization problem

    International Nuclear Information System (INIS)

    Machado, Marcelo D.; Schirru, Roberto

    2005-01-01

    The nuclear reactor core reload optimization problem consists in finding a pattern of partially burned-up and fresh fuels that optimizes the plant's next operation cycle. This optimization problem has been traditionally solved using an expert's knowledge, but recently artificial intelligence techniques have also been applied successfully. The artificial intelligence optimization techniques generally have a single objective. However, most real-world engineering problems, including nuclear core reload optimization, have more than one objective (multi-objective) and these objectives are usually conflicting. The aim of this work is to develop a tool to solve multi-objective problems based on the Population-Based Incremental Learning (PBIL) algorithm. The new tool is applied to solve the Angra 1 PWR core reload optimization problem with the purpose of creating a Pareto surface, so that a pattern selected from this surface can be applied for the plant's next operation cycle. (author)
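    The record above describes a multi-objective extension of PBIL; as a rough illustration of the underlying single-objective PBIL idea only (the function names, parameter values and toy objective below are our own, not from the paper), a minimal sketch is:

```python
import random

def pbil(fitness, n_bits, pop_size=20, lr=0.1, generations=50, seed=0):
    """Single-objective PBIL sketch: evolve a probability vector toward
    bit strings with high fitness."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # initial probability of each bit being 1
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        # Sample a population from the current probability vector.
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]
        f = fitness(elite)
        if f > best_fit:
            best, best_fit = elite, f
        # Shift the probability vector toward the elite individual.
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, elite)]
    return best, best_fit

# Toy objective: maximize the number of 1-bits ("one-max").
solution, value = pbil(sum, n_bits=16)
```

    A multi-objective variant such as the one in the paper would instead maintain a set of non-dominated individuals and use them to build the Pareto surface.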

  13. Development of a Two-Phase Flow Analysis Code based on an Unstructured-Mesh SIMPLE Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Tae; Park, Ik Kyu; Cho, Heong Kyu; Yoon, Han Young; Kim, Kyung Doo; Jeong, Jae Jun

    2008-09-15

    For analyses of multi-phase flows in a water-cooled nuclear power plant, a three-dimensional SIMPLE-algorithm based hydrodynamic solver, CUPID-S, has been developed. As governing equations, it adopts a two-fluid three-field model for two-phase flows, the three fields representing a continuous liquid field, a dispersed droplet field, and a vapour field. The governing equations are discretized by a finite volume method on an unstructured grid to handle the geometrical complexity of nuclear reactors. The phasic momentum equations are coupled and solved with a sparse block Gauss-Seidel matrix solver to increase numerical stability. The pressure correction equation, derived by summing the phasic volume fraction equations, is applied on the unstructured mesh in the context of a cell-centered co-located scheme. This paper presents the numerical method and preliminary results of the calculations.

  14. The development of flux-split algorithms for flows with non-equilibrium thermodynamics and chemical reactions

    Science.gov (United States)

    Grossman, B.; Cinella, P.

    1988-01-01

    A finite-volume method for the numerical computation of flows with nonequilibrium thermodynamics and chemistry is presented. A thermodynamic model is described which simplifies the coupling between the chemistry and thermodynamics and also results in the retention of the homogeneity property of the Euler equations (including all the species continuity and vibrational energy conservation equations). Flux-splitting procedures are developed for the fully coupled equations involving fluid dynamics, chemical production and thermodynamic relaxation processes. New forms of flux-vector split and flux-difference split algorithms are embodied in a fully coupled, implicit, large-block structure, including all the species conservation and energy production equations. Several numerical examples are presented, including high-temperature shock tube and nozzle flows. The methodology is compared to other existing techniques, including spectral and central-differenced procedures, and favorable comparisons are shown regarding accuracy, shock-capturing and convergence rates.

  15. Graph 500 on OpenSHMEM: Using a Practical Survey of Past Work to Motivate Novel Algorithmic Developments

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, Max [Rice Univ., Houston, TX (United States); Pritchard Jr., Howard Porter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budimlic, Zoran [Rice Univ., Houston, TX (United States); Sarkar, Vivek [Rice Univ., Houston, TX (United States)

    2016-12-22

    Graph500 [14] is an effort to offer a standardized benchmark across large-scale distributed platforms which captures the behavior of common communication-bound graph algorithms. Graph500 differs from other large-scale benchmarking efforts (such as HPL [6] or HPGMG [7]) primarily in the irregularity of its computation and data access patterns. The core computational kernel of Graph500 is a breadth-first search (BFS) implemented on an undirected graph. The output of Graph500 is a spanning tree of the input graph, usually represented by a predecessor mapping for every node in the graph. The Graph500 benchmark defines several pre-defined input sizes for implementers to test against. This report summarizes an investigation into implementing the Graph500 benchmark on OpenSHMEM, and focuses on first building a strong and practical understanding of the strengths and limitations of past work before proposing and developing novel extensions.
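    As a minimal, serial illustration of the kernel described above (not the Graph500 reference code or its OpenSHMEM port), a breadth-first search returning the predecessor map of the spanning tree can be sketched as:

```python
from collections import deque

def bfs_predecessors(adj, root):
    """Breadth-first search producing the predecessor map that Graph500
    uses to represent its output spanning tree."""
    pred = {root: root}  # convention here: the root is its own parent
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):
            if v not in pred:       # the first visit fixes the parent
                pred[v] = u
                frontier.append(v)
    return pred

# Small undirected graph given as an adjacency map.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
tree = bfs_predecessors(adj, 0)   # {0: 0, 1: 0, 2: 0, 3: 1, 4: 3}
```

    The distributed versions surveyed in the report partition the vertex set across processing elements and exchange frontier updates, but produce the same predecessor-map output.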

  16. Development and evaluation of an algorithm-based tool for Medication Management in nursing homes: the AMBER study protocol.

    Science.gov (United States)

    Erzkamp, Susanne; Rose, Olaf

    2018-04-20

    Residents of nursing homes are susceptible to risks from medication. Medication Reviews (MRs) can improve clinical outcomes and the quality of medication therapy. Limited resources and barriers between healthcare practitioners are potential obstructions to performing MRs in nursing homes. Focusing on frequent and relevant problems can support pharmacists in the provision of pharmaceutical care services. This study aims to develop and evaluate an algorithm-based tool that facilitates the provision of Medication Management in clinical practice. The study is subdivided into three phases. In phase I, semistructured interviews with healthcare practitioners and patients will be performed, and a mixed-methods approach will be chosen. Qualitative content analysis and rating of the aspects concerning the frequency and relevance of problems in the medication process in nursing homes will be performed. In phase II, a systematic review of the current literature on problems and interventions will be conducted. The findings will be presented narratively. The results of both phases will be combined to develop an algorithm for MRs. For further refinement of the aspects detected, a Delphi survey will be conducted. In conclusion, a tool for clinical practice will be created. In phase III, the tool will be tested on MRs in nursing homes. In addition, effectiveness, acceptance, feasibility and reproducibility will be assessed. The primary outcome of phase III will be the reduction of drug-related problems (DRPs), which will be detected using the tool. The secondary outcomes will be the proportion of DRPs, the acceptance of pharmaceutical recommendations, the time required to use the tool, and inter-rater reliability. The study intervention is approved by the local Ethics Committee. The findings of the study will be presented at national and international scientific conferences and published in peer-reviewed journals. DRKS00010995.

  17. Development of an Outdoor Temperature Based Control Algorithm for Residential Mechanical Ventilation Control

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tang, Yihuan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-08-01

    The Incremental Ventilation Energy (IVE) model developed in this study combines the output of simple air exchange models with a limited set of housing characteristics to estimate the associated change in energy demand of homes. The IVE model was designed specifically to enable modellers to use existing databases of housing characteristics to determine the impact of ventilation policy change on a population scale. When applied to US homes with limited parameterisation, the IVE model's estimates of energy change are shown to be comparable to those of a well-validated, complex residential energy model.

  18. Development of Subspace-based Hybrid Monte Carlo-Deterministic Algorithms for Reactor Physics Calculations

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-01-01

    The development of hybrid Monte Carlo-deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated. Given the favorable results obtained, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.

  19. Development of Real-Time Precise Positioning Algorithm Using GPS L1 Carrier Phase Data

    Directory of Open Access Journals (Sweden)

    Jeong-Ho Joh

    2002-12-01

    Full Text Available We have developed a Real-time Phase DAta Processor (RPDAP) for the GPS L1 carrier, and tested its positioning accuracy against results of real-time kinematic (RTK) positioning. While the quality of conventional L1 RTK positioning depends strongly on receiving conditions, the RPDAP gives more stable positioning results because it uses a different set of common GPS satellites, selected by elevation mask angle and signal strength. In this paper, we demonstrate the characteristics of the RPDAP compared with the L1 RTK technique, and discuss several ways to improve the RPDAP for precise real-time positioning using low-cost GPS receivers. Once the weak points discussed here are corrected, the RPDAP can be used in precise real-time applications such as precise car navigation and precise personal location services.

  20. A fuzzy hill-climbing algorithm for the development of a compact associative classifier

    Science.gov (United States)

    Mitra, Soumyaroop; Lam, Sarah S.

    2012-02-01

    Classification, a data mining technique, has widespread applications including medical diagnosis, targeted marketing, and others. Knowledge discovery from databases in the form of association rules is one of the important data mining tasks. An integrated approach, classification based on association rules, has drawn the attention of the data mining community over the last decade. While attention has been mainly focused on increasing classifier accuracies, little effort has been devoted to building interpretable and less complex models. This paper discusses the development of a compact associative classification model using a hill-climbing approach and fuzzy sets. The proposed methodology builds the rule base by selecting rules which contribute towards increasing training accuracy, thus balancing classification accuracy with the number of classification association rules. The results indicated that the proposed associative classification model can achieve competitive accuracies on benchmark datasets with continuous attributes and lend better interpretability, when compared with other rule-based systems.
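    The greedy rule selection described above can be sketched as follows; the data layout, helper names and toy data are hypothetical illustrations, not taken from the paper (which additionally uses fuzzy sets for continuous attributes):

```python
def accuracy(rules, default, data):
    """Fraction of records classified correctly by an ordered rule list;
    the first rule whose items are all present in the record fires."""
    correct = 0
    for record, label in data:
        pred = default
        for cond, cls in rules:
            if cond <= record:      # rule condition is a subset of the record
                pred = cls
                break
        correct += pred == label
    return correct / len(data)

def hill_climb(candidates, default, data):
    """Greedily keep a candidate rule only if it raises training accuracy,
    yielding a compact rule base."""
    selected = []
    best = accuracy(selected, default, data)
    for rule in candidates:
        trial = accuracy(selected + [rule], default, data)
        if trial > best:
            selected.append(rule)
            best = trial
    return selected, best

# Toy data: records are item sets, labels are classes.
data = [({"a"}, "x"), ({"a", "b"}, "x"), ({"b"}, "y"), ({"c"}, "y")]
candidates = [(frozenset({"a"}), "x"), (frozenset({"b"}), "y"),
              (frozenset({"c"}), "x")]
rules, acc = hill_climb(candidates, "y", data)   # keeps only one rule
```

    On this toy example the single rule {a} -> x plus the default class already classifies everything correctly, so the remaining candidates are rejected, which is exactly the compactness the paper aims for.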

  1. Weak and Strong Convergence of an Algorithm for the Split Common Fixed-Point of Asymptotically Quasi-Nonexpansive Operators

    Directory of Open Access Journals (Sweden)

    Yazheng Dang

    2013-01-01

    Inspired by Moudafi (2010), we propose an algorithm for solving the split common fixed-point problem for a wide class of asymptotically quasi-nonexpansive operators, and the weak and strong convergence of the algorithm are shown under suitable conditions in Hilbert spaces. The algorithm and its convergence results improve and develop previous results for split feasibility problems.

  2. Design of a test system for the development of advanced video chips and software algorithms.

    Science.gov (United States)

    Falkinger, Marita; Kranzfelder, Michael; Wilhelm, Dirk; Stemp, Verena; Koepf, Susanne; Jakob, Judith; Hille, Andreas; Endress, Wolfgang; Feussner, Hubertus; Schneider, Armin

    2015-04-01

    Visual deterioration is a crucial problem in minimally invasive surgery, impeding surgical performance. Modern image processing technologies appear to be promising approaches for further image optimization by digital elimination of disturbing particles. To make them mature for clinical application, an experimental test environment for evaluation of possible image interferences would be most helpful. After a comprehensive review of the literature (MEDLINE, IEEE, Google Scholar), a test bed for generation of artificial surgical smoke and mist was developed. Smoke was generated by a fog machine and mist produced by a nebulizer. The size of the resulting droplets was measured microscopically and compared with biological smoke (electrocautery) and mist (ultrasound dissection) emerging during minimally invasive surgical procedures. The particles resulting from artificial generation are in the range of the size of biological droplets. For surgical smoke, the droplet dimension produced by the fog machine was 4.19 µm compared with 4.65 µm generated by electrocautery during a surgical procedure. The size of artificial mist produced by the nebulizer ranged between 45.38 and 48.04 µm compared with the range between 30.80 and 56.27 µm that was generated during minimally invasive ultrasonic dissection. A suitable test bed for artificial smoke and mist generation was developed, revealing almost identical droplet characteristics to those produced during minimally invasive surgical procedures. The possibility to generate image interferences comparable to those occurring during laparoscopy (electrocautery and ultrasound dissection) provides a basis for the future development of image processing technologies for clinical applications. © The Author(s) 2014.

  3. Development of a Quasi-3D Multiscale Modeling Framework: Motivation, basic algorithm and preliminary results

    Directory of Open Access Journals (Sweden)

    Joon-Hee Jung

    2010-11-01

    A new framework for modeling the atmosphere, which we call the quasi-3D (Q3D) multi-scale modeling framework (MMF), is developed with the objective of including cloud-scale three-dimensional effects in a GCM without necessarily using a global cloud-resolving model (CRM). It combines a GCM with a Q3D CRM whose horizontal domain consists of two perpendicular sets of channels, each of which contains a locally 3D grid-point array. For computational efficiency, the widths of the channels are chosen to be narrow. Thus, it is crucial to select a proper lateral boundary condition to realistically simulate the statistics of cloud and cloud-associated processes. Among the various possibilities, a periodic lateral boundary condition is chosen for the deviations from background fields that are obtained by interpolations from the GCM grid points. Since the deviations tend to vanish as the GCM grid size approaches that of the CRM, the whole system of the Q3D MMF can converge to a fully 3D global CRM. Consequently, the horizontal resolution of the GCM can be freely chosen depending on the objective of application, without changing the formulation of model physics. To evaluate the newly developed Q3D CRM in an efficient way, idealized experiments have been performed using a small horizontal domain. In these tests, the Q3D CRM uses only one pair of perpendicular channels with only two grid points across each channel. Comparing the simulation results with those of a fully 3D CRM, it is concluded that the Q3D CRM can reproduce most of the important statistics of the 3D solutions, including the vertical distributions of cloud water and precipitants, vertical transports of potential temperature and water vapor, and the variances and covariances of dynamical variables. The main improvement from a corresponding 2D simulation appears in the surface fluxes and the vorticity transports that cause the mean wind to change. A comparison with a simulation using a coarse

  4. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  5. Development and Validation of Case-Finding Algorithms for the Identification of Patients with ANCA-Associated Vasculitis in Large Healthcare Administrative Databases

    Science.gov (United States)

    Sreih, Antoine G.; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D.; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A.

    2016-01-01

    Purpose: To develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener's, GPA), microscopic polyangiitis (MPA), and eosinophilic granulomatosis with polyangiitis (Churg-Strauss, EGPA). Methods: 250 patients per disease were randomly selected from 2 large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). 16 case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody (ANCA) type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. Results: An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the following diagnoses: alveolar hemorrhage, interstitial lung disease, glomerulonephritis, acute or chronic kidney disease, the encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the ANCA type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA respectively. Conclusion: Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. PMID:27804171

  6. Development of a screening algorithm for Alzheimer's disease using categorical verbal fluency.

    Directory of Open Access Journals (Sweden)

    Yeon Kyung Chi

    We developed a weighted composite score of the categorical verbal fluency test (CVFT) that can screen Alzheimer's disease (AD) more easily and widely than the mini-mental status examination (MMSE). We administered the CVFT using the animal category and the MMSE to 423 community-dwelling mild probable AD patients and their age- and gender-matched cognitively normal controls. To enhance the diagnostic accuracy of the CVFT for AD, we obtained a weighted composite score from subindex scores of the CVFT using a logistic regression model: logit(case) = 1.160 + 0.474 × gender + 0.003 × age + 0.226 × education level - 0.089 × first-half score - 0.516 × switching score - 0.303 × clustering score + 0.534 × perseveration score. The area under the receiver operating curve (AUC) for AD of this composite score was 0.903 (95% CI = 0.883-0.923), larger than that of the age-, gender- and education-adjusted total score of the CVFT (p<0.001). In 100 bootstrapped re-samples, the composite score consistently showed better diagnostic accuracy, sensitivity and specificity for AD than the total score. Although the AUC for AD of the CVFT composite score was slightly smaller than that of the MMSE (0.930, p = 0.006), the CVFT composite score may be a good alternative to the MMSE for screening AD since it is much briefer, cheaper, and more easily applicable over the phone or internet than the MMSE.
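    With the regression coefficients reported in the abstract, the composite score can be computed directly. Note that the coding of gender and education level is not specified in the record, so the example values below are illustrative only, and the probability conversion is simply the standard logistic function:

```python
import math

def cvft_composite(gender, age, education, first_half, switching,
                   clustering, perseveration):
    """Weighted CVFT composite on the logit scale, using the regression
    coefficients reported in the abstract."""
    return (1.160 + 0.474 * gender + 0.003 * age + 0.226 * education
            - 0.089 * first_half - 0.516 * switching
            - 0.303 * clustering + 0.534 * perseveration)

def probability_case(logit):
    """Convert the logit to a predicted probability via the logistic function."""
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical subject; variable coding follows the study's (unstated)
# conventions, so these inputs are placeholders.
score = cvft_composite(gender=1, age=72, education=2, first_half=10,
                       switching=5, clustering=3, perseveration=1)
p = probability_case(score)
```

    A screening cutoff on this probability (or directly on the logit) would then be chosen from the ROC analysis described in the abstract.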

  7. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    Science.gov (United States)

    Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

    To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface based development and/or command line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. The development of interactive multimedia based on auditory, intellectually, repetition in repetition algorithm learning to increase learning outcome

    Science.gov (United States)

    Munir; Sutarno, H.; Aisyah, N. S.

    2018-05-01

    This research aims to find out how the development of interactive multimedia based on auditory, intellectually, and repetition can improve student learning outcomes. The interactive multimedia is developed in five stages. The analysis stage includes a literature study, questionnaires, interviews and observations. The design phase covers database design, flowcharts, storyboards and the repetition-algorithm material, while the development phase covers the creation of a web-based framework. The presentation of the material is adapted to the auditory, intellectually and repetition components of the learning model. Auditory points are obtained by recording narration of the material, which is presented with a variety of intellectual points. The multimedia product was validated by material and media experts. The implementation phase was conducted on grade XI-TKJ2 of SMKN 1 Garut. Based on the gain index, the increase in student learning outcomes in this study is 0.46, which is moderate and attributable to students' interest in using the interactive multimedia, while the multimedia assessment earned 84.36%, which is categorized as very good.

  9. Development of Internet algorithms and some calculations of power plant COP

    Science.gov (United States)

    Ustjuzhanin, E. E.; Ochkov, V. F.; Znamensky, V. E.

    2017-11-01

    The authors have analyzed Internet resources containing information on thermodynamic properties of technically important substances (water, air, etc.). Databases holding such resources are hosted by organizations including the Joint Institute for High Temperatures (Russian Academy of Sciences), Standartinform (Russia), the National Institute of Standards and Technology (USA) and the Institute for Thermal Physics (Siberian Branch of the Russian Academy of Sciences). Currently, a typical Internet resource includes a text file, for example a file of tabulated properties R = (ρ, s, h, …), where ρ is the density, s the entropy and h the enthalpy of a substance. A small number of Internet resources offer further options, allowing a customer: i) to enter the input data Y = (p, T), where p is the pressure and T the temperature, ii) to calculate the property R using an executable program, and iii) to copy the result X = (p, T, ρ, h, s, …). Recently, some researchers (including the authors of this report) have called for software for calculating properties R in the form of an open interactive (OI) Internet resource (a "client function" or "template"). The computing part of an OI resource is linked 1) with a formula applied to calculate the property R and 2) with a Mathcad program, Code_1(R, Y). The interactive part of an OI resource is based on informatics and Internet technologies. We have proposed methods and tools related to this part that allow us: a) to post an OI resource on a remote server, b) to link a client PC with the remote server, and c) to offer a number of options to clients. Among these options are: i) calculating the property R at given arguments Y, ii) copying mathematical formulas, and iii) copying Code_1(R, Y) as a whole. We have developed some OI resources that are focused on

  10. On the Tengiz petroleum deposit previous study

    International Nuclear Information System (INIS)

    Nysangaliev, A.N.; Kuspangaliev, T.K.

    1997-01-01

    A previous study of the Tengiz petroleum deposit is described. Some considerations about the structure of the productive formation and the specific characteristic properties of the petroleum-bearing collectors are presented. Recommendations are given on their detailed study and on using experience from the exploration and development of petroleum deposits that are analogous in the most important geological and industrial parameters. (author)

  11. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique, which is designed around a group architecture. To demonstrate the efficiency of the method, results are given for a standard suite of single- and multi-dimensional optimization functions, for the energies and geometric structures of Lennard-Jones clusters, and for the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, a quantum algorithm providing quadratic speedup over its classical counterpart.

  12. Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics

    Science.gov (United States)

    Fijany, Amir; Scheid, Robert E.

    1989-01-01

    The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
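    The diagonal (Jacobi) preconditioning proposed in the abstract can be sketched with a generic preconditioned conjugate gradient; the matrix below is a small symmetric positive-definite stand-in, not an actual manipulator inertia matrix:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner:
    M = diag(A), so applying M^-1 is an elementwise division."""
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    m_inv = 1.0 / np.diag(A)
    z = m_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # new search direction
        rz = rz_new
    return x

# Stand-in system A x = b (in the paper, A is the joint-space inertia matrix).
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b)
```

    In the forward-dynamics setting, the diagonal of the inertia matrix is available cheaply alongside the quantities the CG iteration already needs, which is the synergism the abstract exploits.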

  13. Development of an algorithm simulator of the planar radioactive source for dosimetric evaluations in accidents with radiopharmaceuticals used in nuclear medicine

    International Nuclear Information System (INIS)

    Claudino, Gutemberg L. Sales; Vieira, Jose Wilson; Leal Neto, Viriato; Lima, Fernando R. Andrade

    2013-01-01

    The objective of this work is to develop an algorithm simulator for the dosimetric evaluation of accidents that may happen in nuclear medicine, using PDFs (probability density functions). Software was developed using C# and WPF technology, in the integrated environment of Microsoft Visual Studio, to organize and present the dosimetric results.

  14. Derivation of a regional active-optical reflectance sensor corn algorithm

    Science.gov (United States)

    Active-optical reflectance sensor (AORS) algorithms developed for in-season corn (Zea mays L.) N management have traditionally been derived using sub-regional scale information. However, studies have shown these previously developed AORS algorithms are not consistently accurate when used on a region...

  15. Developing the fuzzy c-means clustering algorithm based on maximum entropy for multitarget tracking in a cluttered environment

    Science.gov (United States)

    Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing

    2018-01-01

For fast and more effective implementation of tracking multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm, called maximum entropy fuzzy c-means clustering joint probabilistic data association, that combines fuzzy c-means clustering with the joint probabilistic data association (PDA) algorithm. The algorithm uses the membership value to express the probability that a target originated from a measurement. The membership value is obtained through a fuzzy c-means clustering objective function optimized by the maximum entropy principle. To account for the effect of shared measurements, we use a correction factor to adjust the association probability matrix when estimating the state of each target. Because this algorithm avoids splitting the confirmation matrix, it avoids the high computational load of the joint PDA algorithm. The results of simulations and analysis conducted for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can realize MTT quickly and efficiently in a cluttered environment. Further, the performance of the proposed algorithm remains constant with increasing process noise variance. The proposed algorithm has the advantages of efficiency and low computational load, which can ensure optimum performance when tracking multiple targets in a dense cluttered environment.
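
The maximum-entropy membership idea can be made concrete: regularizing the clustering objective with an entropy term gives memberships in closed form as a softmax over negative scaled squared distances. This is a toy sketch of that one step, not the authors' full tracking pipeline; the smoothing parameter `lam` is an illustrative stand-in:

```python
import numpy as np

def me_memberships(dist2, lam=1.0):
    """Maximum-entropy membership weights for measurement-to-target association:
    minimising sum(u * d^2) + lam * sum(u * log u), with each row of u summing
    to 1, yields a softmax in -d^2 / lam (a Gibbs distribution over targets)."""
    w = np.exp(-np.asarray(dist2, dtype=float) / lam)
    return w / w.sum(axis=1, keepdims=True)

# Rows are measurements, columns are targets (squared distances).
# A measurement equidistant from two targets splits its membership evenly.
u = me_memberships([[1.0, 1.0],
                    [0.1, 4.0]])
```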

  16. Development of a control algorithm for teleoperation of DFDF(IMEF/M6 hot cell) maintenance equipment

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Chae Youn; Kwon, Hyuk Jo; Kim, Hak Duck; Jun, Ji Myung; Oh, Hee Geun [Chonbuk National University, Chonju (Korea)

    2002-03-01

Teleoperation has been used to separate operators from the working environment. It is typically used to perform work in inaccessible places such as space, the deep sea, etc., or in accessible but very poor working environments involving explosives, poison gas, radioactivity, etc. It is one of the advanced technology-intensive research areas, with potentially large economic and industrial value. There is a tendency to avoid working in difficult, dirty or dangerous places, particularly in highly radioactive areas, since there always exists a possibility of ending up in a very dangerous situation. Developing and utilizing a teleoperation system will thus minimize the possibility of being directly exposed to such extreme situations. Recently, there has been much research on reflecting force information arising during teleoperation back to the operator in addition to visual information. The reflected force information is used to control the teleoperation system bilaterally, and it will contribute greatly to improving teleoperation safety and working efficiency. This study developed a bilateral force-reflecting control algorithm. It may be used as a key technology of a teleoperation system for maintaining, repairing and dismantling facilities exposed to high radioactivity. 42 refs., 71 figs., 12 tabs. (Author)

  17. Multicore and GPU algorithms for Nussinov RNA folding

    Science.gov (United States)

    2014-01-01

Background One segment of an RNA sequence might be paired with another segment of the same RNA sequence due to the force of hydrogen bonds. This two-dimensional structure is called the RNA sequence's secondary structure. Several algorithms have been proposed to predict an RNA sequence's secondary structure. These algorithms are referred to as RNA folding algorithms. Results We develop cache efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions Our cache efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive straightforward single core code. The multicore version of the cache efficient single core algorithm provides a speedup, relative to the naive single core algorithm, between 7.5 and 14.0 on a 6 core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
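
Nussinov's recurrence itself is compact; the paper's contribution is making it cache-efficient and parallel. A plain single-core reference version, for context:

```python
def nussinov(seq, min_loop=0):
    """Nussinov DP: dp[i][j] = max number of non-crossing base pairs in seq[i..j]."""
    pairs = {('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G'), ('G', 'U'), ('U', 'G')}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):                    # fill shorter subsequences first
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                 # case 1: j unpaired
            for k in range(i, j - min_loop):    # case 2: j pairs with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

The triple loop makes the O(n^3) cost, and the diagonal-by-diagonal fill order that enables the multicore and GPU versions, easy to see.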

  18. Development of a stereolithography (STL) input and computer numerical control (CNC) output algorithm for an entry-level 3-D printer

    Directory of Open Access Journals (Sweden)

    Brown, Andrew

    2014-08-01

Full Text Available This paper presents a prototype Stereolithography (STL) file format slicing and tool-path generation algorithm, which serves as a data front-end for a Rapid Prototyping (RP) entry-level three-dimensional (3-D) printer. Used mainly in Additive Manufacturing (AM), 3-D printers are devices that apply plastic, ceramic, and metal, layer by layer, in all three dimensions on a flat surface (X, Y, and Z axes). 3-D printers, unfortunately, cannot print an object without a special algorithm that is required to create the Computer Numerical Control (CNC) instructions for printing. An STL algorithm therefore forms a critical component of Layered Manufacturing (LM), also referred to as RP. The purpose of this study was to develop an algorithm capable of processing and slicing an STL file or multiple files, generating a tool-path, and finally compiling a CNC file for an entry-level 3-D printer. The prototype algorithm was implemented for an entry-level 3-D printer that utilises the Fused Deposition Modelling (FDM) process, or Solid Freeform Fabrication (SFF) process, an AM technology. Following an experimental method, the full data flow path for the prototype algorithm was developed, starting with STL data files, and then processing the STL data file into a G-code file format by slicing the model and creating a tool-path. This layering method is used by most 3-D printers to turn a 2-D object into a 3-D object. The STL algorithm developed in this study presents innovative opportunities for LM, since it allows engineers and architects to transform their ideas easily into a solid model in a fast, simple, and cheap way. This is accomplished by allowing STL models to be sliced rapidly, effectively, and without error, and finally to be processed and prepared into a G-code print file.
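
The core of any STL slicer is intersecting each triangular facet with a horizontal plane; the resulting segments are then chained into contours and turned into G-code. A minimal sketch of the per-triangle step (degenerate cases, such as vertices lying exactly on the plane, are ignored here):

```python
def slice_triangle(tri, z):
    """Intersect one triangle (three (x, y, z) vertices) with the plane z = const.
    Returns the intersection segment as two (x, y) points, or None if no crossing."""
    pts = []
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        if (a[2] - z) * (b[2] - z) < 0:            # edge straddles the plane
            t = (z - a[2]) / (b[2] - a[2])         # interpolation parameter on the edge
            pts.append((a[0] + t * (b[0] - a[0]),
                        a[1] + t * (b[1] - a[1])))
    return pts if len(pts) == 2 else None

# One facet crossed by the layer plane z = 1.0
seg = slice_triangle(((0.0, 0.0, 0.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0)), z=1.0)
```

Running this over every facet at each layer height yields the unordered segment soup that the contour-chaining and tool-path stages consume.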

  19. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  20. Rapid fish stock depletion in previously unexploited seamounts: the ...

    African Journals Online (AJOL)

    Rapid fish stock depletion in previously unexploited seamounts: the case of Beryx splendens from the Sierra Leone Rise (Gulf of Guinea) ... A spectral analysis and red-noise spectra procedure (REDFIT) algorithm was used to identify the red-noise spectrum from the gaps in the observed time-series of catch per unit effort by ...

  1. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  2. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.
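
As an example of the genre, the planar convex hull, one of the topics listed, can be computed in O(n log n) by Andrew's monotone chain method, a standard fast algorithm (not necessarily the thesis's own variant):

```python
def convex_hull(points):
    """Andrew's monotone chain: sort the points, then build the lower and upper
    hulls with a stack, popping any vertex that makes a non-left turn."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left (counter-clockwise) turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # counter-clockwise hull, no repeated endpoint
```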

  3. Development of an integrative algorithm for the treatment of various stages of full-thickness burns of the first commissure of the hand.

    Science.gov (United States)

    Yuste, Valentin; Delgado, Julio; Agullo, Alberto; Sampietro, Jose Manuel

    2017-06-01

    Burns of the first commissure of the hand can evolve into an adduction contracture of the thumb. We decided to conduct a review of the existing literature on the treatment of full-thickness burns of the first commissure in order to develop a treatment algorithm that integrates the various currently available procedures. A search of the existing literature was conducted, focusing on the treatment of a burn of the first commissure in its chronic and acute phases. A total of 29 relevant articles were selected; 24 focused exclusively on the chronic contracture stage, while 3 focused exclusively on the acute burn stage, and 2 articles studied both stages. A therapeutic algorithm for full-thickness burns of the first commissure of the hand was developed. With this algorithm we sought to relate each degree and stage of the burn with a treatment. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.

  4. Development and Evaluation of Vectorised and Multi-Core Event Reconstruction Algorithms within the CMS Software Framework

    Science.gov (United States)

    Hauth, T.; Innocente, V.; Piparo, D.

    2012-12-01

The processing of data acquired by the CMS detector at LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either through explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented that aim at scalable vectorization and parallelization of the calculations. One of their features is the usage of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorization and multi-threading techniques. Specific utility components have been developed to help vectorization and parallelization. They can easily become part of a larger common library. To conclude, careful measurements are described which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.
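
The abstract's central theme, letting vector units process many data elements per instruction instead of looping one element at a time, can be illustrated outside C++. In NumPy the whole-array form dispatches to compiled, SIMD-friendly loops; this is a rough analogy, not CMSSW code:

```python
import numpy as np

def radii_scalar(xs, ys):
    """One element at a time: the pattern that auto-vectorizers try to transform."""
    return [(x * x + y * y) ** 0.5 for x, y in zip(xs, ys)]

def radii_vector(xs, ys):
    """Whole-array form: a single call that runs as one compiled, vectorizable loop."""
    return np.hypot(xs, ys)

xs = np.array([3.0, 5.0, 8.0])
ys = np.array([4.0, 12.0, 15.0])
```

The structural point carries over: code written over whole arrays (or, in C++, over contiguous structure-of-arrays data) gives the compiler and hardware something the vector units can actually chew on.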

  5. Management of temporary urinary retention after arthroscopic knee surgery in low-dose spinal anesthesia: development of a simple algorithm.

    Science.gov (United States)

    Luger, Thomas J; Garoscio, Ivo; Rehder, Peter; Oberladstätter, Jürgen; Voelckel, Wolfgang

    2008-06-01

In practice, trauma and orthopedic surgery during spinal anesthesia are often performed with routine urethral catheterization of the bladder to prevent overdistention of the bladder. However, use of a catheter has inherent risks. Ultrasound examination of the bladder (Bladderscan) can precisely determine the bladder volume. Thus, the aim of this study was to identify parameters indicative of urinary retention after low-dose spinal anesthesia and to develop a simple algorithm for patient care. This prospective pilot study, approved by the Ethics Committee, enrolled 45 patients after obtaining their written informed consent. Patients who underwent arthroscopic knee surgery received low-dose spinal anesthesia with 1.4 ml 0.5% bupivacaine at level L3/L4. Bladder volume was measured by urinary bladder scanning at baseline, at the end of surgery and up to 4 h later. The incidence of spontaneous urination versus catheterization was assessed and the relative risk for catheterization was calculated. The Mann-Whitney test, chi-square test with Fisher's exact test and the relative odds ratio were performed as appropriate. *P < 0.05. Patients with a bladder volume >300 ml postoperatively had a 6.5-fold greater likelihood of urinary retention. In the management of patients with short-lasting spinal anesthesia for arthroscopic knee surgery we recommend monitoring bladder volume by Bladderscan instead of routine catheterization. Anesthesiologists or nurses under protocol should assess bladder volume preoperatively and at the end of surgery. If bladder volume is >300 ml, catheterization should be performed in the OR. Patients with a bladder volume of 500 ml.

  6. PRGPred: A platform for prediction of domains of resistance gene analogue (RGA in Arecaceae developed using machine learning algorithms

    Directory of Open Access Journals (Sweden)

    MATHODIYIL S. MANJULA

    2015-12-01

Full Text Available Plant disease resistance genes (R-genes) are responsible for initiation of the defense mechanism against various phytopathogens. The majority of plant R-genes are members of very large multi-gene families, which encode structurally related proteins containing nucleotide binding site (NBS) domains and C-terminal leucine-rich repeats (LRR). Other classes possess an extracellular LRR domain, a transmembrane domain and, sometimes, an intracellular serine/threonine kinase domain. R-proteins work in pathogen perception and/or the activation of conserved defense signaling networks. In the present study, sequences representing resistance gene analogues (RGAs) of coconut, arecanut, oil palm and date palm were collected from NCBI, sorted based on domains and assembled into a database. The sequences were analyzed against the PRINTS database to find the conserved domains and motifs present in the RGAs. Based on these domains, we also developed a tool to predict the domains of palm R-genes using various machine learning algorithms. The model files were selected based on the performance of the best classifier in training and testing. All this information is stored and made available in the online 'PRGpred' database and prediction tool.

  7. Algorithm development and simulation outcomes for hypoxic head and neck cancer radiotherapy using a Monte Carlo cell division model

    International Nuclear Information System (INIS)

    Harriss, W.M.; Bezak, E.; Yeoh, E.

    2010-01-01

Full text: A temporal Monte Carlo tumour model, 'Hyp-RT', simulating hypoxic head and neck cancer has been updated and extended to model radiotherapy. The aim is to provide a convenient radiobiological tool for clinicians to evaluate radiotherapy treatment schedules based on many individual tumour properties, including oxygenation. FORTRAN95 and JAVA have been utilised to develop the efficient algorithm, which can propagate 10^8 cells. Epithelial cell kill is affected by dose, oxygenation and proliferative status. Accelerated repopulation (AR) has been modelled by increasing the symmetrical stem cell division probability, and reoxygenation (ROx) has been modelled using random incremental boosts of oxygen to the cell population throughout therapy. Results: The stem cell percentage and the degree of hypoxia dominate tumour growth rate. For conventional radiotherapy, 15-25% more dose was required for hypoxic versus oxic tumours, depending on the time of AR onset (0-3 weeks after the start of treatment). ROx of hypoxic tumours resulted in tumour sensitisation and therefore a dose reduction, of up to 35%, varying with the time of onset. Fig. 1 shows results for all combinations of AR and ROx onset times for the moderate hypoxia case. Conclusions: In hypoxic tumours, accelerated repopulation and reoxygenation affect cell kill in the same manner as when the effects are modelled individually; however, the degree of the effect is altered and therefore the combined result is difficult to predict, providing evidence for the usefulness of computer models. Simulations have quantitatively

  8. Development of Smart Ventilation Control Algorithms for Humidity Control in High-Performance Homes in Humid U.S. Climates

    Energy Technology Data Exchange (ETDEWEB)

    Less, Brennan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Walker, Iain [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ticci, Sara [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-11

Past field research and simulation studies have shown that high performance homes experience elevated indoor humidity levels for substantial portions of the year in humid climates. This is largely the result of lower sensible cooling loads, which reduce the moisture removed by the cooling system. These elevated humidity levels lead to concerns about occupant comfort, health and building durability. Use of mechanical ventilation at the rates specified in ASHRAE Standard 62.2-2013 is often cited as an additional contributor to humidity problems in these homes. Past research has explored solutions, including supplemental dehumidification, cooling system operational enhancements and ventilation system design (e.g., ERV, supply, exhaust, etc.). This project's goal is to develop and demonstrate (through simulations) smart ventilation strategies that can contribute to humidity control in high performance homes. These strategies must maintain IAQ via equivalence with ASHRAE Standard 62.2-2013. To be acceptable, they must not result in excessive energy use. Smart controls are compared with dehumidifier energy and moisture performance. This work explores the development and performance of smart algorithms for control of mechanical ventilation systems, with the objective of reducing high humidity in modern high performance residences. Simulations of DOE Zero-Energy Ready homes were performed using the REGCAP simulation tool. Control strategies were developed and tested using the Residential Integrated Ventilation (RIVEC) controller, which tracks pollutant exposure in real time and controls ventilation to provide an equivalent exposure on an annual basis to homes meeting ASHRAE 62.2-2013. RIVEC is used to increase or decrease the real-time ventilation rate to reduce moisture transport into the home or increase moisture removal. This approach was implemented for no-, one- and two-sensor strategies, paired with a variety of control approaches in six humid climates (Miami

  9. Development of novel algorithm and real-time monitoring ambulatory system using Bluetooth module for fall detection in the elderly.

    Science.gov (United States)

    Hwang, J Y; Kang, J M; Jang, Y W; Kim, H

    2004-01-01

A novel algorithm and a real-time ambulatory monitoring system for fall detection in elderly people are described. Our system comprises an accelerometer, a tilt sensor and a gyroscope; for real-time monitoring, we used Bluetooth. The accelerometer measures kinetic force, while the tilt sensor and gyroscope estimate body posture. We also propose an algorithm for fall detection using signals obtained from the system attached to the chest. To evaluate our system and algorithm, we experimented on three people aged over 26 years. The experiment with four cases (forward fall, backward fall, side fall and sit-stand) was repeated ten times, and the experiment in daily life activity was performed once by each subject. These experiments showed that our system and algorithm could distinguish between falling and daily life activity. Moreover, the accuracy of fall detection is 96.7%. Our system is especially adapted for long-term, real-time ambulatory monitoring of elderly people in emergency situations.
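
The decision logic of such detectors is simple to sketch: an impact spike in the acceleration magnitude followed by a near-horizontal posture. The thresholds below are illustrative assumptions, not the values used by the authors:

```python
def detect_fall(accel_g, tilt_deg, impact_thresh=2.5, tilt_thresh=60.0):
    """Flag a fall when an acceleration spike (impact) appears in the trace and
    the tilt sensor subsequently reports a near-horizontal trunk posture.

    accel_g:  sequence of resultant acceleration magnitudes, in g
    tilt_deg: trunk tilt from vertical after the event, in degrees
    """
    impact = max(accel_g) > impact_thresh   # impact spike detected
    lying = tilt_deg > tilt_thresh          # posture consistent with lying down
    return impact and lying
```

Combining the two cues is what separates a fall from daily-life activities such as sitting down quickly, which may spike the accelerometer but leave the trunk upright.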

  10. Maize sugary enhancer1 (se1) is a presence-absence variant of a previously uncharacterized gene and development of educational videos to raise the profile of plant breeding and improve curricula

    Science.gov (United States)

    Haro von Mogel, Karl J.

Carbohydrate metabolism is a biologically, economically, and culturally important process in crop plants. Humans have selected many crop species such as maize (Zea mays L.) in ways that have resulted in changes to carbohydrate metabolic pathways, and understanding the underlying genetics of this pathway is therefore exceedingly important. A previously uncharacterized starch metabolic pathway mutant, sugary enhancer1 (se1), is a recessive modifier of sugary1 (su1) sweet corn that increases the sugar content while maintaining an appealing creamy texture. This allele has been incorporated into many sweet corn varieties since its discovery in the 1970s; however, testing for the presence of this allele has been difficult. A genetic stock was developed that allowed the presence of se1 to be visually scored in segregating ears, which were used to genetically map se1 to the deletion of a single gene model located on the distal end of the long arm of chromosome 2. An analysis of homology found that this gene is specific to monocots, and the gene is expressed in the endosperm and developing leaf. The se1 allele increased water-soluble polysaccharide (WSP) and decreased amylopectin in maize endosperm, but there was no overall effect of se1 on starch content in mature leaves. This discovery will lead to a greater understanding of starch metabolism, and the marker developed will assist in breeding. There is a present need for increased training of plant breeders to meet the growing needs of the human population. To raise the profile of plant breeding among young students, a series of videos called Fields of Study was developed. These feature interviews with plant breeders who talk about what they do as plant breeders and what they enjoy about their chosen profession. To help broaden the education of students in college biology courses, and to assist with the training of plant breeders, a second video series, Pollination Methods, was developed.
Each video focuses on one or two

  11. Development and investigation of an inverse problem solution algorithm for determination of Ap stars magnetic field geometry

    International Nuclear Information System (INIS)

    Piskunov, N.E.

    1985-01-01

A mathematical formulation of the inverse problem of determining magnetic field geometry from the polarization profiles of spectral lines is given, and a solution algorithm is proposed. A set of model calculations has shown the effectiveness of the algorithm and the high precision of the magnetic star model parameters obtained, as well as the advantages of the inverse-problem method over the commonly used method of interpreting effective field curves.

  12. The Development of Geo-KOMPSAT-2A (GK-2A) Convective Initiation Algorithm over the Korean Peninsula

    Science.gov (United States)

    Kim, H. S.; Chung, S. R.; Lee, B. I.; Baek, S.; Jeon, E.

    2016-12-01

The rapid development of convection can bring heavy rainfall that causes a great deal of damage to society as well as threatening human life. Highly accurate forecasting of strong convection is essential to prevent disasters from such severe weather. Since a geostationary satellite is the most suitable instrument for monitoring a single cloud's lifecycle from formation to extinction, attempts have been made to capture the precursor signals of convective clouds by satellite. Keeping pace with the launch of Geo-KOMPSAT-2A (GK-2A) in 2018, we planned to produce a convective initiation (CI) product, defined as an indicator of cloud objects with the potential to bring heavy precipitation within two hours. The CI algorithm for GK-2A is composed of four stages. The first is to subtract mature cloud pixels (a sort of convective cloud mask) using visible (VIS) albedo and infrared (IR) brightness temperature thresholds. Then, the remaining immature cloud pixels are clustered into cloud objects by watershed techniques. Each clustered object undergoes 'Interest Fields' tests on IR data that reflect cloud microphysical properties at the current time and their temporal changes: cloud depth, updraft strength and production of glaciation. All thresholds of the 'Interest Fields' were optimized for Korean-type convective clouds. Based on the scores from these tests, it is decided whether the cloud object will develop into a convective cell or not. Here we show the results of a case study from this summer over the Korean Peninsula using Himawari-8 VIS and IR data. Radar echo data were used for validation. This study suggests that the CI products of GK-2A will contribute to enhancing the accuracy of very-short-range forecasts over the Korean Peninsula.
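
The 'Interest Fields' scoring stage can be sketched as a set of per-field threshold tests whose pass fraction drives the CI decision. The field names, predicates and thresholds below are illustrative assumptions, not the operational GK-2A values:

```python
def ci_decision(fields, tests, min_frac=0.75):
    """Score one clustered cloud object: each interest field passes if its
    predicate holds, and the object is flagged as CI when enough tests pass."""
    passed = sum(1 for name, predicate in tests.items() if predicate(fields[name]))
    return passed / len(tests) >= min_frac

# Hypothetical interest-field tests (signs and thresholds are invented):
tests = {
    "cloud_top_temp_K":  lambda t: t < 273.0,    # cloud depth proxy
    "cooling_rate_K_15m": lambda r: r < -4.0,    # updraft strength proxy
    "glaciation_diff_K": lambda d: d > -10.0,    # glaciation proxy
    "albedo":            lambda a: a > 0.3,
}
obj = {"cloud_top_temp_K": 268.0, "cooling_rate_K_15m": -6.0,
       "glaciation_diff_K": -5.0, "albedo": 0.45}
```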

  13. Developing Benthic Class Specific, Chlorophyll-a Retrieving Algorithms for Optically-Shallow Water Using SeaWiFS

    Directory of Open Access Journals (Sweden)

    Tara Blakey

    2016-10-01

Full Text Available This study evaluated the ability to improve Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) chl-a retrieval from optically shallow coastal waters by applying algorithms specific to the pixels' benthic class. The form of the Ocean Color (OC) algorithm was assumed for this study. The operational atmospheric correction producing Level 2 SeaWiFS data was retained since the focus of this study was on establishing the benefit from the alternative specification of the bio-optical algorithm. Benthic class was determined through satellite image-based classification methods. Accuracy of the chl-a algorithms evaluated was determined through comparison with coincident in situ measurements of chl-a. The regionally-tuned models that were allowed to vary by benthic class produced more accurate estimates of chl-a than the single, unified regionally-tuned model. Mean absolute percent difference was approximately 70% for the regionally-tuned, benthic class-specific algorithms. Evaluation of the residuals indicated the potential for further improvement to chl-a estimation through finer characterization of benthic environments. Atmospheric correction procedures specialized to coastal environments were recognized as areas for future improvement, as these procedures would improve both classification and algorithm tuning.
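
The OC algorithm form assumed by the study is a polynomial in the log of a blue-to-green band ratio; making it benthic-class-specific then amounts to a per-class coefficient lookup. The coefficients below are invented placeholders, not the study's tuned values:

```python
import math

# Hypothetical per-benthic-class polynomial coefficients (a0, a1, a2);
# the study tunes OC-form coefficients per class against in situ chl-a.
COEFFS = {
    "seagrass": (0.32, -2.5, 1.1),
    "sand":     (0.20, -2.0, 0.8),
}

def chl_a(band_ratio, benthic_class):
    """OC-style polynomial in R = log10(blue/green band ratio), evaluated with
    the coefficient set belonging to the pixel's benthic class."""
    a0, a1, a2 = COEFFS[benthic_class]
    R = math.log10(band_ratio)
    return 10 ** (a0 + a1 * R + a2 * R * R)   # chl-a in mg m^-3
```

The negative a1 term encodes the familiar behaviour that a higher blue/green ratio (clearer water) maps to lower chlorophyll.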

  14. Comparison of switching control algorithms effective in restricting the switching in the neighborhood of the origin

    International Nuclear Information System (INIS)

    Joung, JinWook; Chung, Lan; Smyth, Andrew W

    2010-01-01

The active interaction control (AIC) system, consisting of a primary structure, an auxiliary structure and an interaction element, was proposed to protect the primary structure against earthquakes and winds. The objective of the AIC system in reducing the responses of the primary structure is fulfilled by activating or deactivating the switching between the engagement and the disengagement of the primary and auxiliary structures through the interaction element. The status of the interaction element is controlled by switching control algorithms. The previously developed switching control algorithms require an excessive amount of switching, which is inefficient. In this paper, the excessive switching is restricted by imposing an appropriately designed switching boundary region, where switching is prohibited, on pre-designed engagement–disengagement conditions. Two different approaches are used in designing the newly proposed AID-off and AID-off 2 algorithms. The AID-off 2 algorithm is designed to affect deactivated switching regions explicitly, unlike the AID-off algorithm, which follows the same procedure for designing the engagement–disengagement conditions as the previously developed algorithms, by using the current status of the AIC system. Both algorithms are shown to be effective in reducing the number of switching events triggered by the previously developed AID algorithm under an appropriately selected control sampling period for different earthquakes, but the AID-off 2 algorithm outperforms the AID-off algorithm in reducing the number of switching events.
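
A switching boundary region where switching is prohibited is essentially hysteresis: inside a band between the engagement and disengagement levels, the current status is held, which suppresses rapid re-switching near the threshold. A minimal sketch with illustrative levels, not the paper's AID-off design:

```python
def switch_with_boundary(engaged, response, engage_level=1.0, disengage_level=0.5):
    """One switching decision per control sample: engage above the upper level,
    disengage below the lower level, and hold the previous status inside the
    boundary region between them (switching prohibited)."""
    if response >= engage_level:
        return True                 # engagement condition met
    if response <= disengage_level:
        return False                # disengagement condition met
    return engaged                  # boundary region: keep current status
```

With a single threshold instead of a band, a response hovering near that threshold would toggle the interaction element every sample, which is exactly the excessive switching the boundary region removes.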

  15. An algorithm to assess methodological quality of nutrition and mortality cross-sectional surveys: development and application to surveys conducted in Darfur, Sudan.

    Science.gov (United States)

    Prudhon, Claudine; de Radiguès, Xavier; Dale, Nancy; Checchi, Francesco

    2011-11-09

Nutrition and mortality surveys are the main tools whereby evidence on the health status of populations affected by disasters and armed conflict is quantified and monitored over time. Several reviews have consistently revealed a lack of rigor in many such surveys. We describe an algorithm for analyzing nutrition and mortality survey reports that identifies a comprehensive range of errors which may result in sampling, response, or measurement biases, and that scores survey quality. We apply the algorithm to surveys conducted in Darfur, Sudan. We developed the algorithm based on internationally agreed-upon methods and best practices. Penalties are attributed for a list of errors, and an overall score is built from the summation of penalties accrued by the survey as a whole. To test the algorithm's reproducibility, it was independently applied by three raters to 30 randomly selected survey reports. The algorithm was further applied to more than 100 surveys conducted in Darfur, Sudan. The intraclass correlation coefficient was 0.79 for mortality surveys and 0.78 for nutrition surveys. The overall median quality scores (and ranges) of about 100 surveys conducted in Darfur were 0.60 (0.12-0.93) and 0.675 (0.23-0.86) for mortality and nutrition surveys, respectively. They varied between the organizations conducting the surveys, with no major trend over time. Our study suggests that it is possible to systematically assess survey quality, and it reveals considerable problems with the quality of the nutrition and particularly the mortality surveys conducted during the Darfur crisis.
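
The penalty-summation scheme can be sketched directly: each detected error accrues its weight, and the total maps onto a 0-1 score. The checklist items and weights below are illustrative, not the published instrument:

```python
def quality_score(errors_found, penalties):
    """Sum the penalties accrued by one survey report and map the total onto a
    0-1 quality score (1 = no penalties, 0 = every listed error present)."""
    total = sum(penalties[e] for e in errors_found)
    return 1.0 - total / sum(penalties.values())

penalties = {                       # hypothetical checklist items and weights
    "no_sampling_frame": 3,
    "recall_period_unclear": 2,
    "no_confidence_interval": 2,
    "measurement_not_standardised": 3,
}
score = quality_score({"no_confidence_interval"}, penalties)
```

Weighting the items, rather than counting them, lets severe design flaws (e.g. a missing sampling frame) dominate cosmetic reporting omissions.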

  16. Study on the Development of an Optimal Heat Supply Control Algorithm for Group Energy Apartment Buildings According to the Variation of Outdoor Air Temperature

    Directory of Open Access Journals (Sweden)

    Dong-Kurl Kwak

    2012-05-01

Full Text Available In the present study, we have developed an optimal heat supply algorithm which minimizes the heat loss through the distribution pipeline in a group energy apartment complex. The heating load variation of a group energy apartment building according to the outdoor air temperature was predicted by a correlation obtained from calorimetry measurements of all households in the apartment building. Supply water temperature and mass flow rate were simultaneously controlled to minimize the heat loss rate through the distribution pipeline. A group heating apartment building located in Hwaseong city, Korea, which has 1473 households, was selected as the test building for the present heat supply algorithm. Compared to the original heat supply system, the system adopting the proposed control algorithm reduced the heat loss rate by 10.4%.
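
A common concrete form of such a control law is an outdoor-reset schedule: the supply water temperature rises linearly as the outdoor temperature falls between a no-load point and the design point, with mass flow trimmed separately. The numbers below are illustrative, not the paper's measured correlation:

```python
def supply_setpoint(t_out, design_out=-15.0, base_out=18.0,
                    t_supply_min=40.0, t_supply_max=70.0):
    """Outdoor-reset sketch: supply water temperature (deg C) scheduled linearly
    between its min (no heating load) and max (design load) as the outdoor
    temperature falls from base_out to design_out."""
    frac = (base_out - t_out) / (base_out - design_out)   # 0 = no load, 1 = design load
    frac = min(max(frac, 0.0), 1.0)                       # clamp to [0, 1]
    return t_supply_min + frac * (t_supply_max - t_supply_min)
```

Lowering the supply temperature at mild outdoor conditions is precisely what shrinks the temperature difference driving pipeline losses, which is the mechanism behind the reported 10.4% reduction.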

  17. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    Science.gov (United States)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm, designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control, was developed and flight tested. The algorithm provides a three-dimensional path with terminal-area time constraints (four-dimensional) for an airplane to make an idle-thrust, clean-configured (landing gear up, flaps zero, and speed brakes retracted) descent that arrives at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described, and the results of flight tests flown with the Terminal Configured Vehicle airplane are presented.
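    The core geometric idea, finding where an idle-thrust descent must begin so the airplane reaches the metering fix on time, can be sketched with illustrative numbers; the actual algorithm also corrects for gross weight, wind, and nonstandard pressure and temperature, which this sketch omits.

    ```python
    # Ground distance (nautical miles) covered during a constant-rate,
    # constant-ground-speed descent: time to descend times ground speed.
    # All inputs are illustrative, not flight-test values.
    def top_of_descent_distance_nmi(cruise_alt_ft, fix_alt_ft,
                                    ground_speed_kt, descent_rate_fpm):
        descent_time_min = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
        return ground_speed_kt * descent_time_min / 60.0

    # Example: descend 25,000 ft at 2,500 ft/min while covering ground
    # at 420 kt -> the descent must begin 70 nmi before the fix.
    dist = top_of_descent_distance_nmi(35000, 10000, 420, 2500)
    ```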

  18. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    Science.gov (United States)

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on the modern Graphics Processing Unit (GPU). The presented algorithm improves on our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbor (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point to adaptively determine the power parameter; the desired prediction value at the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of two stages: kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the use of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
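    A pure-Python sketch of the adaptive-IDW prediction step, assuming a hypothetical rule mapping neighbor distances to the power parameter; the paper's GPU kernels and grid-based kNN search are omitted, with a brute-force sort standing in for the kNN stage.

    ```python
    # points: list of (px, py, value). Returns the interpolated value at (x, y).
    def aidw_predict(x, y, points, k=3):
        # kNN stage (brute force here; the paper uses a GPU even-grid search).
        dists = sorted(((((px - x) ** 2 + (py - y) ** 2) ** 0.5, v)
                        for px, py, v in points))
        nearest = dists[:k]
        if nearest[0][0] == 0.0:          # exact hit on a data point
            return nearest[0][1]
        # Adaptive power: a hypothetical rule tying the power parameter to
        # local point density (mean neighbor distance), capped at 3.0.
        mean_d = sum(d for d, _ in nearest) / k
        power = 1.0 + min(2.0, mean_d)
        # Weighted-interpolation stage: classic inverse-distance weights.
        weights = [1.0 / d ** power for d, _ in nearest]
        return sum(w * v for w, (_, v) in zip(weights, nearest)) / sum(weights)
    ```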

  19. Some computational challenges of developing efficient parallel algorithms for data-dependent computations in thermal-hydraulics supercomputer applications

    International Nuclear Information System (INIS)

    Woodruff, S.B.

    1994-01-01

    The Transient Reactor Analysis Code (TRAC), which features a two-fluid treatment of thermal-hydraulics, is designed to model transients in water reactors and related facilities. One of the major computational costs associated with TRAC and similar codes is calculating constitutive coefficients. Although the formulations for these coefficients are local, the costs are flow-regime- or data-dependent; i.e., the computations needed for a given spatial node often vary widely as a function of time. Consequently, a fixed, uniform assignment of nodes to parallel processors results in degraded computational efficiency due to poor load balancing. A standard method for treating data-dependent models on vector architectures has been to use gather operations (or indirect addressing) to sort the nodes into subsets that (temporarily) share a common computational model. However, this method is not effective on distributed-memory data-parallel architectures, where indirect addressing involves expensive communication overhead. Another serious problem with this method involves software engineering challenges in the areas of maintainability and extensibility; for example, an implementation that was hand-tuned to achieve good computational efficiency would have to be rewritten whenever the decision tree governing the sorting was modified. Using an example based on the calculation of the wall-to-liquid and wall-to-vapor heat-transfer coefficients for three nonboiling flow regimes, we describe how the Fortran 90 WHERE construct and automatic inlining of functions can be used to ameliorate this problem while improving both efficiency and software engineering. Unfortunately, a general automatic solution to the load-balancing problem associated with data-dependent computations is not yet available for massively parallel architectures. We discuss why developers should either wait for such solutions or consider alternative numerical algorithms, such as a neural network
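    The WHERE-construct idea translates naturally to array masking: each node's coefficient is computed with the model for its flow regime, selected by mask rather than by gathering nodes into sorted subsets. A sketch in NumPy, with hypothetical placeholder formulas standing in for TRAC's constitutive models:

    ```python
    import numpy as np

    regime = np.array([0, 1, 2, 0, 1])          # flow regime per spatial node
    velocity = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

    # np.select plays the role of Fortran 90's WHERE/ELSEWHERE: one branch
    # per flow regime, applied elementwise without reordering the nodes.
    # The three formulas below are illustrative placeholders only.
    htc = np.select(
        [regime == 0, regime == 1, regime == 2],
        [10.0 * velocity, 5.0 * velocity ** 2, 20.0 + velocity],
    )
    ```

    Adding a regime means adding one condition/formula pair, rather than rewriting a hand-tuned gather/scatter pipeline, which is the maintainability point the abstract makes.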

  20. SEBAL-A: A Remote Sensing ET Algorithm that Accounts for Advection with Limited Data. Part I: Development and Validation

    Directory of Open Access Journals (Sweden)

    Mcebisi Mkhwanazi

    2015-11-01

    Full Text Available The Surface Energy Balance Algorithm for Land (SEBAL) is one of the remote sensing (RS) models that are increasingly being used to determine evapotranspiration (ET). SEBAL is a widely used model, mainly because it requires minimal weather data and no prior knowledge of surface characteristics. However, it has been observed to underestimate ET under advective conditions because it disregards advection as another source of energy available for evaporation. A modified SEBAL model was therefore developed in this study. An advection component, which is absent in the original SEBAL, was introduced such that the energy available for evapotranspiration was the sum of net radiation and advected heat energy. The improved SEBAL model was termed SEBAL-Advection, or SEBAL-A. An important aspect of the improved model is the estimation of advected energy from minimal weather data. While other RS models require hourly weather data to account for advection (e.g., METRIC), SEBAL-A only requires daily averages of limited weather data, making it appropriate even in areas where weather data at short time steps may not be available. In this study, the original SEBAL model was first evaluated under advective and non-advective conditions near Rocky Ford in southeastern Colorado, a semi-arid area where afternoon advection is a common occurrence. The SEBAL model was found to incur large errors when there was advection (indicated by higher wind speed and warm, dry air). SEBAL-A was then developed and validated in the same area under standard surface conditions, described as healthy alfalfa 40–60 cm tall and without water stress. ET values estimated using the original and modified SEBAL were compared to ET values measured with a large weighing lysimeter. When SEBAL ET was compared to SEBAL-A ET values, the latter showed improved performance, with the ET Mean Bias Error (MBE) reduced from −17
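    The energy-balance modification can be sketched as follows: SEBAL obtains latent heat as the residual of the surface energy balance, and SEBAL-A enlarges the available energy by an advection term. The advection proxy here is a hypothetical wind/dryness term, not the paper's calibrated formulation; all fluxes are in W/m².

    ```python
    # Original SEBAL residual: latent heat = net radiation - soil heat
    # flux - sensible heat flux.
    def latent_heat_sebal(rn, g, h):
        return rn - g - h

    # Hypothetical advection proxy: advected energy grows with wind speed
    # (m/s) and vapor pressure deficit (kPa); coeff is illustrative.
    def advected_energy(wind_speed, vpd, coeff=5.0):
        return coeff * wind_speed * vpd

    # SEBAL-A: the advected term is added to the available energy.
    def latent_heat_sebal_a(rn, g, h, wind_speed, vpd):
        return (rn + advected_energy(wind_speed, vpd)) - g - h
    ```

    Under calm, humid conditions the advection term vanishes and SEBAL-A reduces to the original SEBAL residual.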

  1. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  2. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  3. Phenotyping for patient safety: algorithm development for electronic health record based automated adverse event and medical error detection in neonatal intensive care.

    Science.gov (United States)

    Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre

    2014-01-01

    Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (i.e., IV infiltrations, and narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected, demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
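    The reported metrics follow from a standard 2×2 confusion table comparing algorithm flags against the adjudicated reference standard; a sketch with illustrative counts, not the study's data:

    ```python
    # Standard detection metrics from true/false positive and negative counts.
    def sensitivity(tp, fn):
        """Fraction of real events the detector flags (recall)."""
        return tp / (tp + fn)

    def ppv(tp, fp):
        """Fraction of flags that are real events (precision)."""
        return tp / (tp + fp)

    def specificity(tn, fp):
        """Fraction of non-events correctly left unflagged."""
        return tn / (tn + fp)

    # Illustrative counts only: 9 events caught, 1 missed, 3 false alarms,
    # 90 non-events correctly passed over.
    sens, prec, spec = sensitivity(9, 1), ppv(9, 3), specificity(90, 3)
    ```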

  4. Development and analysis of an economizer control strategy algorithm to promote an opportunity for energy savings in air conditioning installations

    Energy Technology Data Exchange (ETDEWEB)

    Neto, Jose H.M.; Azevedo, Walter L. [Centro Federal de Educacao Tecnologica de Minas Gerais (CEFET), Belo Horizonte, MG (Brazil). Dept. de Engenharia Mecanica]. E-mail: henrique@daem.des.cefetmg.br

    2000-07-01

    This work presents a control strategy algorithm termed an enthalpy economizer. The objective of this algorithm is to determine the appropriate fractions of outside and return air flow rates entering a cooling coil based on the analysis of the outside, return, and supply air enthalpies, rather than on the dry-bulb temperatures alone. The proposed algorithm predicts the opening positions of the outside and return air dampers that yield the lowest mixed-air enthalpy. First, the psychrometric properties of the outside and return air are calculated from actual measurements of the dry- and wet-bulb temperatures. Then, three distinct cases are analyzed: the enthalpy of the outside air is lower than the enthalpy of the supply air (free cooling); the enthalpy of the outside air is higher than the enthalpy of the return air; and the enthalpy of the outside air is lower than the enthalpy of the return air but higher than the enthalpy of the supply air. Different outside air conditions were selected to represent typical weather data of Brazilian cities, as well as typical return air conditions. It was found that the enthalpy control strategy could provide an opportunity for energy savings mainly during mild nights and wintertime periods, as well as during warm afternoons and summertime periods, depending on the outside air relative humidity. The proposed algorithm works well and can be integrated into commercial automation software to reduce energy consumption and electricity demand. (author)
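    The three-case damper decision can be sketched as follows; enthalpies are in kJ/kg, and the minimum outside-air fraction for ventilation is a hypothetical value, not a figure from the paper.

    ```python
    # Outside-air damper fraction that minimizes mixed-air enthalpy,
    # given outside, return, and supply air enthalpies (kJ/kg).
    def outside_air_fraction(h_out, h_return, h_supply, min_oa=0.15):
        if h_out <= h_supply:
            # Free cooling: blend outside and return air so the mixture
            # lands on the supply enthalpy:
            #   f*h_out + (1 - f)*h_return = h_supply
            frac = (h_return - h_supply) / (h_return - h_out)
            return min(1.0, max(min_oa, frac))
        if h_out >= h_return:
            # Outside air is the worse source: admit only the
            # ventilation minimum.
            return min_oa
        # Outside air is better than return air but cannot reach the
        # supply enthalpy alone: use all outside air and let the coil
        # remove the remainder.
        return 1.0
    ```

    Note the decision hinges on enthalpy, not dry-bulb temperature, so humid-but-mild outside air is correctly rejected when its total heat content exceeds that of the return air.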

  5. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    Science.gov (United States)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  6. Developing cross entropy genetic algorithm for solving Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP)

    Science.gov (United States)

    Paramestha, D. L.; Santosa, B.

    2018-04-01