WorldWideScience

Sample records for newly developed algorithm

  1. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

    Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of the lack of sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put in considerable effort to construct it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses.

  2. Development of hybrid artificial intelligent based handover decision algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Aibinu

    2017-04-01

    The possibility of seamless handover remains a mirage despite the plethora of existing handover algorithms. The underlying factor responsible for this has been traced to the handover decision module in the handover process. Hence, in this paper, a novel hybrid artificial-intelligence handover decision algorithm has been developed. The developed model is a hybrid of an Artificial Neural Network (ANN) based prediction model and fuzzy logic. On accessing the network, the Received Signal Strength (RSS) was acquired over a period of time to form time series data. The data were then fed to the newly proposed k-step-ahead ANN-based RSS prediction system for estimation of the prediction model coefficients. The synaptic weights and adaptive coefficients of the trained ANN were then used to compute the k-step-ahead ANN-based RSS prediction model coefficients. The predicted RSS value was later codified as fuzzy sets and, in conjunction with other measured network parameters, fed into the fuzzy logic controller in order to finalize the handover decision process. The performance of the newly developed k-step-ahead ANN-based RSS prediction algorithm was evaluated using simulated and real data acquired from available mobile communication networks. Results obtained in both cases show that the proposed algorithm is capable of predicting the RSS value ahead to within about ±0.0002 dB. The cascaded effect of the complete handover decision module was also evaluated; the results show that the newly proposed hybrid approach reduces the ping-pong effect associated with other handover techniques.
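    The k-step-ahead recursion at the heart of the prediction stage can be illustrated with a much simpler linear autoregressive stand-in for the trained ANN; the model, threshold, and data below are illustrative assumptions, not the paper's.

```python
import numpy as np

def fit_ar(series, order=3):
    """Least-squares fit of an order-p autoregressive model.
    A linear stand-in for the paper's ANN predictor (illustrative only)."""
    X = np.array([series[i:i + order] for i in range(len(series) - order)])
    y = np.array(series[order:])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_k_ahead(series, coeffs, k):
    """Recursively apply the one-step model k times (k-step-ahead forecast),
    feeding each prediction back into the input window."""
    window = list(series[-len(coeffs):])
    for _ in range(k):
        nxt = float(np.dot(coeffs, window))
        window = window[1:] + [nxt]
    return window[-1]

# Synthetic linearly decaying RSS trace (dBm), a toy stand-in for field data.
rss = [-60.0 - 0.5 * t for t in range(40)]
coeffs = fit_ar(rss, order=3)
rss_future = predict_k_ahead(rss, coeffs, k=5)
# Crisp stand-in for the fuzzy handover decision: trigger below a threshold.
handover = rss_future < -80.0
```

The fuzzy logic controller in the paper replaces the crisp threshold on the last line, combining the predicted RSS with other network parameters.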

  3. Development of information preserving data compression algorithm for CT images

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio

    1989-01-01

    Although digital imaging techniques in radiology are developing rapidly, problems arise in the archival storage and communication of image data. This paper reports on a new information-preserving data compression algorithm for computed tomographic (CT) images. The algorithm consists of the following five processes: 1. Pixels surrounding the human body showing CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. Image data obtained with the above process are decomposed into bit planes. 5. The bit state transitions in each bit plane are encoded by run length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34, and 4.40 respectively. (author)
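    Steps 1, 2, 4, and 5 of the pipeline can be sketched as follows; step 3's custom code is omitted and the data are toy values, so this illustrates the idea rather than the author's implementation.

```python
def diff_encode(row):
    """Step 2: encode each pixel as the difference from its left neighbour."""
    out = [int(row[0])]
    out += [int(b) - int(a) for a, b in zip(row[:-1], row[1:])]
    return out

def diff_decode(deltas):
    """Inverse of diff_encode: cumulative sum restores the original pixels."""
    vals = [deltas[0]]
    for d in deltas[1:]:
        vals.append(vals[-1] + d)
    return vals

def run_length_encode(bits):
    """Step 5: run-length encode a bit plane as (bit, run-length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

# Toy "CT row" in Hounsfield units; air pixels (< -900 HU) are dropped first.
row = [-1000, -1000, 40, 42, 41, 41, 300]
body = [v for v in row if v >= -900]           # step 1: eliminate air pixels
deltas = diff_encode(body)                     # step 2: difference encoding
plane0 = [(d & 1) for d in deltas]             # step 4: lowest bit plane
runs = run_length_encode(plane0)               # step 5: run-length coding
```

Because difference encoding is exactly invertible, the scheme is information-preserving: `diff_decode(deltas)` returns the original body pixels.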

  4. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Applications and Research (STAR) provides technical support for the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning the atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. The AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  5. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. New to this edition: Chapter 9. 153 black-and-white illus. 23 tables.

  6. Sizing Performance of the Newly Developed Eddy Current System

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Chan Hee; Lee, Hee Jong; Yoo, Hyun Ju; Moon, Gyoon Young; Lee, Tae Hoon [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of)]

    2013-10-15

    This paper describes the comparison results of sizing performance for two eddy current systems. The KHNP developed a new eddy current testing system for the inspection of steam generator tubing in domestic nuclear power plants. The equivalency assessment of the newly developed system against the EPRI-qualified system was already carried out. In this paper, the depth-sizing performance for artificial flaws was compared between the two systems. The results show that the newly developed system is in good agreement with the qualified system. Therefore, it is expected that the newly developed eddy current system can be used for the inspection of steam generator tubing in nuclear power plants. There are several non-destructive examination (NDE) methods for the inspection of components in nuclear power plants, such as ultrasonic, radiographic, and eddy current testing. Eddy current testing is widely used for the inspection of steam generator (SG) tubing because it offers a relatively low-cost approach for high-speed, large-scale testing of metallic materials in high-pressure and high-temperature engineering systems. The Korea Hydro and Nuclear Power Co., Ltd. (KHNP) developed an eddy current testing system for the inspection of steam generator tubing in nuclear power plants. This system includes not only hardware but also software, such as the frequency generator and the data acquisition and analysis program. The foreign eddy current system developed by ZETEC is currently used for the inspection of steam generator tubing in domestic nuclear power plants. The equivalency assessment between the two systems was already carried out in accordance with the EPRI steam generator examination guidelines.

  7. Recent developments in structure-preserving algorithms for oscillatory differential equations

    CERN Document Server

    Wu, Xinyuan

    2018-01-01

    The main theme of this book is recent progress in structure-preserving algorithms for solving initial value problems of oscillatory differential equations arising in a variety of research areas, such as astronomy, theoretical physics, electronics, quantum mechanics and engineering. It systematically describes the latest advances in the development of structure-preserving integrators for oscillatory differential equations, such as structure-preserving exponential integrators, functionally fitted energy-preserving integrators, exponential Fourier collocation methods, trigonometric collocation methods, and symmetric and arbitrarily high-order time-stepping methods. Most of the material presented here is drawn from the recent literature. Theoretical analysis of the newly developed schemes shows their advantages in the context of structure preservation. All the new methods introduced in this book are proven to be highly effective compared with the well-known codes in the scientific literature. This book also addre...

  8. Mathematical algorithm development and parametric studies with the GEOFRAC three-dimensional stochastic model of natural rock fracture systems

    Science.gov (United States)

    Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.

    2014-06-01

    This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
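    The P32 measure that the new intensity algorithms optimize is simply cumulative fracture area per unit volume. A minimal sketch, with hypothetical polygonal fractures inside a sampling block:

```python
import numpy as np

def polygon_area(vertices):
    """Area of a planar 3-D polygon via the vector (cross-product) shoelace
    formula; translation-invariant, so the polygon's plane offset is irrelevant."""
    v = np.asarray(vertices, dtype=float)
    total = np.zeros(3)
    for i in range(len(v)):
        total += np.cross(v[i], v[(i + 1) % len(v)])
    return 0.5 * np.linalg.norm(total)

def p32(fracture_areas, region_volume):
    """Fracture intensity P32: cumulative fracture area per unit volume (m^2/m^3)."""
    return sum(fracture_areas) / region_volume

# Two hypothetical polygonal fractures inside a 10 m x 10 m x 10 m block.
tri = [(0, 0, 0), (4, 0, 0), (0, 3, 0)]              # area 6 m^2
quad = [(0, 0, 2), (5, 0, 2), (5, 4, 2), (0, 4, 2)]  # area 20 m^2
intensity = p32([polygon_area(tri), polygon_area(quad)], region_volume=1000.0)
```

In GEOFRAC the polygons come from the Poisson-Voronoi tessellation of stochastically generated planes; here they are fixed toy shapes.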

  9. A newly developed snack effective for enhancing bone volume

    Directory of Open Access Journals (Sweden)

    Hayashi Hidetaka

    2009-07-01

    Abstract. Background: The incidence of primary osteoporosis is higher in Japan than in the USA and European countries. Recently, the importance of preventive medicine has gradually been recognized in the field of orthopaedic surgery, with the concept that peak bone mass should be increased in childhood as much as possible for the prevention of osteoporosis. Against this background, we have developed a new bean snack with the aim of improving bone volume loss. In this study, we examined the effects of the newly developed snack on bone volume and density in osteoporosis model mice. Methods: Orchiectomy (ORX) and ovariectomy (OVX) were performed on twelve-week-old C57BL/6J mice (Jackson Laboratory, Bar Harbor, ME, USA). We prepared and fed three types of powder diet: a normal calcium diet (NCD, Ca: 0.9%, Clea Japan Co., Tokyo, Japan), a low calcium diet (LCD, Ca: 0.63%, Clea Japan Co.), and a special diet (SCD, Ca: 0.9%). Eighteen weeks after surgery, all the animals were sacrificed and prepared for histomorphometric analysis to quantify bone density and bone mineral content. Results: Histomorphometric examination revealed that the SCD enhanced bone volume irrespective of age and sex. Bone density was increased significantly in osteoporosis model mice fed the newly developed snack as compared with the control mice. Bone mineral content was also enhanced significantly. These effects were observed in both sexes. Conclusion: The newly developed bean snack is highly effective for the improvement of bone volume loss irrespective of sex. The newly developed snack may be a useful preventive measure for Japanese people whose bone mineral density values are less than ideal.

  10. Mapping subsurface in proximity to newly-developed sinkhole along roadway.

    Science.gov (United States)

    2013-02-01

    MS&T acquired electrical resistivity tomography profiles in immediate proximity to a newly-developed sinkhole in Nixa, Missouri. The sinkhole has closed a well-traveled municipal roadway and threatens proximal infrastructure. The intent of this inves...

  11. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  12. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
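    The wavelet fusion idea, combining approximation coefficients by averaging and detail coefficients by keeping the larger magnitude, can be sketched with a single-level Haar transform. The fusion rules below are common choices for illustration, not necessarily the exact ones used in the report.

```python
import numpy as np

def haar2d(x):
    """Single-level 2-D Haar DWT: approximation + 3 detail subbands."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the approximations, keep the
    larger-magnitude detail coefficient (preserves edges from either image)."""
    a1, h1, v1, d1 = haar2d(img1)
    a2, h2, v2, d2 = haar2d(img2)
    pick = lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return ihaar2d((a1 + a2) / 2, pick(h1, h2), pick(v1, v2), pick(d1, d2))

ct = np.arange(16.0).reshape(4, 4)   # toy stand-in for one image band
fused_same = fuse(ct, ct)            # fusing an image with itself returns it
```

Spectral/spatial preservation comes from the detail-selection rule: wherever one sensor has the stronger local edge, its coefficients survive into the fused result.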

  13. Detection of surface algal blooms using the newly developed algorithm surface algal bloom index (SABI)

    Science.gov (United States)

    Alawadi, Fahad

    2010-10-01

    Quantifying ocean colour properties has evolved over the past two decades from merely detecting biological activity to estimating chlorophyll concentration using optical satellite sensors like MODIS and MERIS. The production of chlorophyll spatial distribution maps is a good indicator of plankton biomass (primary production) and is useful for tracing oceanographic currents, jets and blooms, including harmful algal blooms (HABs). Depending on the type of HABs involved and the environmental conditions, if their concentration rises above a critical threshold, they can impact the flora and fauna of the aquatic habitat through the introduction of the so-called "red tide" phenomenon. The estimation of chlorophyll concentration is derived from quantifying the spectral relationship between the blue and the green bands reflected from the water column. This spectral relationship is employed in the standard ocean colour chlorophyll-a (Chlor-a) product, but is incapable of detecting certain macro-algal species that float near to or at the water surface in the form of dense filaments or mats. The ability to accurately identify algal formations that sometimes appear as oil spill look-alikes in satellite imagery contributes towards the reduction of false-positive incidents arising from oil spill monitoring operations. Algal formations that occur in relatively high concentrations may experience, as in land vegetation, what is known as the "red-edge" effect. This phenomenon occurs at the highest reflectance slope between the maximum absorption in the red due to the surrounding ocean water and the maximum reflectance in the infra-red due to the photosynthetic pigments present in the surface algae. A new algorithm, termed the surface algal bloom index (SABI), has been proposed to delineate the spatial distributions of floating micro-algal species such as cyanobacteria, or exposed inter-tidal vegetation such as seagrass.
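    A band-ratio index of this kind can be sketched minimally as follows, assuming the commonly cited four-band form of SABI that contrasts the NIR red edge against the blue and green water background; the reflectance values are illustrative, not from the paper.

```python
def sabi(nir, red, blue, green):
    """Surface algal bloom index, assuming its commonly cited four-band
    form: the red-edge contrast (NIR - red) normalised by the blue + green
    background reflectance. Inputs are surface reflectances."""
    return (nir - red) / (blue + green)

# Illustrative reflectances: a dense surface bloom is bright in the NIR
# (vegetation-like red edge), while clear water absorbs strongly there.
bloom_pixel = sabi(nir=0.30, red=0.05, blue=0.06, green=0.09)
water_pixel = sabi(nir=0.02, red=0.03, blue=0.10, green=0.08)
```

High positive values flag floating vegetation-like targets; open water sits near or below zero, which is what makes the index useful for screening oil-spill look-alikes.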

  14. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  15. Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms

    Directory of Open Access Journals (Sweden)

    Jose R. Celaya

    2013-01-01

    As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize the information they supply becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. This paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for the electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11, and is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator.

  16. Development of a hybrid energy storage sizing algorithm associated with the evaluation of power management in different driving cycles

    International Nuclear Information System (INIS)

    Masoud, Masih Tehrani; Mohammad Reza, Ha'iri Yazdi; Esfahanian, Vahid; Sagha, Hossein

    2012-01-01

    In this paper, a hybrid energy storage sizing algorithm for electric vehicles is developed to achieve a semi-optimum cost-effective design. Using the developed algorithm, a driving cycle is divided into its micro-trips and the power and energy demands in each micro-trip are determined. The battery size is estimated such that the battery fulfills the power demands, and the ultracapacitor (UC) energy (or the number of UC modules) is assessed such that the UC delivers the maximum energy demand among the different micro-trips of a driving cycle. Finally, a design factor, which shows the power of the hybrid energy storage control strategy, is utilized to evaluate newly designed control strategies. Using the developed algorithm, energy saving loss, driver satisfaction criteria, and battery life criteria are calculated using a feed-forward dynamic modeling software program and are utilized for comparison among different energy storage candidates. This procedure is applied to the hybrid energy storage sizing of a series hybrid electric city bus for the Manhattan and Tehran driving cycles. Results show that a more aggressive driving cycle (Manhattan) requires a more expensive energy storage system and a more sophisticated energy management strategy.
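    The micro-trip decomposition and sizing rule described above can be sketched as follows; splitting at standstill and the kW/kJ figures are illustrative assumptions, not the paper's cycle data.

```python
def split_microtrips(speed, power):
    """Split a driving cycle into micro-trips at standstill (speed == 0)."""
    trips, current = [], []
    for s, p in zip(speed, power):
        if s == 0:
            if current:
                trips.append(current)
                current = []
        else:
            current.append(p)
    if current:
        trips.append(current)
    return trips

def size_storage(speed, power, dt=1.0):
    """Illustrative sizing rule following the abstract: the battery is rated
    for the peak power demand, the ultracapacitor for the largest per-
    micro-trip energy demand (power in kW, dt in s, energy in kJ)."""
    trips = split_microtrips(speed, power)
    battery_kw = max(max(t) for t in trips)
    uc_kj = max(sum(p * dt for p in t if p > 0) for t in trips)
    return battery_kw, uc_kj

# Toy 10-second cycle: two micro-trips separated by a stop (negative power
# is regenerative braking and is excluded from the energy demand).
speed = [0, 10, 20, 10, 0, 0, 15, 30, 15, 0]
power = [0, 20, 35, -5, 0, 0, 25, 50, -10, 0]
battery_kw, uc_kj = size_storage(speed, power)
```

A real cycle such as Manhattan would simply feed longer speed/power traces through the same decomposition.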

  17. High acceptability of a newly developed urological practical skills training program.

    NARCIS (Netherlands)

    Vries, A.H. de; Luijk, S.J. van; Scherpbier, A.J.J.A.; Hendrikx, A.J.M.; Koldewijn, E.L.; Wagner, C.; Schout, B.M.A.

    2015-01-01

    Background: Benefits of simulation training are widely recognized, but its structural implementation into urological curricula remains challenging. This study aims to gain insight into current and ideal urological practical skills training and presents the outline of a newly developed skills

  19. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    CERN Document Server

    Martin-haugh, Stewart; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single-stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution, significantly faster execution times than achieved during Run 1, and better efficiency. The performance and timing of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The o...

  20. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    CERN Document Server

    Martin-haugh, Stewart; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single-stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a FastTrackFinder algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution, significantly faster execution times than achieved during Run 1, and better efficiency. The timings of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The online deployment and co...

  1. Algorithms for Academic Search and Recommendation Systems

    DEFF Research Database (Denmark)

    Amolochitis, Emmanouil

    2014-01-01

    In this work we present novel algorithms for academic search, recommendation and association rules mining. In the first part of the work we introduce a novel hierarchical heuristic scheme for re-ranking academic publications. The scheme is based on the hierarchical combination of a custom implementation of the term frequency heuristic, a time-depreciated citation score and a graph-theoretic computed score that relates the paper's index terms with each other. In the second part we describe the design of a hybrid recommender ensemble (user, item and content based); the newly introduced algorithms are part of a developed Movie Recommendation system, the first such system to be commercially deployed in Greece by a major Triple Play services provider. In the third part of the work we present the design of a quantitative association rule mining algorithm. The introduced mining algorithm processes...
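    A time-depreciated citation score of the kind used in the re-ranking scheme can be sketched as an exponentially age-discounted citation count; the half-life and decay form below are assumptions for illustration, not the thesis's exact weighting.

```python
import math

def depreciated_citation_score(citation_years, current_year, half_life=5.0):
    """Illustrative time-depreciated citation score: each citation is
    exponentially discounted by its age, so a citation loses half its
    weight every `half_life` years (a hypothetical parameter)."""
    decay = math.log(2) / half_life
    return sum(math.exp(-decay * (current_year - y)) for y in citation_years)

# A paper cited in 2010, 2012 and 2014, scored in 2014: the most recent
# citation contributes exactly 1.0, older ones progressively less.
score = depreciated_citation_score([2010, 2012, 2014], current_year=2014)
```

Compared with a raw citation count of 3, the depreciated score rewards papers whose citations are recent, which is the point of the heuristic.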

  2. Determination of Pavement Rehabilitation Activities through a Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Sangyum Lee

    2013-01-01

    This paper presents a mathematical programming model for optimal pavement rehabilitation planning. The model maximizes the rehabilitation area through a newly developed permutation algorithm, based on the procedures outlined in the harmony search (HS) algorithm. Additionally, the proposed algorithm provides an optimal solution method for the problem of multi-location rehabilitation activities on pavement structures, using empirical deterioration and rehabilitation effectiveness models, subject to a limited maintenance budget. Thus, nonlinear pavement performance and rehabilitation activity decision models were used to maximize the objective function of the rehabilitation area within a limited budget, through the permutation algorithm. Our results showed that the heuristic permutation algorithm provided a good optimum in terms of maximizing the rehabilitation area, compared with the worst-first maintenance method currently used in Seoul.
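    A harmony-search step adapted to permutations, as the abstract describes, might look like the following sketch. The parameter names hmcr/par follow standard HS terminology; the exact operators in the paper may differ.

```python
import random

def harmony_permutation(memory, hmcr=0.9, par=0.3, rng=random):
    """Sketch of a harmony-search step for permutation problems: build a new
    ordering by drawing positions from the harmony memory with probability
    hmcr, then 'pitch adjust' with a random swap with probability par."""
    n = len(memory[0])
    new, remaining = [], set(range(n))
    for i in range(n):
        if rng.random() < hmcr:
            # take the value some memorised harmony uses at position i,
            # falling back to a random unused value if it is already taken
            candidate = rng.choice(memory)[i]
            if candidate not in remaining:
                candidate = rng.choice(sorted(remaining))
        else:
            candidate = rng.choice(sorted(remaining))
        new.append(candidate)
        remaining.discard(candidate)
    if rng.random() < par:           # pitch adjustment = swap two positions
        i, j = rng.sample(range(n), 2)
        new[i], new[j] = new[j], new[i]
    return new

# Harmony memory of candidate rehabilitation-activity orderings (toy data).
memory = [[0, 1, 2, 3, 4], [2, 0, 1, 4, 3], [4, 3, 2, 1, 0]]
candidate = harmony_permutation(memory)
assert sorted(candidate) == [0, 1, 2, 3, 4]   # always a valid permutation
```

In the full algorithm each candidate ordering would be scored by the rehabilitation-area objective under the budget constraint, and the worst harmony in memory replaced when the candidate improves on it.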

  3. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    The Blowfish algorithm is a strong, simple block cipher used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or for applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self-Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the new generation process requires comparatively little time and reasonably low memory, which enhances the algorithm and broadens its possible uses.
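    The idea of filling Blowfish's P-array and S-boxes directly from a keystream, instead of running the slow standard key schedule, can be sketched as follows. SHA-256 in a feedback mode stands in for the SSS cipher here, so this illustrates the approach rather than the paper's actual generator.

```python
import hashlib

def keystream_words(key, count):
    """Feedback-mode keystream stand-in: each output block is hashed from
    the key and the previous block (SHA-256 is only a placeholder for the
    SSS cipher used in the paper)."""
    words, state = [], b"\x00" * 32
    while len(words) < count:
        state = hashlib.sha256(key + state).digest()
        for i in range(0, 32, 4):
            words.append(int.from_bytes(state[i:i + 4], "big"))
    return words[:count]

def generate_subkeys(key):
    """Fill Blowfish-sized tables directly from the keystream: 18 P-array
    entries plus four 256-entry S-boxes of 32-bit words."""
    words = keystream_words(key, 18 + 4 * 256)
    p_array = words[:18]
    s_boxes = [words[18 + 256 * i: 18 + 256 * (i + 1)] for i in range(4)]
    return p_array, s_boxes

p, s = generate_subkeys(b"secret key")
```

The appeal for smart cards is that the tables are produced in a single keystream pass, avoiding the 521 block encryptions of the standard Blowfish key schedule.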

  4. Parallel implementation of D-Phylo algorithm for maximum likelihood clusters.

    Science.gov (United States)

    Malik, Shamita; Sharma, Dolly; Khatri, Sunil Kumar

    2017-03-01

    This study explains a newly developed parallel algorithm for the phylogenetic analysis of DNA sequences. The newly designed D-Phylo is a more advanced algorithm for phylogenetic analysis using the maximum likelihood approach. D-Phylo, while exploiting the search capacity of k-means, avoids its main limitation of getting stuck at locally conserved motifs. The authors have tested the behaviour of D-Phylo on an Amazon Linux Amazon Machine Image (Hardware Virtual Machine) i2.4xlarge instance (six central processing units, 122 GiB memory, 8 × 800 solid-state drive Elastic Block Store volume, high network performance) with up to 15 processors for several real-life datasets. Distributing the clusters evenly across all the processors provides the capacity to achieve a near-linear speed-up in the case of a large number of processors.

  5. To develop a universal gamut mapping algorithm

    International Nuclear Information System (INIS)

    Morovic, J.

    1998-10-01

    When a colour image from one colour reproduction medium (e.g. nature, a monitor) needs to be reproduced on another (e.g. a monitor or print) and these media have different colour ranges (gamuts), a method for mapping between them is necessary. If such a gamut mapping algorithm can be used under a wide range of conditions, it can also be incorporated in an automated colour reproduction system and considered, in some sense, universal. As preliminary work, a colour reproduction system was implemented, for which a new printer characterisation model (including grey-scale correction) was developed. Methods were also developed for calculating gamut boundary descriptors and for calculating gamut boundaries along given lines from them. The gamut mapping solution proposed in this thesis is a gamut compression algorithm developed with the aim of being accurate and universally applicable. It was arrived at through an evolutionary gamut mapping development strategy, for the purposes of which five test images were reproduced between a CRT and printed media obtained using an inkjet printer. Initially, a number of previously published algorithms were chosen and psychophysically evaluated; an important characteristic of this evaluation was that it also considered the performance of the algorithms for individual colour regions within the test images used. New algorithms were then developed on this basis and subsequently evaluated, and this process was repeated once more. In this series of experiments the new GCUSP algorithm, which consists of a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp on the lightness axis, gave the most accurate and stable performance overall. The results of these experiments were also useful for improving the understanding of some gamut mapping factors, in particular gamut difference.
In addition to looking at accuracy, the pleasantness of reproductions obtained
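
The two GCUSP stages described above, a chroma-dependent lightness compression followed by compression towards the cusp lightness, can be illustrated in lightness/chroma coordinates. The formulas below are simplified assumptions for illustration, not Morovic's published equations.

```python
def gcusp_like_map(L, C, source_L_range, dest_L_range, cusp_L, k=0.9):
    """Illustrative two-stage gamut compression in lightness (L) and
    chroma (C); the weighting and the factor k are assumed values."""
    sLmin, sLmax = source_L_range
    dLmin, dLmax = dest_L_range
    # Stage 1: chroma-dependent lightness compression. Near-neutral colours
    # (low C) get the full linear compression into the destination lightness
    # range; high-chroma colours keep more of their original lightness.
    t = (L - sLmin) / (sLmax - sLmin)
    L_linear = dLmin + t * (dLmax - dLmin)
    w = min(C / 100.0, 1.0)
    L1 = w * L + (1.0 - w) * L_linear
    # Stage 2: compress towards the lightness of the reproduction cusp
    # on the lightness axis.
    L2 = cusp_L + k * (L1 - cusp_L)
    C2 = k * C
    return L2, C2

# A neutral white (L=100, C=0) mapped into a printer-like lightness range
Lw, Cw = gcusp_like_map(100.0, 0.0, (0.0, 100.0), (10.0, 90.0), cusp_L=50.0)
```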

  6. Research on magnetorheological damper suspension with permanent magnet and magnetic valve based on developed FOA-optimal control algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Ping; Gao, Hong [Anhui Polytechnic University, Wuhu (China); Niu, Limin [Anhui University of Technology, Maanshan (China)

    2017-07-15

    Due to the fail-safe problem, it has been difficult for existing Magnetorheological dampers (MDs) to be widely applied in automotive suspensions. Therefore, permanent magnets and magnetic valves were introduced into existing MDs so that the fail-safe problem could be solved by the magnets and the damping force could be adjusted easily by the magnetic valve. Thus, a new Magnetorheological damper with permanent magnet and magnetic valve (MDPMMV) was developed and an MDPMMV suspension was studied. First of all, the mechanical structure of an existing magnetorheological damper applied in automobile suspensions was redesigned to comprise a permanent magnet and a magnetic valve. In addition, a prediction model of the damping force was built based on electromagnetics theory and the Bingham model. Experimental research was conducted on the newly designed damper, and the goodness of fit between experimental results and those simulated by the models was high. On this basis, a quarter suspension model was built. Then, a Fruit fly optimization algorithm (FOA)-optimal control algorithm suitable for automobile suspensions was designed by extending the standard FOA. Finally, simulation experiments and bench tests with pulse road and class B road input surfaces were carried out, and the results indicated that the working performance of the MDPMMV suspension based on the FOA-optimal control algorithm was good.
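
The FOA core that the control scheme above extends can be sketched in its standard form (Pan's original smell/vision search, with a keep-best update). This is a minimal sketch of plain FOA only, not the paper's FOA-optimal control hybrid; swarm size, step and iteration count are assumed values.

```python
import math
import random

def foa_minimize(objective, iterations=200, swarm=20, step=1.0, seed=1):
    """Minimal fruit fly optimization (minimization). The candidate solution
    is the 'smell concentration' S = 1/D, where D is a fly's distance from
    the origin, as in the standard formulation."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, 5), rng.uniform(0, 5)   # initial swarm location
    best_val, best_s = float("inf"), None
    for _ in range(iterations):
        trial = []
        for _ in range(swarm):
            xi = x + rng.uniform(-step, step)      # random smell-based search
            yi = y + rng.uniform(-step, step)
            d = math.hypot(xi, yi) or 1e-12
            s = 1.0 / d                            # smell concentration
            trial.append((objective(s), xi, yi, s))
        val, bx, by, s = min(trial)                # vision: find the best fly
        if val < best_val:                         # move the swarm if improved
            best_val, best_s, x, y = val, s, bx, by
    return best_s, best_val

# Toy usage: find s minimizing (s - 0.5)^2, i.e. s* = 0.5
s_star, v = foa_minimize(lambda s: (s - 0.5) ** 2)
```

In the paper the objective would instead score suspension performance (e.g. a weighted quarter-car response), with the optimizer tuning the controller gains.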

  7. DIDACTIC TOOLS FOR THE STUDENTS’ ALGORITHMIC THINKING DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. P. Pushkaryeva

    2017-01-01

    Full Text Available Introduction. Modern engineers must possess a high potential of cognitive abilities, in particular algorithmic thinking (AT). In this regard, the training of future experts (university graduates of technical specialities) has to provide knowledge of the principles and methods of designing various algorithms, along with the abilities to analyze them and to choose the optimal variants for implementing engineering activity. For the full formation of AT skills it is necessary to consider all channels of psychological perception and cogitative processing of educational information: visual, auditory, and kinesthetic. The aim of the present research is the theoretical basis of the design, development and use of resources for the successful development of AT during the educational process of training in programming. Methodology and research methods. The methodology of the research involves the basic theses of cognitive psychology and the information approach to organizing the educational process. The research used the following methods: analysis; modeling of cognitive processes; design of training tools that take into account the mentality and peculiarities of information perception; and diagnostics of the efficiency of the didactic tools. Results. A three-level model for training future engineers in programming, aimed at the development of AT skills, was developed. The model includes three components: aesthetic, simulative, and conceptual. Stages of mastering a new discipline are allocated. It is proved that for the development of AT skills when training in programming it is necessary to use kinesthetic tools at the stage of mental algorithmic map formation, and algorithmic animation and algorithmic mental maps at the stage of algorithmic model and conceptual image formation. Kinesthetic tools for the development of students’ AT skills when training in algorithmization and programming are designed. 
The use of kinesthetic training simulators in the educational process provides for the effective development of the algorithmic style of

  8. An ATR architecture for algorithm development and testing

    Science.gov (United States)

    Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym

    2013-05-01

    A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.
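
The module-exchange idea described above can be sketched generically: each processing stage is a module with a common interface, wired into a pipeline, so a segmentation algorithm can be swapped without touching the rest of the system. The FFI framework itself is in C++; this Python sketch and all class names in it are invented for illustration.

```python
class Module:
    """Generic pipeline stage: consumes and returns a frame dictionary."""
    def process(self, frame):
        raise NotImplementedError

class ThresholdSegmenter(Module):
    """Segmentation module A: fixed intensity threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
    def process(self, frame):
        frame["mask"] = [1 if v > self.threshold else 0
                         for v in frame["pixels"]]
        return frame

class GradientSegmenter(Module):
    """Segmentation module B: large jumps between neighbouring pixels."""
    def process(self, frame):
        px = frame["pixels"]
        frame["mask"] = [0] + [1 if abs(px[i] - px[i - 1]) > 50 else 0
                               for i in range(1, len(px))]
        return frame

class Pipeline:
    """Runs frames through an ordered list of exchangeable modules."""
    def __init__(self, modules):
        self.modules = modules
    def run(self, frame):
        for module in self.modules:
            frame = module.process(frame)
        return frame

# Exchanging the segmentation module leaves the rest of the pipeline intact
frame = {"pixels": [0, 30, 200, 210]}
out_a = Pipeline([ThresholdSegmenter(100)]).run(dict(frame))
out_b = Pipeline([GradientSegmenter()]).run(dict(frame))
```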

  9. Development and validation of a novel algorithm based on the ECG magnet response for rapid identification of any unknown pacemaker.

    Science.gov (United States)

    Squara, Fabien; Chik, William W; Benhayon, Daniel; Maeda, Shingo; Latcu, Decebal Gabriel; Lacaze-Gadonneix, Jonathan; Tibi, Thierry; Thomas, Olivier; Cooper, Joshua M; Duthoit, Guillaume

    2014-08-01

    Pacemaker (PM) interrogation requires correct manufacturer identification. However, an unidentified PM is a frequent occurrence, requiring time-consuming steps to identify the device. The purpose of this study was to develop and validate a novel algorithm for PM manufacturer identification, using the ECG response to magnet application. Data on the magnet responses of all recent PM models (≤15 years) from the 5 major manufacturers were collected. An algorithm based on the ECG response to magnet application to identify the PM manufacturer was subsequently developed. Patients undergoing ECG during magnet application in various clinical situations were prospectively recruited in 7 centers. The algorithm was applied in the analysis of every ECG by a cardiologist blinded to PM information. A second blinded cardiologist analyzed a sample of randomly selected ECGs in order to assess the reproducibility of the results. A total of 250 ECGs were analyzed during magnet application. The algorithm led to the correct single manufacturer choice in 242 ECGs (96.8%), whereas 7 (2.8%) could only be narrowed to 1 of 2 manufacturer possibilities. Only 1 (0.4%) incorrect manufacturer identification occurred. The algorithm identified Medtronic and Sorin Group PMs with 100% sensitivity and specificity, Biotronik PMs with 100% sensitivity and 99.5% specificity, and St. Jude and Boston Scientific PMs with 92% sensitivity and 100% specificity. The results were reproducible between the 2 blinded cardiologists, with 92% concordant findings. Unknown PM manufacturers can be accurately identified by analyzing the ECG magnet response using this newly developed algorithm. Copyright © 2014 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  10. Development of an adjoint sensitivity field-based treatment-planning technique for the use of newly designed directional LDR sources in brachytherapy.

    Science.gov (United States)

    Chaswal, V; Thomadsen, B R; Henderson, D L

    2012-02-21

    The development and application of an automated 3D greedy heuristic (GH) optimization algorithm utilizing the adjoint sensitivity fields for treatment planning to assess the advantage of directional interstitial prostate brachytherapy is presented. Directional and isotropic dose kernels generated using Monte Carlo simulations based on Best Industries model 2301 I-125 source are utilized for treatment planning. The newly developed GH algorithm is employed for optimization of the treatment plans for seven interstitial prostate brachytherapy cases using mixed sources (directional brachytherapy) and using only isotropic sources (conventional brachytherapy). All treatment plans resulted in V100 > 98% and D90 > 45 Gy for the target prostate region. For the urethra region, the D10(Ur), D90(Ur) and V150(Ur) and for the rectum region the V100cc, D2cc, D90(Re) and V90(Re) all are reduced significantly when mixed sources brachytherapy is used employing directional sources. The simulations demonstrated that the use of directional sources in the low dose-rate (LDR) brachytherapy of the prostate clearly benefits in sparing the urethra and the rectum sensitive structures from overdose. The time taken for a conventional treatment plan is less than three seconds, while the time taken for a mixed source treatment plan is less than nine seconds, as tested on an Intel Core2 Duo 2.2 GHz processor with 1GB RAM. The new 3D GH algorithm is successful in generating a feasible LDR brachytherapy treatment planning solution with an extra degree of freedom, i.e. directionality in very little time.
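
The greedy-heuristic (GH) selection loop described above can be illustrated in miniature: repeatedly add the candidate source whose precomputed dose kernel best trades target coverage against sensitive-structure dose, until every target voxel reaches the prescription. The scoring ratio below is an illustrative stand-in for the adjoint-sensitivity-based importance used in the paper, and all data are toy values.

```python
def greedy_plan(kernels, target, organ, prescription, max_seeds=10):
    """Toy greedy seed selection over precomputed dose kernels.
    kernels: {seed_id: {voxel: dose}}."""
    dose = {}
    chosen = []
    available = set(kernels)

    def min_target_dose():
        return min(dose.get(v, 0.0) for v in target)

    def score(sid):
        k = kernels[sid]
        gain = sum(k.get(v, 0.0) for v in target)   # dose gained in target
        harm = sum(k.get(v, 0.0) for v in organ)    # dose added to OAR
        return gain / (1.0 + harm)

    while available and len(chosen) < max_seeds \
            and min_target_dose() < prescription:
        best = max(available, key=score)
        available.remove(best)
        chosen.append(best)
        for v, d in kernels[best].items():          # accumulate the kernel
            dose[v] = dose.get(v, 0.0) + d
    return chosen, dose

# Directional kernels deposit dose only toward the target voxels; the
# isotropic kernel also irradiates the rectum-like voxel "r1".
kernels = {
    "iso":  {"t1": 1.0, "t2": 1.0, "r1": 1.0},
    "dir1": {"t1": 1.0, "t2": 1.0},
    "dir2": {"t1": 1.0, "t2": 1.0},
}
chosen, dose = greedy_plan(kernels, target=["t1", "t2"], organ=["r1"],
                           prescription=2.0)
```

Even in this toy setting the greedy loop prefers the directional kernels, mirroring the paper's finding that directionality spares the sensitive structures.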

  11. Development of an adjoint sensitivity field-based treatment-planning technique for the use of newly designed directional LDR sources in brachytherapy

    Science.gov (United States)

    Chaswal, V.; Thomadsen, B. R.; Henderson, D. L.

    2012-02-01

    The development and application of an automated 3D greedy heuristic (GH) optimization algorithm utilizing the adjoint sensitivity fields for treatment planning to assess the advantage of directional interstitial prostate brachytherapy is presented. Directional and isotropic dose kernels generated using Monte Carlo simulations based on Best Industries model 2301 I-125 source are utilized for treatment planning. The newly developed GH algorithm is employed for optimization of the treatment plans for seven interstitial prostate brachytherapy cases using mixed sources (directional brachytherapy) and using only isotropic sources (conventional brachytherapy). All treatment plans resulted in V100 > 98% and D90 > 45 Gy for the target prostate region. For the urethra region, the D10Ur, D90Ur and V150Ur and for the rectum region the V100cc, D2cc, D90Re and V90Re all are reduced significantly when mixed sources brachytherapy is used employing directional sources. The simulations demonstrated that the use of directional sources in the low dose-rate (LDR) brachytherapy of the prostate clearly benefits in sparing the urethra and the rectum sensitive structures from overdose. The time taken for a conventional treatment plan is less than three seconds, while the time taken for a mixed source treatment plan is less than nine seconds, as tested on an Intel Core2 Duo 2.2 GHz processor with 1GB RAM. The new 3D GH algorithm is successful in generating a feasible LDR brachytherapy treatment planning solution with an extra degree of freedom, i.e. directionality in very little time.

  12. Critical function monitoring system algorithm development

    International Nuclear Information System (INIS)

    Harmon, D.L.

    1984-01-01

    Accurate critical function status information is a key to operator decision-making during events threatening nuclear power plant safety. The Critical Function Monitoring System provides continuous critical function status monitoring by use of algorithms which mathematically represent the processes by which an operating staff would determine critical function status. This paper discusses in detail the systematic design methodology employed to develop adequate Critical Function Monitoring System algorithms

  13. Cubic scaling algorithms for RPA correlation using interpolative separable density fitting

    Science.gov (United States)

    Lu, Jianfeng; Thicke, Kyle

    2017-12-01

    We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.
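
The key device above, evaluating a quantity through Cauchy's integral formula with a geometrically convergent quadrature, can be demonstrated on a scalar analytic function. This toy verifies only the quadrature idea (the trapezoidal rule on a circular contour converges geometrically for analytic integrands), not the RPA scheme itself.

```python
import cmath

def cauchy_eval(f, z0, radius=1.0, n=16):
    """Approximate f(z0) via Cauchy's integral formula,
    f(z0) = (1/2*pi*i) * contour integral of f(z)/(z - z0) dz,
    discretised on a circle of given radius with the n-point
    trapezoidal rule."""
    total = 0j
    for k in range(n):
        theta = 2.0 * cmath.pi * k / n
        z = z0 + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * (2.0 * cmath.pi / n)
        total += f(z) / (z - z0) * dz
    return total / (2j * cmath.pi)

# exp is entire, so the quadrature error decays geometrically in n
z0 = 0.3 + 0.2j
approx = cauchy_eval(cmath.exp, z0, n=24)
```

Doubling the number of quadrature points roughly squares the accuracy, which is the geometric convergence the abstract refers to.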

  14. Applied economic model development algorithm for electronics company

    Directory of Open Access Journals (Sweden)

    Mikhailov I.

    2017-01-01

    Full Text Available The purpose of this paper is to report on experience gained in creating practical methods and algorithms that simplify the development of applied decision support systems. It describes an algorithm that is the result of two years of research and has had more than a year of practical verification. In the case of testing electronic components, the time of contract conclusion is the point at which the greatest managerial mistakes can be made: at this stage it is difficult to achieve a realistic assessment of the time limits and of the wage fund for future work. Creating an estimating model is a possible way to solve this problem, and the article presents an algorithm for creating such models. The algorithm is illustrated by the development of an analytical model that serves for estimating the amount of work. The paper lists the algorithm’s stages and explains their meaning in relation to the participants’ goals. Implementing the algorithm has made possible a twofold acceleration of model development and the fulfilment of management’s requirements, and the resulting models have had a significant economic effect. A new set of tasks was identified for further theoretical study.

  15. Improving a newly developed patient-reported outcome for thyroid patients, using cognitive interviewing

    DEFF Research Database (Denmark)

    Watt, Torquil; Rasmussen, Ase Krogh; Groenvold, Mogens

    2008-01-01

    Objective To improve a newly developed patient-reported outcome measure for thyroid patients using cognitive interviewing. Methods Thirty-one interviews using immediate retrospective and expansive probing were conducted among patients with non-toxic goiter (n = 4), nodular toxic goiter (n = 5) Gr...

  16. Newly Developed Ceramic Membranes for Dehydration and Separation of Organic Mixtures by Pervaporation

    NARCIS (Netherlands)

    Gemert, van R.W.; Cuperus, F.P.

    1995-01-01

    Polymeric pervaporation membranes sometimes show great variation in performance when they are used alternately for different solvent mixtures. In addition, long-term membrane stability is a problem in the case of some solvents. Therefore, newly developed ceramic silica membranes with a 'dense' top layer

  17. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern here, which restricts the useful application of these methods, and additional model reduction methods have been developed that take account of these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  18. Development of Educational Support System for Algorithm using Flowchart

    Science.gov (United States)

    Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta

    Information technology is now indispensable for business and industrial development. However, the insufficient number of software developers has become a social problem. To solve this problem, it is necessary to develop and implement an environment for learning algorithms and programming languages. In this paper, we describe an algorithm study support system for programmers based on flowcharts. Since the proposed system uses a Graphical User Interface (GUI), it becomes easy for a programmer to understand the algorithms in programs.

  19. The Development Needs of Newly Appointed Senior School Leaders in the Western Cape South Africa: A Case Study

    Directory of Open Access Journals (Sweden)

    Nelius Jansen van Vuuren

    2017-12-01

    Full Text Available The essential role that senior school leaders play in school leadership teams to ensure effective strategic leadership in schools has been the subject of intense discussion for many years. Crucial to this debate is the establishment of professional learning and leadership approaches for newly appointed senior school leaders. Recommendations for policy and practice highlight the importance of appropriate, multifaceted, developmental support initiatives for newly appointed school leaders. In many countries, including South Africa, a teaching qualification and, in most cases, extensive teaching experience are the only requirements for being appointed as a senior school leader in a school. This tends to suggest that no further professional development is required for newly appointed school leaders, the problem addressed in this paper. This paper reports on the main findings regarding the perceived development needs of newly appointed senior school leaders in the Western Cape, South Africa, and suggests that school leaders occupy a unique and specialist role in education, which requires relevant and specific preparation to support effective leadership. The respondents of this study report a lack of contextualised training and support before and after their appointment to their new roles, creating unique development needs. This paper, therefore, employs a mixed-method approach to gather data to understand the perceived needs of twenty newly appointed senior school leaders in the Western Cape, South Africa.

  20. Developing an Enhanced Lightning Jump Algorithm for Operational Use

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Overall goals: 1. Build on the lightning jump framework established in previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). Four lightning jump algorithm configurations were developed (2σ, 3σ, Threshold 10 and Threshold 8), and five algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2σ algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59% and an HSS of 75%.
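
The 2σ configuration can be sketched as follows: flag a jump when the latest change in total flash rate (DFRDT) exceeds two standard deviations of the recent changes, provided the storm is electrically active enough. The history length and the 10 flashes/min activation threshold below are illustrative choices, not the operational configuration.

```python
import statistics

def lightning_jumps(flash_rates, sigma_mult=2.0, history=5, min_rate=10.0):
    """Return indices into flash_rates where a sigma_mult-sigma jump occurs.
    flash_rates: total flash rate per fixed time period (e.g. per 2 min)."""
    # DFRDT: period-to-period change in flash rate
    dfrdt = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
    jumps = []
    for i in range(history, len(dfrdt)):
        sigma = statistics.pstdev(dfrdt[i - history:i])
        active = flash_rates[i + 1] >= min_rate    # activation threshold
        if active and sigma > 0 and dfrdt[i] > sigma_mult * sigma:
            jumps.append(i + 1)                    # index of the jump
    return jumps

# A storm ticking along at a few flashes per period, then jumping to 30
jumps = lightning_jumps([2, 3, 3, 4, 4, 5, 30])
```

The activation threshold is what keeps ordinary nonsevere variability, where small absolute changes can still be large relative to a quiet recent history, from triggering false jumps.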

  1. Parameters identification of hydraulic turbine governing system using improved gravitational search algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chaoshun Li; Jianzhong Zhou [College of Hydroelectric Digitization Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2011-01-15

    Parameter identification of hydraulic turbine governing system (HTGS) is crucial in precise modeling of hydropower plant and provides support for the analysis of stability of power system. In this paper, a newly developed optimization algorithm, called gravitational search algorithm (GSA), is introduced and applied in parameter identification of HTGS, and the GSA is improved by combination of the search strategy of particle swarm optimization. Furthermore, a new weighted objective function is proposed in the identification frame. The improved gravitational search algorithm (IGSA), together with genetic algorithm, particle swarm optimization and GSA, is employed in parameter identification experiments and the procedure is validated by comparing experimental and simulated results. Consequently, IGSA is shown to locate more precise parameter values than the compared methods with higher efficiency. (author)
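
The GSA core that the IGSA above improves can be sketched in its basic form (Rashedi et al.): agent masses are derived from fitness, gravitational attraction between agents yields accelerations, and a decaying gravitational constant shifts the search from exploration to exploitation. This is a simplified sketch, minimization only, with forces from all agents rather than the Kbest subset, and it omits both the PSO-strategy hybridisation and the HTGS identification objective; parameter values are common defaults.

```python
import math
import random

def gsa_minimize(objective, bounds, agents=20, iters=200, g0=100.0,
                 alpha=20.0, seed=0):
    """Minimal gravitational search algorithm (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [objective(p) for p in pos]
        fbest, fworst = min(fit), max(fit)
        if fbest < best_f:
            best_f = fbest
            best_x = list(pos[fit.index(fbest)])
        # Masses: better fitness -> larger (normalised) mass
        spread = (fworst - fbest) or 1e-12
        m = [(fworst - f) / spread for f in fit]
        total = sum(m) or 1e-12
        mass = [mi / total for mi in m]
        g = g0 * math.exp(-alpha * t / iters)      # decaying constant G(t)
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                r = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    # randomly weighted attraction toward agent j
                    acc[d] += (rng.random() * g * mass[j]
                               * (pos[j][d] - pos[i][d]) / r)
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
    return best_x, best_f

# Toy usage: minimize the 2-dimensional sphere function
best, val = gsa_minimize(lambda v: sum(x * x for x in v), [(-5, 5)] * 2)
```

For HTGS identification the objective would instead be the (weighted) error between measured and simulated governing-system responses, with the bounds covering the unknown model parameters.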

  2. Parameters identification of hydraulic turbine governing system using improved gravitational search algorithm

    International Nuclear Information System (INIS)

    Li Chaoshun; Zhou Jianzhong

    2011-01-01

    Parameter identification of hydraulic turbine governing system (HTGS) is crucial in precise modeling of hydropower plant and provides support for the analysis of stability of power system. In this paper, a newly developed optimization algorithm, called gravitational search algorithm (GSA), is introduced and applied in parameter identification of HTGS, and the GSA is improved by combination of the search strategy of particle swarm optimization. Furthermore, a new weighted objective function is proposed in the identification frame. The improved gravitational search algorithm (IGSA), together with genetic algorithm, particle swarm optimization and GSA, is employed in parameter identification experiments and the procedure is validated by comparing experimental and simulated results. Consequently, IGSA is shown to locate more precise parameter values than the compared methods with higher efficiency.

  3. Development of a Novel Locomotion Algorithm for Snake Robot

    International Nuclear Information System (INIS)

    Khan, Raisuddin; Billah, Md Masum; Watanabe, Mitsuru; Shafie, A A

    2013-01-01

    A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine is one of the best-known gaits for snake robots in disaster recovery missions, used to navigate narrow spaces. Other gaits, such as concertina or rectilinear, may be suitable for narrow spaces but are highly inefficient if used in open spaces as well, because the reduced friction makes snake movement difficult. A novel locomotion algorithm has been proposed based on a modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. Using this locomotion algorithm, the snake robot is able to navigate in narrow spaces, overcoming the limitations of the other gaits for narrow-space navigation.

  4. Improvement of Parallel Algorithm for MATRA Code

    International Nuclear Information System (INIS)

    Kim, Seong-Jin; Seo, Kyong-Won; Kwon, Hyouk; Hwang, Dae-Hyun

    2014-01-01

    The feasibility of parallelizing the MATRA code was studied at KAERI early this year. As a result, a parallel algorithm for the MATRA code was developed to reduce the considerable computing time required to solve large problems, such as a whole-core pin-by-pin problem for a typical PWR, and to improve the overall performance of multi-physics coupling calculations. The performance of the MATRA code was greatly improved by implementing the parallel algorithm using MPI communication. For 1/8-core and whole-core problems for the SMART reactor, a speedup of about 10 was obtained when 25 processors were used. However, it was also shown that the performance deteriorated as the number of axial nodes increased. In this paper, the inter-processor communication procedure is optimized to improve the previous parallel algorithm. To address the performance deterioration of the parallelized MATRA code, a new communication algorithm between processors is presented; with it, the speedup was improved and remained stable regardless of the number of axial nodes.

  5. Validating self-reported mobile phone use in adults using a newly developed smartphone application

    NARCIS (Netherlands)

    Goedhart, Geertje; Kromhout, Hans; Wiart, Joe; Vermeulen, Roel

    2015-01-01

    OBJECTIVE: Interpretation of epidemiological studies on health effects from mobile phone use is hindered by uncertainties in the exposure assessment. We used a newly developed smartphone application (app) to validate self-reported mobile phone use and behaviour among adults. METHODS: 107

  6. Micro-computed tomography newly developed for in vivo small animal imaging

    International Nuclear Information System (INIS)

    Arai, Yoshinori; Ninomiya, Tadashi; Kato, Takafumi; Masuda, Yuji

    2005-01-01

    The aim of this paper is to report on a newly developed micro-computed tomography system for in vivo use. The system is composed of a micro-focus X-ray tube and an image intensifier (I.I.), both of which rotate around the object stage. A guinea pig and a rat were examined; each anesthetized animal was set on the secure object stage. Images of the guinea pig's head and of the rat's tibia/knee joint were taken, as well as an image of the rat's tail. Reconstruction and image viewing were carried out using I-View software. The voxel matrix was 512 x 512 x 384, and voxel sizes ranged from 10 x 10 x 10 μm to 100 x 100 x 100 μm. The exposure time was 17 s, and the reconstruction time was 150 s. The guinea pig's head and the rat's tibia/knee joint were observed clearly at 100-μm and 30-μm voxel sizes, respectively, and the trabecular bone of the tail was also observed clearly at a 10-μm voxel size. The newly developed micro-computed tomography system makes it possible to obtain images of anesthetized animals set on a secure object stage, and clear bone images of small animals could be obtained within a short time. (author)

  7. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model based development of engine control systems has several advantages. The development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both engine and control system. After optimizing the control algorithm it can be executed

  8. B&W PWR advanced control system algorithm development

    International Nuclear Information System (INIS)

    Winks, R.W.; Wilson, T.L.; Amick, M.

    1992-01-01

    This paper discusses algorithm development of an Advanced Control System for the B&W Pressurized Water Reactor (PWR) nuclear power plant. The paper summarizes the history of the project, describes the operation of the algorithm, and presents transient results from a simulation of the plant and control system. The history covers the steps in the development process and the roles played by the utility owners, B&W Nuclear Service Company (BWNS), Oak Ridge National Laboratory (ORNL), and the Foxboro Company. The algorithm description is a brief overview of the features of the control system. The transient results show the operation of the algorithm in a normal power maneuvering mode and in a moderately large upset following a feedwater pump trip.

  9. Assessing College Students' Perceptions of a Case Teacher's Pedagogical Content Knowledge Using a Newly Developed Instrument

    Science.gov (United States)

    Jang, Syh-Jong

    2011-01-01

    Ongoing professional development for college teachers has been much emphasized. However, previous research on learning environments has seldom addressed college students' perceptions of teachers' PCK. This study aimed to evaluate college students' perceptions of a physics teacher's PCK development using a newly developed instrument and workshop…

  10. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields which require finding the most appropriate value, optimization has become a vital approach for producing effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have long been popular, but more advanced optimization problems eventually required the use of more effective techniques. At this point, computer science took an important role in providing software-based techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to form an idea of the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus, 11th Edition, and the obtained results have been compared with classical optimization solutions.
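
VOA and CoDOA themselves are not sketched here; instead, the kind of comparison the paper performs can be illustrated on a textbook-style problem, contrasting the classical calculus route (set the derivative to zero) with a generic stochastic search of the family that intelligent optimizers refine. The search scheme and all parameter values below are assumptions for illustration.

```python
import random

def f(x):
    """Textbook-style objective: minimize f(x) = x^2 - 4x + 5."""
    return x * x - 4 * x + 5

# Classical solution: f'(x) = 2x - 4 = 0  =>  x* = 2, f(x*) = 1
x_classical = 2.0

def random_search(lo=-10.0, hi=10.0, iters=5000, seed=0):
    """Keep-best random search with a slowly shrinking sampling radius."""
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    radius = hi - lo
    for _ in range(iters):
        cand = best_x + rng.uniform(-radius, radius)
        if f(cand) < f(best_x):
            best_x = cand
        radius = max(radius * 0.999, 1e-6)
    return best_x

x_stochastic = random_search()
```

The classical route gives the exact optimum in one step when the derivative is available in closed form; the stochastic route only approximates it, but needs nothing beyond function evaluations, which is what makes such techniques attractive for the harder problems the paper discusses.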

  11. Applicability of newly developed 610MPa class heavy thickness high strength steel to boiler pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Katayama, Norihiko; Kaihara, Shoichiro; Ishii, Jun [Ishikawajima-Harima Heavy Industries Corp., Yokohama (Japan); Kajigaya, Ichiro [Ishikawajima-Harima Heavy Industries Corp., Tokyo (Japan); Totsuka, Takehiro; Miyazaki, Takashi [Ishikawajima-Harima Heavy Industries Corp., Aioi (Japan)

    1995-11-01

Construction of a 350 MW class PFBC (Pressurized Fluidized Bed Combustion) boiler plant is under planning in Japan. The design temperature and pressure of the vessel are a maximum of 350 C and 1.69 MPa, respectively. As the plate thickness of the vessel exceeds 100 mm, a high strength steel plate with good weldability and low susceptibility to reheat cracking was required and developed. The steel was designed to provide a tensile strength over 610 MPa at 350 C after postweld heat treatment (PWHT), with good notch toughness. The authors investigated the welding performance of the newly developed steel using 150 mm-thick plate welded by pulsed-MAG and SAW methods. It was confirmed that the newly developed steel and its welds possess sufficient strength and toughness after PWHT and are applicable to the actual pressure vessel.

  12. Fundamental Characteristics of the Newly Developed ATA™ Membrane Dialyzer.

    Science.gov (United States)

    Sunohara, Takashi; Masuda, Toshiaki

    2017-01-01

    Dialysis membranes are often made from synthetic polymers, such as polysulfone. However, membranes made from cellulose triacetate have superior biocompatibility and have been used since the 1980s. On-line hemodiafiltration treatment accompanied by massive fluid replacement is increasingly being used in Europe and Japan, but cellulose triacetate is not suitable for this treatment. Our newly developed asymmetric triacetate membrane, the ATA™ membrane, substantially improved the filtration properties and blood compatibility because of the asymmetric structure and smooth surface of this cellulose acetate membrane. Key Message: The ATA membrane maintains its high permeability even after massive filtration and shows less temporal variation in its permeation performance, lower protein adsorption, and superior biocompatibility compared with conventional membranes. © 2017 S. Karger AG, Basel.

  13. The Performance and Development of the Inner Detector Trigger Algorithms at ATLAS for LHC Run 2

    CERN Document Server

    Sowden, Benjamin Charles; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly reimplemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is provided. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2 for the HLT. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 but with no significant reduction in efficiency. The performance and timing of the algorithms for numerous physics signatures in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performan...

  14. Verification test for on-line diagnosis algorithm based on noise analysis

    International Nuclear Information System (INIS)

    Tamaoki, T.; Naito, N.; Tsunoda, T.; Sato, M.; Kameda, A.

    1980-01-01

    An on-line diagnosis algorithm was developed and its verification test was performed using a minicomputer. This algorithm identifies the plant state by analyzing various system noise patterns, such as power spectral densities, coherence functions etc., in three procedure steps. Each obtained noise pattern is examined by using the distances from its reference patterns prepared for various plant states. Then, the plant state is identified by synthesizing each result with an evaluation weight. This weight is determined automatically from the reference noise patterns prior to on-line diagnosis. The test was performed with 50 MW (th) Steam Generator noise data recorded under various controller parameter values. The algorithm performance was evaluated based on a newly devised index. The results obtained with one kind of weight showed the algorithm efficiency under the proper selection of noise patterns. Results for another kind of weight showed the robustness of the algorithm to this selection. (orig.)
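The three-step identification scheme described in this record — compare each observed noise pattern with reference patterns for every plant state, then synthesize the per-pattern distances with evaluation weights — can be sketched in Python. All pattern names, values, and weights below are invented for illustration; this is not the original minicomputer code.

```python
import math

def classify_state(observed, references, weights):
    """Identify the plant state whose reference noise patterns are
    closest to the observed patterns, using a weighted sum of
    per-pattern Euclidean distances (illustrative sketch only)."""
    best_state, best_score = None, math.inf
    for state, patterns in references.items():
        score = sum(
            w * math.dist(observed[name], patterns[name])
            for name, w in weights.items()
        )
        if score < best_score:
            best_state, best_score = state, score
    return best_state

# Toy example: two "patterns" (e.g. PSD bins, coherence values)
refs = {
    "normal": {"psd": [1.0, 0.5], "coherence": [0.9]},
    "faulty": {"psd": [2.0, 1.5], "coherence": [0.3]},
}
obs = {"psd": [1.1, 0.6], "coherence": [0.85]}
print(classify_state(obs, refs, {"psd": 1.0, "coherence": 2.0}))  # "normal"
```

In the paper the evaluation weights are derived automatically from the reference patterns before on-line use; here they are simply given.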

  15. Application of backtracking algorithm to depletion calculations

    International Nuclear Information System (INIS)

    Wu Mingyu; Wang Shixi; Yang Yong; Zhang Qiang; Yang Jiayin

    2013-01-01

Based on the theory of the linear chain method for analytical depletion calculations, the burnup matrix is decoupled by a divide-and-conquer strategy and linear chains with the Markov property are formed. The density, activity and decay heat of every nuclide in a chain can then be calculated from analytical solutions. Every possible reaction path of a nuclide must be considered when the linear chains are established. To ensure calculation precision and efficiency, an algorithm is needed that covers all reaction paths and searches them automatically according to the problem description and precision restrictions. Through analysis and comparison of several searching algorithms, the backtracking algorithm was selected to establish and calculate the linear chains using depth-first search (DFS), forming an algorithm which can solve the depletion problem adaptively and with high fidelity. The complexity of the solution space and time was analyzed, taking into account the depletion process and the characteristics of the backtracking algorithm. The newly developed depletion program was coupled with the Monte Carlo program MCMG-II to calculate the benchmark burnup problem of the first core of the China Experimental Fast Reactor (CEFR), and preliminary verification and validation of the program were performed. (authors)
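The path-searching step described above can be illustrated with a small depth-first backtracking sketch: enumerate every linear chain from a starting nuclide, pruning paths whose cumulative probability falls below a precision cutoff. The nuclide graph, branching ratios, and cutoff are hypothetical; a real depletion code would build the chains from nuclear data libraries.

```python
def enumerate_chains(graph, start, cutoff=1e-4):
    """Depth-first enumeration of linear reaction/decay chains.
    `graph` maps a nuclide to its (daughter, branching_ratio) pairs;
    paths whose cumulative probability drops below `cutoff` are pruned."""
    chains = []

    def dfs(nuclide, path, prob):
        path.append(nuclide)
        daughters = graph.get(nuclide, [])
        if not daughters:
            chains.append((list(path), prob))   # terminal nuclide: record chain
        for child, ratio in daughters:
            if prob * ratio >= cutoff:          # precision restriction
                dfs(child, path, prob * ratio)
        path.pop()                              # backtrack

    dfs(start, [], 1.0)
    return chains

# Toy graph: a rare branch (ratio 1e-6) is pruned by the cutoff
toy = {"U239": [("Np239", 1.0)], "Np239": [("Pu239", 0.999), ("X", 1e-6)]}
for chain, p in enumerate_chains(toy, "U239"):
    print(" -> ".join(chain), p)   # U239 -> Np239 -> Pu239 0.999
```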

  16. Genetic algorithms - A new technique for solving the neutron spectrum unfolding problem

    International Nuclear Information System (INIS)

    Freeman, David W.; Edwards, D. Ray; Bolon, Albert E.

    1999-01-01

    A new technique utilizing genetic algorithms has been applied to the Bonner sphere neutron spectrum unfolding problem. Genetic algorithms are part of a relatively new field of 'evolutionary' solution techniques that mimic living systems with computer-simulated 'chromosome' solutions. Solutions mate and mutate to create better solutions. Several benchmark problems, considered representative of radiation protection environments, have been evaluated using the newly developed UMRGA code which implements the genetic algorithm unfolding technique. The results are compared with results from other well-established unfolding codes. The genetic algorithm technique works remarkably well and produces solutions with relatively high spectral qualities. UMRGA appears to be a superior technique in the absence of a priori data - it does not rely on 'lucky' guesses of input spectra. Calculated personnel doses associated with the unfolded spectra match benchmark values within a few percent
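The core idea — evolve candidate spectra whose folding through the detector response matrix reproduces the measured counts — can be sketched with a toy genetic algorithm. The response matrix, spectrum, and GA parameters below are invented; this is not the UMRGA code.

```python
import random

random.seed(1)

# Toy response matrix R (detectors x energy bins) and "measured" counts
R = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]]
true_spectrum = [2.0, 3.0]
measured = [sum(r * s for r, s in zip(row, true_spectrum)) for row in R]

def fitness(spec):
    """Negative sum of squared residuals between folded spectrum and counts."""
    return -sum(
        (sum(r * s for r, s in zip(row, spec)) - m) ** 2
        for row, m in zip(R, measured)
    )

def evolve(pop_size=40, generations=200):
    """Elitist GA: keep the best half, breed the rest by averaging
    crossover plus Gaussian mutation, clamped to non-negative fluxes."""
    pop = [[random.uniform(0, 5) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]                   # crossover
            child = [max(0.0, g + random.gauss(0, 0.1)) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # should approach the true spectrum [2.0, 3.0]
```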

  17. PHL10/460: Cancerfacts.com - Vertical Portal with Newly Developed Health Profiler

    OpenAIRE

    Lenz, C; Brucksch, M

    1999-01-01

    Introduction Unlike general health portals such as WebMD and Drkoop.com that cover everything from the flu to heart disease, Silicon Valley-based cancerfacts.com is a so-called vertical portal. It covers only one small vertical niche of health care: cancer, and in particular, prostate cancer. As a value-added proprietary technology, the company offers its newly developed profile engine to health information retrievers. Methods Users are enabled to insert their specific medical information - r...

  18. Image processing algorithm of computer-aided diagnosis in lung cancer screening by CT

    International Nuclear Information System (INIS)

    Yamamoto, Shinji

    2004-01-01

In this paper, an image processing algorithm for computer-aided diagnosis of lung cancer by X-ray CT is described, which has been developed by my research group over the past 10 years or so. CT lung images gathered at the mass screening stage are almost all normal, and lung cancer nodules are found at a rate of less than 10%. To pick up such very rare nodules with high accuracy, a very sensitive detection algorithm is required, one able to detect local and very slight variations in the image. On the other hand, such a sensitive detection algorithm has the side effect that many normal shadows are detected as abnormal. In this paper I describe how to balance these conflicting requirements and realize a practical computer-aided diagnosis tool with the image processing algorithm developed by my research group. In particular, I focus on the principle and characteristics of the Quoit filter, newly developed by my group as a highly sensitive filter. (author)

  19. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Directory of Open Access Journals (Sweden)

    C. Fernandez-Lozano

    2013-01-01

Full Text Available Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). A hybrid model that combines genetic algorithms and support vector machines is therefore suggested, in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.

  20. Development and Application of a Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.
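Sensor data qualification of the kind demonstrated with the PHALT System typically combines simple checks such as range limits and stuck-value detection. A minimal sketch with invented thresholds (not the actual PHALT algorithms):

```python
def qualify_sensor(samples, lo, hi, max_flat=5):
    """Flag a sensor stream as failed if it leaves its valid range or
    is 'stuck' (unchanged for max_flat consecutive samples)."""
    flat = 1
    for prev, cur in zip(samples, samples[1:]):
        if not (lo <= cur <= hi):
            return "range-fail"
        flat = flat + 1 if cur == prev else 1
        if flat >= max_flat:
            return "stuck-fail"
    return "ok"

print(qualify_sensor([1.0, 1.1, 1.2, 1.1], 0, 5))  # ok
print(qualify_sensor([1.0, 9.0], 0, 5))            # range-fail
print(qualify_sensor([2.0] * 6, 0, 5))             # stuck-fail
```

In a hardware-in-the-loop setup, the same qualification function would run against live test-article data instead of recorded lists.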

  1. Battery algorithm verification and development using hardware-in-the-loop testing

    Science.gov (United States)

    He, Yongsheng; Liu, Wei; Koch, Brain J.

Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.

  2. Battery algorithm verification and development using hardware-in-the-loop testing

    Energy Technology Data Exchange (ETDEWEB)

    He, Yongsheng [General Motors Global Research and Development, 30500 Mound Road, MC 480-106-252, Warren, MI 48090 (United States); Liu, Wei; Koch, Brain J. [General Motors Global Vehicle Engineering, Warren, MI 48090 (United States)

    2010-05-01

Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs. (author)
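A common starting point for the SOC predictions being verified in such HIL testing is coulomb counting, which integrates current over time against rated capacity. A textbook sketch with invented parameter values (embedded algorithms layer voltage-based correction on top of this, which is precisely what cell-level HIL data helps tune):

```python
def update_soc(soc, current_a, dt_s, capacity_ah, efficiency=0.99):
    """Coulomb-counting state-of-charge update.
    Positive current = discharge; charge efficiency applied when charging."""
    delta_ah = current_a * dt_s / 3600.0
    eff = efficiency if current_a < 0 else 1.0
    soc -= eff * delta_ah / capacity_ah
    return min(1.0, max(0.0, soc))     # clamp to [0, 1]

soc = 0.8
for _ in range(3600):                  # one hour at 5 A discharge, 1 s steps
    soc = update_soc(soc, 5.0, 1.0, capacity_ah=10.0)
print(round(soc, 3))                   # 5 Ah drawn from a 10 Ah cell: 0.8 -> 0.3
```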

  3. Newly developed semi-empirical formulas for (p, α) at 17.9 MeV and ...

    Indian Academy of Sciences (India)

Pramana – Journal of Physics, Volume 74, Issue 6. Newly developed semi-empirical formulas for (p, α) at 17.9 MeV and (, ) at 22.3 MeV reaction cross-sections. Eyyup Tel, Abdullah Aydin, E Gamze Aydin, Abdullah Kaplan, Ömer Yavaş, İskender A Reyhancan. Research Articles, Volume 74, Issue 6, June ...

  4. Development of radio frequency interference detection algorithms for passive microwave remote sensing

    Science.gov (United States)

    Misra, Sidharth

    Radio Frequency Interference (RFI) signals are man-made sources that are increasingly plaguing passive microwave remote sensing measurements. RFI is of insidious nature, with some signals low power enough to go undetected but large enough to impact science measurements and their results. With the launch of the European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite in November 2009 and the upcoming launches of the new NASA sea-surface salinity measuring Aquarius mission in June 2011 and soil-moisture measuring Soil Moisture Active Passive (SMAP) mission around 2015, active steps are being taken to detect and mitigate RFI at L-band. An RFI detection algorithm was designed for the Aquarius mission. The algorithm performance was analyzed using kurtosis based RFI ground-truth. The algorithm has been developed with several adjustable location dependant parameters to control the detection statistics (false-alarm rate and probability of detection). The kurtosis statistical detection algorithm has been compared with the Aquarius pulse detection method. The comparative study determines the feasibility of the kurtosis detector for the SMAP radiometer, as a primary RFI detection algorithm in terms of detectability and data bandwidth. The kurtosis algorithm has superior detection capabilities for low duty-cycle radar like pulses, which are more prevalent according to analysis of field campaign data. Most RFI algorithms developed have generally been optimized for performance with individual pulsed-sinusoidal RFI sources. A new RFI detection model is developed that takes into account multiple RFI sources within an antenna footprint. The performance of the kurtosis detection algorithm under such central-limit conditions is evaluated. The SMOS mission has a unique hardware system, and conventional RFI detection techniques cannot be applied. Instead, an RFI detection algorithm for SMOS is developed and applied in the angular domain. This algorithm compares
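The kurtosis detector exploits the fact that thermal noise is Gaussian, with a normalized fourth moment near 3, while low-duty-cycle pulsed RFI inflates it. A self-contained sketch of the statistic (sample counts, pulse amplitude, and the decision threshold are invented for illustration):

```python
import random
import statistics

def kurtosis(samples):
    """Sample kurtosis (normalized 4th central moment); ~3 for Gaussian noise."""
    m = statistics.fmean(samples)
    var = statistics.fmean([(x - m) ** 2 for x in samples])
    m4 = statistics.fmean([(x - m) ** 4 for x in samples])
    return m4 / var ** 2

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(20000)]
# Low-duty-cycle pulsed RFI: strong pulses in ~1% of samples
rfi = [x + (8.0 if random.random() < 0.01 else 0.0) for x in noise]

print(round(kurtosis(noise), 2))   # near 3.0 -> "clean"
print(kurtosis(rfi) > 3.5)         # True -> flagged as RFI
```

Radiometers implementing this compute the moments in hardware per integration period; the same decision rule then applies.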

  5. Algorithmic tools for interpreting vital signs.

    Science.gov (United States)

    Rathbun, Melina C; Ruth-Sahd, Lisa A

    2009-07-01

    Today's complex world of nursing practice challenges nurse educators to develop teaching methods that promote critical thinking skills and foster quick problem solving in the novice nurse. Traditional pedagogies previously used in the classroom and clinical setting are no longer adequate to prepare nursing students for entry into practice. In addition, educators have expressed frustration when encouraging students to apply newly learned theoretical content to direct the care of assigned patients in the clinical setting. This article presents algorithms as an innovative teaching strategy to guide novice student nurses in the interpretation and decision making related to vital sign assessment in an acute care setting.
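An algorithm of the kind described — a stepwise rule set guiding novices through vital-sign interpretation — can be illustrated as a simple decision function. The thresholds below are purely illustrative teaching values, not clinical guidance:

```python
def interpret_vitals(hr, sbp, rr, temp_c):
    """Toy decision algorithm for adult vital signs: heart rate (bpm),
    systolic blood pressure (mmHg), respiratory rate (breaths/min),
    temperature (deg C). Illustrative thresholds only."""
    findings = []
    if hr > 100:
        findings.append("tachycardia")
    elif hr < 60:
        findings.append("bradycardia")
    if sbp < 90:
        findings.append("hypotension")
    if rr > 20:
        findings.append("tachypnea")
    if temp_c >= 38.0:
        findings.append("fever")
    return findings or ["within normal limits"]

print(interpret_vitals(hr=112, sbp=84, rr=24, temp_c=38.5))
# ['tachycardia', 'hypotension', 'tachypnea', 'fever']
```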

  6. The newly developed Toyota plug-in hybrid system

    Energy Technology Data Exchange (ETDEWEB)

    Takaoka, Toshifumi; Ichinose, Hiroki [Toyota Motor Corporation (Japan)

    2010-07-01

Toyota has introduced several hybrid vehicles (HV) as a countermeasure to concerns associated with the automobile, such as CO2 reduction, energy security, and emission reduction in urban areas. A next step towards an even more effective solution for these concerns is the plug-in hybrid vehicle (PHV). This vehicle combines the advantages of electric vehicles (EV), which use clean electric energy, and HVs, with their high environmental potential and user-friendliness comparable to conventional vehicles, such as a long cruising range. This paper describes a newly developed plug-in hybrid system and its vehicle performance. The system uses a Li-ion battery with high energy density and achieves an affordable EV range without sacrificing cabin space. The vehicle achieves CO2 emissions of 59 g/km and meets the most stringent emission regulations in the world. The new PHV is a forerunner of the large-scale mass-production PHV due two years later. PHVs have the potential to become popular as a realistic solution towards sustainable mobility through renewable electricity usage in the future. (orig.)

  7. Development of morphing algorithms for Histfactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, Histfactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses the information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.
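For contrast with the information-geometry approach described in this talk, the simplest conventional scheme — piecewise-linear vertical morphing between the nominal and ±1σ variation templates, applied per histogram bin — can be sketched as follows (bin contents invented):

```python
def morph(nominal, up, down, alpha):
    """Piecewise-linear vertical morphing of a histogram, per bin:
    alpha = +1 reproduces the +1 sigma template, alpha = -1 the
    -1 sigma template, with linear behavior in between and beyond."""
    out = []
    for n, u, d in zip(nominal, up, down):
        delta = (u - n) * alpha if alpha >= 0 else (n - d) * alpha
        out.append(n + delta)
    return out

nom, up, down = [10.0, 20.0], [12.0, 22.0], [9.0, 17.0]
print(morph(nom, up, down, 1.0))    # [12.0, 22.0]  (+1 sigma template)
print(morph(nom, up, down, -0.5))   # [9.5, 18.5]   (halfway toward -1 sigma)
```

The kink at alpha = 0 and the blind extrapolation beyond ±1σ are exactly the weaknesses that motivate smoother interpolation algorithms such as the one presented here.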

  8. How Schools Can Promote Healthy Development for Newly Arrived Immigrant and Refugee Adolescents: Research Priorities

    Science.gov (United States)

    McNeely, Clea A.; Morland, Lyn; Doty, S. Benjamin; Meschke, Laurie L.; Awad, Summer; Husain, Altaf; Nashwan, Ayat

    2017-01-01

    Background: The US education system must find creative and effective ways to foster the healthy development of the approximately 2 million newly arrived immigrant and refugee adolescents, many of whom contend with language barriers, limited prior education, trauma, and discrimination. We identify research priorities for promoting the school…

  9. A computer literacy scale for newly enrolled nursing college students: development and validation.

    Science.gov (United States)

    Lin, Tung-Cheng

    2011-12-01

    Increasing application and use of information systems and mobile technologies in the healthcare industry require increasing nurse competency in computer use. Computer literacy is defined as basic computer skills, whereas computer competency is defined as the computer skills necessary to accomplish job tasks. Inadequate attention has been paid to computer literacy and computer competency scale validity. This study developed a computer literacy scale with good reliability and validity and investigated the current computer literacy of newly enrolled students to develop computer courses appropriate to students' skill levels and needs. This study referenced Hinkin's process to develop a computer literacy scale. Participants were newly enrolled first-year undergraduate students, with nursing or nursing-related backgrounds, currently attending a course entitled Information Literacy and Internet Applications. Researchers examined reliability and validity using confirmatory factor analysis. The final version of the developed computer literacy scale included six constructs (software, hardware, multimedia, networks, information ethics, and information security) and 22 measurement items. Confirmatory factor analysis showed that the scale possessed good content validity, reliability, convergent validity, and discriminant validity. This study also found that participants earned the highest scores for the network domain and the lowest score for the hardware domain. With increasing use of information technology applications, courses related to hardware topic should be increased to improve nurse problem-solving abilities. This study recommends that emphases on word processing and network-related topics may be reduced in favor of an increased emphasis on database, statistical software, hospital information systems, and information ethics.

  10. Newly developed chitosan-silver hybrid nanoparticles: biosafety and apoptosis induction in HepG2 cells

    International Nuclear Information System (INIS)

    El-Sherbiny, Ibrahim M.; Salih, Ehab; Yassin, Abdelrahman M.; Hafez, Elsayed E.

    2016-01-01

The present study reports the biosafety assessment, the exact molecular effects, and apoptosis induction of newly developed chitosan-silver hybrid nanoparticles (Cs–Ag NPs) in HepG2 cells. The investigated hybrid NPs were green synthesized using Cs/grape leaves aqueous extract (Cs/GLE) or Cs/GLE NPs as reducing and stabilizing agents. The successful formation of Cs/GLE NPs and Cs–Ag hybrid NPs has been confirmed by UV–Vis spectrophotometry, FTIR spectroscopy, XRD, and HRTEM. From the TEM analysis, the prepared Cs/GLE NPs are uniform and spherical with an average size of 150 nm, and the Ag NPs (5–10 nm) were formed mainly on their surface. The UV–Vis spectra of Cs–Ag NPs showed a surface plasmon resonance (SPR) peak at about 450 nm, confirming their formation. The synthesized Cs–Ag NPs were found to be crystalline, as shown by XRD patterns with an fcc phase oriented along the (111), (200), (220), and (311) planes. The cytotoxicity patterns, the antiproliferative activities, and the possible mechanisms of anticancer activity at the molecular level of the newly developed Cs–Ag hybrid NPs were investigated. Cytotoxicity patterns of all the preparations demonstrated that the nontoxic treatment concentrations ranged from 0.39 to 50%, and many of the newly prepared Cs–Ag hybrid NPs showed high anticancer activities against HepG2 cells and induced cellular apoptosis by downregulating the BCL2 gene and upregulating P53.

  11. Newly developed chitosan-silver hybrid nanoparticles: biosafety and apoptosis induction in HepG2 cells

    Energy Technology Data Exchange (ETDEWEB)

    El-Sherbiny, Ibrahim M., E-mail: ielsherbiny@Zewailcity.edu.eg; Salih, Ehab [Zewail City of Science and Technology, Center for Materials Science (Egypt); Yassin, Abdelrahman M. [Genetic Engineering and Biotechnology Research Institute, City of Scientific Research and Technology Applications, Biopharmaceutical Product Research Department (Egypt); Hafez, Elsayed E. [City of Scientific Research and Technology Applications, Plant Protection and Biomolecular Diagnosis Department (Egypt)

    2016-07-15

The present study reports the biosafety assessment, the exact molecular effects, and apoptosis induction of newly developed chitosan-silver hybrid nanoparticles (Cs–Ag NPs) in HepG2 cells. The investigated hybrid NPs were green synthesized using Cs/grape leaves aqueous extract (Cs/GLE) or Cs/GLE NPs as reducing and stabilizing agents. The successful formation of Cs/GLE NPs and Cs–Ag hybrid NPs has been confirmed by UV–Vis spectrophotometry, FTIR spectroscopy, XRD, and HRTEM. From the TEM analysis, the prepared Cs/GLE NPs are uniform and spherical with an average size of 150 nm, and the Ag NPs (5–10 nm) were formed mainly on their surface. The UV–Vis spectra of Cs–Ag NPs showed a surface plasmon resonance (SPR) peak at about 450 nm, confirming their formation. The synthesized Cs–Ag NPs were found to be crystalline, as shown by XRD patterns with an fcc phase oriented along the (111), (200), (220), and (311) planes. The cytotoxicity patterns, the antiproliferative activities, and the possible mechanisms of anticancer activity at the molecular level of the newly developed Cs–Ag hybrid NPs were investigated. Cytotoxicity patterns of all the preparations demonstrated that the nontoxic treatment concentrations ranged from 0.39 to 50%, and many of the newly prepared Cs–Ag hybrid NPs showed high anticancer activities against HepG2 cells and induced cellular apoptosis by downregulating the BCL2 gene and upregulating P53.

  12. Mind the Gaps: Controversies about Algorithms, Learning and Trendy Knowledge

    Science.gov (United States)

    Argenton, Gerald

    2017-01-01

    This article critically explores the ways by which the Web could become a more learning-oriented medium in the age of, but also in spite of, the newly bred algorithmic cultures. The social dimension of algorithms is reported in literature as being a socio-technological entanglement that has a powerful influence on users' practices and their lived…

  13. Comparison of switching control algorithms effective in restricting the switching in the neighborhood of the origin

    International Nuclear Information System (INIS)

    Joung, JinWook; Chung, Lan; Smyth, Andrew W

    2010-01-01

The active interaction control (AIC) system, consisting of a primary structure, an auxiliary structure and an interaction element, was proposed to protect the primary structure against earthquakes and winds. The objective of the AIC system, reducing the responses of the primary structure, is fulfilled by activating or deactivating the switching between the engagement and the disengagement of the primary and auxiliary structures through the interaction element. The status of the interaction element is controlled by switching control algorithms. The previously developed switching control algorithms require an excessive amount of switching, which is inefficient. In this paper, the excessive switching is restricted by imposing an appropriately designed switching boundary region, where switching is prohibited, on pre-designed engagement–disengagement conditions. Two different approaches are used in designing the newly proposed AID-off and AID-off 2 algorithms. The AID-off 2 algorithm is designed to affect the deactivated switching regions explicitly by using the current status of the AIC system, unlike the AID-off algorithm, which follows the same procedure for designing the engagement–disengagement conditions as the previously developed algorithms. Both algorithms are shown to be effective in reducing the number of switching events triggered by the previously developed AID algorithm under an appropriately selected control sampling period for different earthquakes, but the AID-off 2 algorithm outperforms the AID-off algorithm in reducing the number of switching events.
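The idea of a switching-prohibited boundary region can be illustrated with a scalar sketch: inside the region the current engagement state is simply held, which suppresses chattering near the origin. The variable name and the engagement condition below are invented for illustration, not taken from the AID-off algorithms:

```python
def switch_decision(engaged, signal, boundary=0.1):
    """Engagement decision with a switching-prohibited region around
    the origin: while |signal| < boundary the current state is held."""
    if abs(signal) < boundary:
        return engaged            # hold state: switching prohibited
    return signal > 0             # illustrative engagement condition

states = []
engaged = False
for x in [0.05, 0.2, 0.05, -0.05, -0.2, 0.02]:
    engaged = switch_decision(engaged, x)
    states.append(engaged)
print(states)   # [False, True, True, True, False, False]
```

Without the boundary region, the small-amplitude samples near zero would toggle the state on every step.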

  14. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
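The explicit leapfrog cousin of this scheme — central differences in time and space on a staggered 1D grid — is easy to sketch; the paper's algorithm replaces the explicit time step with an implicit alternating-direction procedure and uses curvilinear finite volumes rather than a Cartesian grid. Grid size, source, and Courant number below are arbitrary:

```python
import math

def fdtd_1d(steps=200, n=200):
    """1D staggered-grid (Yee-style) update for Ez/Hy with central
    differences, normalized so that the Courant number c*dt/dx = 0.5."""
    ez = [0.0] * n
    hy = [0.0] * n
    c = 0.5                                    # Courant number (stable: <= 1)
    for t in range(steps):
        for i in range(n - 1):                 # H update (half step)
            hy[i] += c * (ez[i + 1] - ez[i])
        ez[n // 4] += math.exp(-((t - 30) ** 2) / 100.0)   # soft Gaussian source
        for i in range(1, n):                  # E update (half step)
            ez[i] += c * (hy[i] - hy[i - 1])
    return ez

field = fdtd_1d()
print(max(abs(v) for v in field) > 0.01)   # True: pulse has propagated
```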

  15. Different Pathophysiological Phenotypes among Newly Diagnosed Type 2 Diabetes Patients

    DEFF Research Database (Denmark)

    Stidsen, Jacob

    2013-01-01

    Type 2 diabetes (T2D) can be considered a syndrome with several different pathophysiological mechanisms leading to hyperglycemia. Nonetheless, T2D is treated according to algorithms as if it was one disease entity. Methods: We investigated the prevalence of different pathophysiological phenotypes...... or secondary diabetes), classic obesity-associated insulin resistant diabetes ( f-P-C-peptide >= 568 pmol/l) and a normoinsulinopenic group (333 age of our new T2D patients was 61 years (range 21-95 years), 57% were men. We found that 3.0% newly diagnosed T2D patients...... suffered from LADA, 3.9% from secondary diabetes, 6.0% from steroid induced diabetes 5.9% had insulinopenic diabetes, whereas 56.7% presented the classic obesity-associated insulin-resistant phenotype. 24.6% was classified as normoinsulinopenic patients. Conclusion: We conclude that newly diagnosed T2D...

  16. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing both efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUT) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm are automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  17. Development and validation of an algorithm for laser application in wound treatment

    Directory of Open Access Journals (Sweden)

    Diequison Rite da Cunha

    2017-12-01

Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review of the Health Sciences databases covering the past ten years was performed. The algorithm was evaluated by 24 participants: nurses, physiotherapists, and physicians. For data analysis, Cronbach's alpha coefficient and the chi-square test of independence were used. The significance level of the statistical tests was set at 5% (p < 0.05). Results: Regarding ease of reading the algorithm, 41.7% of the professionals rated it great, 41.7% good, and 16.7% regular. Asked whether the algorithm was sufficient to support decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding whether the algorithm contained enough information to support the choice of laser parameters, 91.7% said yes. The questionnaire showed reliability by Cronbach's alpha coefficient (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds.
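The reliability statistic used in the study, Cronbach's alpha, is straightforward to compute from an item-score matrix; a minimal sketch:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    using sample variances (ddof=1).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

For a questionnaire whose items covary strongly, the value approaches 1, consistent with the high α = 0.962 reported above.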

  18. A newly developed RIA for thyroid hormone autoantibodies (THAAb)

    International Nuclear Information System (INIS)

    Li Fengying; Gu Liqiong; Chen Xiayin; Jin Yan; Chen Shuxian; Zhang Qun; Qiu Hongxia; Yang Jingren; Zhao Yongju; Chen Mingdao

    2004-01-01

Objective: To report a newly developed RIA for THAAb from this laboratory. Methods: The tested serum samples were incubated with labelled thyroid hormone analogues (125I-T3, 125I-T4) for 16 hours. The antigen-antibody complex was precipitated with anti-human IgG (immunoprecipitation method) and its radioactivity determined. Results: The mean positive rate of THAAb in healthy euthyroid controls (n=186) was only 1.07%. The mean positive rate in patients with thyroid disorders was 14.4% (13.5% in hyperthyroid subjects, n=118, and 15.2% in hypothyroid subjects, n=72). The serum THAAb titer could be markedly lowered by adding non-labelled thyroid hormones, and in THAAb-positive sera the directly measured FT3 and FT4 levels would be significantly lowered. Conclusion: In patients with positive THAAb (about 14.4% of all patients with thyroid disorders), FT3 and FT4 levels are best determined after PEG precipitation. (authors)

  19. Muscle Attenuation Is Associated With Newly Developed Hypertension in Men of African Ancestry.

    Science.gov (United States)

    Zhao, Qian; Zmuda, Joseph M; Kuipers, Allison L; Bunker, Clareann H; Patrick, Alan L; Youk, Ada O; Miljkovic, Iva

    2017-05-01

Increased ectopic adipose tissue infiltration in skeletal muscle is associated with insulin resistance and diabetes mellitus. We evaluated whether change in skeletal muscle adiposity predicts subsequent development of hypertension in men of African ancestry, a population underrepresented in previous studies. In the Tobago Health Study, a prospective longitudinal study among men of African ancestry (age range 40-91 years), calf intermuscular adipose tissue and skeletal muscle attenuation were measured with computed tomography. Hypertension was defined as a systolic blood pressure ≥140 mm Hg, a diastolic blood pressure ≥90 mm Hg, or receiving antihypertensive medications. Logistic regression was performed with adjustment for age, insulin resistance, baseline and 6-year change in body mass index, baseline and 6-year change in waist circumference, and other potential confounding factors. Among 746 normotensive men at baseline, 321 (43%) developed hypertension during the mean 6.2 years of follow-up. Decreased skeletal muscle attenuation was associated with newly developed hypertension after adjustment for baseline and 6-year change of body mass index (odds ratio [95% confidence interval] per SD, 1.3 [1.0-1.6]) or baseline and 6-year change of waist circumference (odds ratio [95% confidence interval] per SD, 1.3 [1.0-1.6]). No association was observed between increased intermuscular adipose tissue and hypertension. Our novel findings show that decreased muscle attenuation is associated with newly developed hypertension among men of African ancestry, independent of general and central adiposity and insulin resistance. Further studies are needed to adjust for inflammation and for visceral and other ectopic adipose tissue depots, and to confirm our findings in other populations. © 2017 American Heart Association, Inc.

  20. Minimum Probability of Error-Based Equalization Algorithms for Fading Channels

    Directory of Open Access Journals (Sweden)

    Janos Levendovszky

    2007-06-01

Novel channel equalizer algorithms are introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithms are based on newly derived bounds on the probability of error (PE) and guarantee better performance than the traditional zero-forcing (ZF) or minimum mean square error (MMSE) algorithms. The new equalization methods require channel state information, which is obtained by a fast adaptive channel identification algorithm. As a result, the combined convergence time needed for channel identification and PE minimization still remains smaller than the convergence time of traditional adaptive algorithms, yielding real-time equalization. The performance of the new algorithms is tested by extensive simulations on standard mobile channels.
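The ZF and MMSE baselines the paper compares against share one linear form; a minimal sketch (not the paper's PE-minimizing method) that computes equalizer taps from a known FIR channel response:

```python
import numpy as np

def equalizer_taps(h, n_taps, noise_var=0.0, delay=None):
    """Linear equalizer taps for an FIR channel h.

    noise_var = 0 gives the zero-forcing (ZF) least-squares solution;
    a positive noise variance gives the MMSE solution (assuming
    unit-power, uncorrelated symbols).
    """
    h = np.asarray(h, dtype=float)
    # Convolution (channel) matrix: column i is h delayed by i samples.
    H = np.zeros((n_taps + len(h) - 1, n_taps))
    for i in range(n_taps):
        H[i:i + len(h), i] = h
    if delay is None:
        delay = (H.shape[0] - 1) // 2   # aim the combined response at the center
    d = np.zeros(H.shape[0])
    d[delay] = 1.0                      # desired channel+equalizer response: a delta
    return np.linalg.solve(H.T @ H + noise_var * np.eye(n_taps), H.T @ d)
```

For an ideal channel `h = [1.0]` the ZF solution is simply a delayed unit tap; for dispersive channels the MMSE variant trades residual intersymbol interference against noise enhancement.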

  1. An improved algorithm for finding all minimal paths in a network

    International Nuclear Information System (INIS)

    Bai, Guanghan; Tian, Zhigang; Zuo, Ming J.

    2016-01-01

    Minimal paths (MPs) play an important role in network reliability evaluation. In this paper, we report an efficient recursive algorithm for finding all MPs in two-terminal networks, which consist of a source node and a sink node. A linked path structure indexed by nodes is introduced, which accepts both directed and undirected form of networks. The distance between each node and the sink node is defined, and a simple recursive algorithm is presented for labeling the distance for each node. Based on the distance between each node and the sink node, additional conditions for backtracking are incorporated to reduce the number of search branches. With the newly introduced linked node structure, the distances between each node and the sink node, and the additional backtracking conditions, an improved backtracking algorithm for searching for all MPs is developed. In addition, the proposed algorithm can be adapted to search for all minimal paths for each source–sink pair in networks consisting of multiple source nodes and/or multiple sink nodes. Through computational experiments, it is demonstrated that the proposed algorithm is more efficient than existing algorithms when the network size is not too small. The proposed algorithm becomes more advantageous as the size of the network grows. - Highlights: • A linked path structure indexed by nodes is introduced to represent networks. • Additional conditions for backtracking are proposed based on the distance of each node. • An efficient algorithm is developed to find all MPs for two-terminal networks. • The computational efficiency of the algorithm for two-terminal networks is investigated. • The computational efficiency of the algorithm for multi-terminal networks is investigated.
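The backtracking idea, with distance-to-sink labels used as a pruning condition, can be illustrated in a minimal form (the paper's linked path structure and its full set of backtracking conditions are not reproduced here):

```python
from collections import deque

def all_minimal_paths(adj, source, sink):
    """Enumerate all minimal paths (simple source-sink paths) by backtracking.

    adj: dict node -> iterable of successor nodes (directed edges).
    As in the paper, each node is first labeled with its distance to the
    sink (BFS over reversed edges); any node that cannot reach the sink
    is never entered, pruning dead-end search branches.
    """
    # Build reversed adjacency and BFS from the sink to label distances.
    rev = {}
    for u, nbrs in adj.items():
        for v in nbrs:
            rev.setdefault(v, []).append(u)
    dist = {sink: 0}
    queue = deque([sink])
    while queue:
        v = queue.popleft()
        for u in rev.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)

    paths, stack = [], [source]

    def backtrack(u):
        if u == sink:
            paths.append(list(stack))
            return
        for v in adj.get(u, []):
            if v in dist and v not in stack:   # prune dead ends and cycles
                stack.append(v)
                backtrack(v)
                stack.pop()

    if source in dist:
        backtrack(source)
    return paths
```

On a small bridge network this enumerates every simple s-t path exactly once; the distance labels do the pruning that makes larger networks tractable.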

  2. Improved core protection calculator system algorithm

    International Nuclear Information System (INIS)

    Yoon, Tae Young; Park, Young Ho; In, Wang Kee; Bae, Jong Sik; Baeg, Seung Yeob

    2009-01-01

Core Protection Calculator System (CPCS) is a digitized core protection system which provides core protection functions based on two reactor core operation parameters, Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD). It generates a reactor trip signal when the core condition exceeds the DNBR or LPD design limit. It consists of four independent channels which adopt a two-out-of-four trip logic. This paper describes the CPCS algorithm improvement for the newly designed core protection calculator system, RCOPS (Reactor COre Protection System). New features include the improvement of the DNBR algorithm for thermal margin, the addition of pre-trip alarm generation for the auxiliary trip function, VOPT (Variable Over Power Trip) prevention during RPCS (Reactor Power Cutback System) actuation, and the improvement of the CEA (Control Element Assembly) signal checking algorithm. To verify the improved CPCS algorithm, CPCS algorithm verification tests, 'Module Test' and 'Unit Test', would be performed on the RCOPS single-channel facility. It is expected that the improved CPCS algorithm will increase the DNBR margin and enhance plant availability by reducing unnecessary reactor trips.
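The two-out-of-four coincidence logic mentioned above is simple to state in code; this is a generic sketch of the voting rule only, not the RCOPS implementation:

```python
def two_out_of_four_trip(channel_trips):
    """Generic 2-out-of-4 coincidence voting.

    A reactor trip is demanded when at least two of the four independent
    channels vote to trip, tolerating one failed-safe channel without a
    spurious plant trip and one failed-unsafe channel without losing the
    protective function.
    """
    if len(channel_trips) != 4:
        raise ValueError("expected exactly four channel states")
    return sum(bool(c) for c in channel_trips) >= 2
```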

  3. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    Science.gov (United States)

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-04-01

Breath alcohol screening is important for traffic safety, access control, and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for determining the breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting, and personalization to reduce estimation errors. Evaluation was performed using data from a previously conducted human study. It is concluded that these features in combination significantly reduce the random error compared to the signal averaging algorithm alone.
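The CO2-based dilution compensation can be illustrated with a toy calculation; the nominal end-expiratory CO2 level below is an assumed constant for illustration, not a value from the paper:

```python
import numpy as np

# Assumed nominal end-expiratory CO2 concentration (volume %); the actual
# reference value and units used by the authors may differ.
ALVEOLAR_CO2 = 4.2

def brac_estimate(alcohol_signal, co2_signal):
    """Dilution-compensated breath alcohol estimate (illustrative sketch).

    Alcohol and CO2 are sampled concurrently from the diluted breath.
    Scaling each alcohol sample by ALVEOLAR_CO2 / CO2 undoes the unknown
    dilution factor, and averaging the scaled samples reduces the random
    error, the role of the signal-averaging step in the paper.
    """
    a = np.asarray(alcohol_signal, dtype=float)
    c = np.asarray(co2_signal, dtype=float)
    return float(np.mean(a * (ALVEOLAR_CO2 / c)))
```

Two samples taken at different dilutions of the same breath then map to the same compensated estimate.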

  4. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  5. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    International Nuclear Information System (INIS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-01-01

This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of the aerosol optical properties, in which the major principal components (PCs) for surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and the weighting coefficients for each PC of surface reflectance are added into the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical solutions of the Jacobians are validated against finite-difference calculations with relative error less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can yield a reconstruction of spectral surface reflectance with errors less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and root-mean-square error of less than 0.003), which suggests self-consistency of the inversion algorithm.
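The surface-reflectance step, reconstructing each spectrum from a few principal components and per-spectrum weighting coefficients, can be sketched with plain SVD-based PCA (illustrative only; in the paper these coefficients are retrieved jointly with the aerosol parameters through UNL-VRTM):

```python
import numpy as np

def pc_reconstruct(spectra, n_pc=6):
    """Approximate each spectrum by its first n_pc principal components.

    spectra: (n_samples, n_bands) array of surface reflectance spectra.
    Returns the rank-n_pc reconstruction; with enough PCs the residual
    error vanishes, mirroring the <1% reconstruction error quoted above
    for six PCs.
    """
    X = np.asarray(spectra, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD-based PCA: rows of Vt are the principal components.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_pc]                 # (n_pc, n_bands) basis spectra
    coeffs = Xc @ W.T             # weighting coefficients per spectrum
    return coeffs @ W + mean
```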

  6. Predictive score for the development or progression of Graves' orbitopathy in patients with newly diagnosed Graves' hyperthyroidism

    DEFF Research Database (Denmark)

    Wiersinga, Wilmar; Žarković, Miloš; Bartalena, Luigi

    2018-01-01

OBJECTIVE: To construct a predictive score for the development or progression of Graves' orbitopathy (GO) in Graves' hyperthyroidism (GH). DESIGN: Prospective observational study in patients with newly diagnosed GH, treated with antithyroid drugs (ATD) for 18 months at ten participating centers.

  7. A Discrete Fruit Fly Optimization Algorithm for the Traveling Salesman Problem.

    Directory of Open Access Journals (Sweden)

    Zi-Bin Jiang

The fruit fly optimization algorithm (FOA) is a newly developed bio-inspired algorithm. The continuous variant of FOA has been proven to be a powerful evolutionary approach to determining the optima of a numerical function on a continuous definition domain. In this study, a discrete FOA (DFOA) is developed and applied to the traveling salesman problem (TSP), a common combinatorial problem. In the DFOA, the TSP tour is represented by an ordering of city indices, and the bio-inspired meta-heuristic search processes are executed with two elaborately designed main procedures: the smelling and tasting processes. In the smelling process, an effective crossover operator is used by the fruit fly group to search for the neighbors of the best-known swarm location. During the tasting process, an edge intersection elimination (EXE) operator is designed to improve the neighbors of the non-optimum food location in order to enhance the exploration performance of the DFOA. In addition, benchmark instances from the TSPLIB are classified in order to test the searching ability of the proposed algorithm. Furthermore, the effectiveness of the proposed DFOA is compared to that of other meta-heuristic algorithms. The results indicate that the proposed DFOA can be effectively used to solve TSPs, especially large-scale problems.
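The EXE operator's goal, removing pairs of crossing tour edges, is what the classical 2-opt move achieves: reversing a tour segment uncrosses two intersecting edges. A compact 2-opt sketch (a stand-in for the paper's operator, not its implementation):

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse the segment that most shortens the tour.

    Any pair of crossing edges strictly lengthens a Euclidean tour, so a
    2-opt local optimum contains no edge intersections.
    """
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour
```

Starting from a self-crossing tour of the unit square, one reversal recovers the optimal perimeter-4 tour.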

  8. Development of the algorithm for obtaining 3-dimensional information using the structured light

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Dong Uk; Lee, Jae Hyub; Kim, Chung Soo [Korea University of Technology and Education, Cheonan (Korea)

    1998-03-01

The utilization of robots in atomic power plants or nuclear-related facilities has grown rapidly. In order to perform preassigned jobs using a robot in nuclear-related facilities, advanced technology for extracting 3D information of objects is essential. We have studied an algorithm to extract 3D information of objects using laser slit light and a camera, and developed the following hardware system and algorithms. (1) We have designed and fabricated the hardware system, which consists of a laser light and two cameras. The hardware system can be easily installed on the robot. (2) In order to reduce the occlusion problem when measuring 3D information using laser slit light and a camera, we have studied a system with laser slit light and two cameras and developed an algorithm to synthesize the 3D information obtained from the two cameras. (3) For easy use of the obtained 3D information, we expressed it in a digital distance image format and developed an algorithm to interpolate the 3D information of points which were not obtained. (4) In order to simplify the calibration of the camera parameters, we also designed and fabricated an LED plate, and developed an algorithm for detecting the center position of each LED automatically. We can confirm the efficiency of the developed algorithms and hardware system through experimental results. 16 refs., 26 figs., 1 tab. (Author)
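Laser-slit ranging of this kind reduces to intersecting a camera ray with the laser plane. A single-camera triangulation sketch under an assumed geometry (the paper's two-camera occlusion handling and calibration are not modeled):

```python
import math

def slit_depth(u, f, b, theta):
    """Depth of a laser-slit point from its image column (sketch).

    Assumed geometry: a pinhole camera at the origin looks along +z with
    focal length f (in pixels); the laser source sits at x = b and its
    light sheet is tilted by theta toward the camera axis, so points on
    the sheet satisfy x = b - z * tan(theta). A pixel column u sees the
    ray x = z * u / f; intersecting ray and sheet gives the depth z.
    """
    return b * f / (u + f * math.tan(theta))
```

With theta = 0 the sheet is the plane x = b, and a point imaged at column u = f*b/z correctly inverts to depth z.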

  9. Pressure drop performance evaluation for test assemblies with the newly developed top and bottom nozzles

    International Nuclear Information System (INIS)

    Lee, S. K.; Park, N. K.; Su, J. M.; Kim, H. K.; Lee, J. N.; Kim, K. T.

    2003-01-01

To perform the hydraulic tests for the newly developed top and bottom nozzles, two kinds of test assemblies were manufactured: one with the newly developed top and bottom nozzles, and the other a Guardian test assembly, which is commercially in mass production now. The test results show that the test assembly with the one top nozzle and two bottom nozzles has greater pressure loss coefficients than the Guardian test assembly by 60.9% and 90.4% at the bottom nozzle location. This is due to the debris filtering plate of the new bottom nozzle, which improves the filtering efficiency against foreign material. In the regions of the mid grid and top nozzle, there is no difference in pressure loss coefficient between the test assemblies, since the component features in these regions are very similar or identical. The increases in loss coefficient are 14.2% and 21.9% for models A and B, respectively, at the test assembly scale, and the values would be within 10% at the scale of a real fuel assembly. As a result of the hydraulic performance evaluation, model A is superior to model B.
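The pressure loss coefficient compared above is the pressure drop normalized by the dynamic pressure of the flow; a one-line sketch of the relation:

```python
def loss_coefficient(dp, rho, v):
    """Pressure loss coefficient K = dp / (0.5 * rho * v**2).

    dp in Pa, rho in kg/m^3, v in m/s. At a fixed coolant velocity, a
    60.9% larger K therefore means a 60.9% larger pressure drop across
    that component.
    """
    return dp / (0.5 * rho * v**2)
```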

  10. A dynamic programming algorithm for the buffer allocation problem in homogeneous asymptotically reliable serial production lines

    Directory of Open Access Journals (Sweden)

    Diamantidis A. C.

    2004-01-01

In this study, the buffer allocation problem (BAP) in homogeneous, asymptotically reliable serial production lines is considered. A known aggregation method, given by Lim, Meerkov, and Top (1990), for the performance evaluation (i.e., estimation of throughput) of this type of production line when the buffer allocation is known, is used as an evaluative method in conjunction with a newly developed dynamic programming (DP) algorithm for the BAP. The proposed algorithm is applied to production lines where the number of machines varies from four up to a hundred. The proposed algorithm is fast because it reduces the volume of computations by rejecting allocations that do not lead to maximization of the line's throughput. Numerical results are also given for large production lines.
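The evaluative-plus-search structure can be sketched generically: a backtracking enumerator standing in for the paper's DP (which additionally prunes allocations that cannot maximize throughput), with any throughput estimator, such as the aggregation method of Lim, Meerkov, and Top, plugged in as a callback:

```python
def best_allocation(n_buffers, n_slots, throughput):
    """Search all allocations of n_buffers over n_slots buffer locations.

    throughput: callable mapping an allocation tuple to an estimated
    line throughput; it plays the role of the evaluative aggregation
    method in the paper. Returns (best_throughput, best_allocation).
    This exhaustive sketch omits the paper's pruning of non-maximizing
    allocations.
    """
    best = (float("-inf"), None)

    def extend(alloc, remaining):
        nonlocal best
        if len(alloc) == n_slots:
            if remaining == 0:          # all buffers must be placed
                t = throughput(tuple(alloc))
                if t > best[0]:
                    best = (t, tuple(alloc))
            return
        for b in range(remaining + 1):  # try every count for the next slot
            extend(alloc + [b], remaining - b)

    extend([], n_buffers)
    return best
```

With a toy objective that rewards balanced buffers, the search returns the even split, matching the intuition that homogeneous lines favor balanced allocations.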

  11. A new memetic algorithm for mitigating tandem automated guided vehicle system partitioning problem

    Science.gov (United States)

    Pourrahimian, Parinaz

    2017-11-01

Automated Guided Vehicle Systems (AGVS) provide the flexibility and automation demanded by Flexible Manufacturing Systems (FMS). However, with the growing concern for responsible management of resource use, it is crucial to manage these vehicles efficiently in order to reduce travel time and control conflicts and congestion. This paper presents the development of a new Memetic Algorithm (MA) for optimizing the partitioning problem of tandem AGVS. MAs employ a Genetic Algorithm (GA) as a global search and apply a local search to bring the solutions to a local optimum point. A new Tabu Search (TS) has been developed and combined with a GA to refine the individuals newly generated by the GA. The aim of the proposed algorithm is to minimize the maximum workload of the system. Finally, the performance of the proposed algorithm is evaluated using Matlab. This study also compared the objective function of the proposed MA with that of the GA. The results showed that the TS, as a local search, significantly improves the objective function of the GA for different system sizes with large and small numbers of zones, by 1.26 on average.

  12. How Schools Can Promote Healthy Development for Newly Arrived Immigrant and Refugee Adolescents: Research Priorities.

    Science.gov (United States)

    McNeely, Clea A; Morland, Lyn; Doty, S Benjamin; Meschke, Laurie L; Awad, Summer; Husain, Altaf; Nashwan, Ayat

    2017-02-01

    The US education system must find creative and effective ways to foster the healthy development of the approximately 2 million newly arrived immigrant and refugee adolescents, many of whom contend with language barriers, limited prior education, trauma, and discrimination. We identify research priorities for promoting the school success of these youth. The study used the 4-phase priority-setting method of the Child Health and Nutrition Research Initiative. In the final stage, 132 researchers, service providers, educators, and policymakers based in the United States were asked to rate the importance of 36 research options. The highest priority research options (range 1 to 5) were: evaluating newcomer programs (mean = 4.44, SD = 0.55), identifying how family and community stressors affect newly arrived immigrant and refugee adolescents' functioning in school (mean = 4.40, SD = 0.56), identifying teachers' major stressors in working with this population (mean = 4.36, SD = 0.72), and identifying how to engage immigrant and refugee families in their children's education (mean = 4.35, SD = 0.62). These research priorities emphasize the generation of practical knowledge that could translate to immediate, tangible benefits for schools. Funders, schools, and researchers can use these research priorities to guide research for the highest benefit of schools and the newly arrived immigrant and refugee adolescents they serve. © 2017, American School Health Association.

  13. A Developed Artificial Bee Colony Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Ye Jin

    2018-04-01

The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is an uncertainty conversion model between a qualitative concept T̃ expressed in natural language and its quantitative expression, integrating probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima by introducing a new selection mechanism, replacing the onlooker bees' search formula, and changing the scout bees' updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.

  14. A multi-method approach to curriculum development for in-service training in China's newly established health emergency response offices.

    Directory of Open Access Journals (Sweden)

    Yadong Wang

To describe an innovative approach for developing and implementing an in-service curriculum in China for staff of the newly established health emergency response offices (HEROs), and that is generalisable to other settings. The multi-method training needs assessment included reviews of the competency domains needed to implement the International Health Regulations (2005) as well as China's policies and emergency regulations. The review, iterative interviews and workshops with experts in government, academia, the military, and with HERO staff were reviewed critically by an expert technical advisory panel. Over 1600 participants contributed to curriculum development. Of the 18 competency domains identified as essential for HERO staff, nine were developed into priority in-service training modules to be conducted over 2.5 weeks. Experts from academia and experienced practitioners prepared and delivered each module through lectures followed by interactive problem-solving exercises and desktop simulations to help trainees apply, experiment with, and consolidate newly acquired knowledge and skills. This study adds to the emerging literature on China's enduring efforts to strengthen its emergency response capabilities since the outbreak of SARS in 2003. The multi-method approach to curriculum development in partnership with senior policy-makers, researchers, and experienced practitioners can be applied in other settings to ensure training is responsive and customized to local needs, resources and priorities. Ongoing curriculum development should reflect international standards and be coupled with the development of appropriate performance support systems at the workplace for motivating staff to apply their newly acquired knowledge and skills effectively and creatively.

  15. A multi-method approach to curriculum development for in-service training in China's newly established health emergency response offices.

    Science.gov (United States)

    Wang, Yadong; Li, Xiangrui; Yuan, Yiwen; Patel, Mahomed S

    2014-01-01

    To describe an innovative approach for developing and implementing an in-service curriculum in China for staff of the newly established health emergency response offices (HEROs), and that is generalisable to other settings. The multi-method training needs assessment included reviews of the competency domains needed to implement the International Health Regulations (2005) as well as China's policies and emergency regulations. The review, iterative interviews and workshops with experts in government, academia, the military, and with HERO staff were reviewed critically by an expert technical advisory panel. Over 1600 participants contributed to curriculum development. Of the 18 competency domains identified as essential for HERO staff, nine were developed into priority in-service training modules to be conducted over 2.5 weeks. Experts from academia and experienced practitioners prepared and delivered each module through lectures followed by interactive problem-solving exercises and desktop simulations to help trainees apply, experiment with, and consolidate newly acquired knowledge and skills. This study adds to the emerging literature on China's enduring efforts to strengthen its emergency response capabilities since the outbreak of SARS in 2003. The multi-method approach to curriculum development in partnership with senior policy-makers, researchers, and experienced practitioners can be applied in other settings to ensure training is responsive and customized to local needs, resources and priorities. Ongoing curriculum development should reflect international standards and be coupled with the development of appropriate performance support systems at the workplace for motivating staff to apply their newly acquired knowledge and skills effectively and creatively.

  16. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open-domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that we have considered.

  17. Search for 'Little Higgs' and reconstruction algorithms developments in Atlas

    International Nuclear Information System (INIS)

    Rousseau, D.

    2007-05-01

This document summarizes developments of the framework and reconstruction algorithms for the ATLAS detector at the LHC. A library of reconstruction algorithms has been developed in an increasingly complex environment. The reconstruction software, originally designed on an optimistic Monte-Carlo simulation, has been confronted with a more detailed 'as-built' simulation. The 'Little Higgs' is an effective theory which can be taken for granted, or used as an opportunity to study heavy resonances. In several cases, these resonances can be detected in original channels like tZ, ZH or WH. (author)

  18. Accumulation of operational history through emulation test to meet proven technology requirement for newly developed I and C technology

    International Nuclear Information System (INIS)

    Yeong Cheol, Shin; Sung Kon, Kang; Han Seong, Son

    2006-01-01

    As new advanced digital I and C technology with potential benefits of higher functionality and better cost effectiveness becomes available in the market, NPP (Nuclear Power Plant) operators are inclined to use the new technology for the construction of new plants and the upgrade of existing plants. However, this new technology poses risks to the NPP operators at the same time. These risks are mainly due to the poor reliability of newly developed technology. KHNP's past experience with new equipment shows many cases of reliability problems, whose consequences include unintended plant trips, lowered acceptance of the new digital technology by the plant I and C maintenance crew, and an increased licensing burden in answering questions from the nuclear regulatory body. Considering that the risk of these failures in nuclear plant operation is far greater than in other industries, nuclear power plant operators want proven technology for I and C systems. This paper presents an approach for the emulation of operational history through which a newly developed technology becomes a proven technology. One of the essential elements of this approach is a feedback scheme of running the new equipment in an emulated environment, gathering equipment failure data, and correcting the design (and test bed). The emulation of the environment covers normal and abnormal events of the new equipment, such as reconfiguration of the control system due to power failure, and plant operation including the full spectrum of credible scenarios in an NPP. Emulation of I and C equipment execution modes includes normal operation, initialization and termination, abnormal operation, hardware maintenance, and maintenance of algorithm/software. A plant-specific simulator is used to create a complete profile of the plant operational conditions that the I and C equipment is to experience in the real plant. Virtual operating crew technology is developed to run the simulator scenarios without involvement of actual operators.
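The feedback scheme described in this abstract (run the equipment in an emulated environment, gather failures, correct the design, repeat) can be sketched as a simple loop. Everything below is an illustrative assumption, not KHNP's actual system; the scenario names and fault model are invented for the sketch.

```python
# Hypothetical sketch of the abstract's feedback scheme: run new I&C
# equipment through emulated scenarios, log exposed fault classes per
# cycle, and "correct the design" by removing them before the next cycle.
SCENARIOS = ["normal_op", "power_failure_reconfig", "init_and_termination",
             "hw_maintenance", "software_update"]

def run_cycle(latent_faults):
    """Return the set of fault classes exposed by one pass over all scenarios."""
    return {s for s in SCENARIOS if s in latent_faults}

def emulate_operational_history(latent_faults, max_cycles=10):
    history = []
    for cycle in range(max_cycles):
        exposed = run_cycle(latent_faults)
        history.append((cycle, sorted(exposed)))
        if not exposed:            # no failures: treat as "proven" technology
            break
        latent_faults -= exposed   # design correction removes exposed faults
    return history

log = emulate_operational_history({"power_failure_reconfig", "software_update"})
```

In this toy run the first cycle exposes both latent faults and the second cycle is failure-free, mirroring the convergence the feedback scheme aims for.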

  19. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed; the output of this site optimization gives the selected number of base station sites ...
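The abstract's selection idea (rank base-station sites by spectral link efficiency and keep the best) can be sketched minimally. The station names, efficiency values, and threshold rule below are illustrative assumptions, not the paper's measured data or its actual optimization program.

```python
# Hypothetical sketch: keep BTS sites whose spectral link efficiency
# (bits/s/Hz) meets a threshold, ranked from best to worst.
bts_efficiency = {"BTS-A": 4.2, "BTS-B": 1.1, "BTS-C": 3.7, "BTS-D": 0.8}

def select_sites(efficiency, threshold=2.0):
    """Return the names of qualifying sites, best efficiency first."""
    kept = [(e, name) for name, e in efficiency.items() if e >= threshold]
    return [name for e, name in sorted(kept, reverse=True)]

selected = select_sites(bts_efficiency)  # → ["BTS-A", "BTS-C"]
```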

  20. Developing and Implementing the Data Mining Algorithms in RAVEN

    International Nuclear Information System (INIS)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-01-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
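The report's theme of a common API behind which different data-mining algorithms plug in can be illustrated with a minimal registry pattern. This is not RAVEN's actual API; the registry, the `mine` entry point, and the toy "mean_center" algorithm are all assumptions made for the sketch.

```python
# Illustrative sketch (not RAVEN's real interface): data-mining algorithms
# registered behind one common entry point, applied to sampled scenario data.
ALGORITHMS = {}

def register(name):
    """Decorator adding an algorithm to the shared registry."""
    def wrap(fn):
        ALGORITHMS[name] = fn
        return fn
    return wrap

@register("mean_center")
def mean_center(samples):
    """Toy 'pattern recognition': summarize scenarios by their centroid."""
    n = len(samples)
    dims = len(samples[0])
    return tuple(sum(s[d] for s in samples) / n for d in range(dims))

def mine(name, samples):
    """Dispatch to a registered data-mining algorithm by name."""
    return ALGORITHMS[name](samples)

centroid = mine("mean_center", [(0.0, 2.0), (2.0, 4.0)])  # → (1.0, 3.0)
```

A real system would register clustering or dimensionality-reduction algorithms behind the same interface, which is the point of an API layer: callers never depend on a specific algorithm's signature.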

  1. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  2. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    Science.gov (United States)

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56–0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (pmachine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC. PMID:24169273
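The c-statistic this abstract reports is the probability that a randomly chosen case (a patient who developed HCC) receives a higher risk score than a randomly chosen control. A minimal pairwise implementation follows; the scores are toy values, not study data.

```python
# Pairwise c-statistic (equivalent to ROC AUC): fraction of case/control
# pairs in which the case outranks the control, with ties counted as 0.5.
def c_statistic(case_scores, control_scores):
    pairs = 0
    concordant = 0.0
    for c in case_scores:
        for k in control_scores:
            pairs += 1
            if c > k:
                concordant += 1.0
            elif c == k:
                concordant += 0.5
    return concordant / pairs

auc = c_statistic([0.9, 0.7, 0.4], [0.3, 0.4, 0.2])  # → 8.5/9 ≈ 0.944
```

A value of 0.5 means the scores carry no discriminating information, which is why the abstract's 0.61 vs 0.64 comparison represents only modest discrimination for both models.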

  3. CVFEM for Multiphase Flow with Disperse and Interface Tracking, and Algorithms Performances

    Directory of Open Access Journals (Sweden)

    M. Milanez

    2015-12-01

    A Control-Volume Finite-Element Method (CVFEM) is newly formulated within Eulerian and spatial averaging frameworks for effective simulation of disperse transport, deposit distribution and interface tracking. Their algorithms are implemented alongside an existing continuous-phase algorithm. Flow terms are newly implemented for a control volume (CV) fixed in space, and the CVs' equations are assembled based on a finite element method (FEM). Upon impacting stationary and moving boundaries, the disperse phase changes its phase, and the solver triggers identification of CVs with excess deposit and of their neighboring CVs for its accommodation in front of an interface. The solver then updates boundary conditions on the moving interface as well as domain conditions on the accumulating deposit. Corroboration of the algorithms' performances is conducted on illustrative simulations against novel and existing Eulerian and Lagrangian solutions, such as (i) other, i.e. external, methods with analytical and physical experimental formulations, and (ii) characteristics internal to the CVFEM.

  4. Dynamic gradient descent learning algorithms for enhanced empirical modeling of power plants

    International Nuclear Information System (INIS)

    Parlos, A.G.; Atiya, Amir; Chong, K.T.

    1991-01-01

    A newly developed dynamic gradient descent-based learning algorithm is used to train a recurrent multilayer perceptron network for use in empirical modeling of power plants. The two main advantages of the proposed learning algorithm are its ability to consider past error gradient information for future use and the two forward passes associated with its implementation, instead of the one forward and one backward pass of the backpropagation algorithm. The latter advantage results in computational time savings because both passes can be performed simultaneously. The dynamic learning algorithm is used to train a hybrid feedforward/feedback neural network, a recurrent multilayer perceptron, which was previously found to exhibit good interpolation and extrapolation capabilities in modeling nonlinear dynamic systems. One of the drawbacks, however, of the previously reported work has been the long training times associated with accurate empirical models. The enhanced learning capabilities provided by the dynamic gradient descent-based learning algorithm are demonstrated by a case study of a steam power plant. The number of iterations required for accurate empirical modeling has been reduced from tens of thousands to hundreds, thus significantly expediting the learning process.
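For readers unfamiliar with the baseline the abstract compares against, here is plain gradient descent fitting a one-parameter model on toy data. This is only the generic update rule, not the authors' dynamic two-forward-pass algorithm; the data, learning rate, and epoch count are illustrative assumptions.

```python
# Generic gradient descent on mean squared error for y = w * x.
# True slope is 2; training should drive w toward it.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def train(w=0.0, lr=0.05, epochs=200):
    for _ in range(epochs):
        # d/dw of mean((w*x - y)^2) over the dataset
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

w = train()  # converges near 2.0
```

The paper's contribution is precisely about cutting the number of such iterations (from tens of thousands to hundreds) for recurrent networks, where each gradient additionally depends on past network states.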

  5. Development of target-tracking algorithms using neural network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whaoan; Yoon, Sook; Baek, Seong Hyun; Lee, Myung Jae [Chonbuk National University, Chonjoo (Korea)

    1998-04-01

    The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly, to protect workers from high radiation environments. Such applications require complete stability of the robot system, so precise tracking of the robot is essential for the whole system. This research aims to accomplish that goal by developing appropriate algorithms for remote-control robot systems. A neural network tracking system is designed and experimented with to trace a robot endpoint. This model is aimed at utilizing the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks, for position detection and prediction. Tracking algorithms are developed and experimented with for the two models. Results of the experiments show that both models are promising as real-time target-tracking systems for remote-control robot systems. (author). 10 refs., 47 figs.

  6. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) from the mathematical calculation and that from the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.
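The Fussell-Vesely importance measure used here for validation has a simple form under the rare-event approximation: for a given component, it is the summed probability of minimal cut sets containing that component, divided by the summed probability of all cut sets. The cut sets and probabilities below are toy values, not the study's plant data.

```python
# Fussell-Vesely importance under the rare-event approximation.
# Each entry: (set of basic events in a minimal cut set, cut-set probability).
cut_sets = [({"pump_A", "pump_B"}, 1e-4),
            ({"valve_C"}, 5e-5),
            ({"pump_A", "valve_C"}, 2e-6)]

def fussell_vesely(component):
    """Fraction of total risk contributed by cut sets containing `component`."""
    total = sum(p for _, p in cut_sets)
    with_comp = sum(p for cs, p in cut_sets if component in cs)
    return with_comp / total

fv_pump_a = fussell_vesely("pump_A")  # (1e-4 + 2e-6) / 1.52e-4 ≈ 0.671
```

Comparing such a hand calculation against the database algorithm's output is exactly the validation step the abstract describes.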

  7. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

    For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diary. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) from the mathematical calculation and that from the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  8. Development of an inter-layer solute transport algorithm for SOLTR computer program. Part 1. The algorithm

    International Nuclear Information System (INIS)

    Miller, I.; Roman, K.

    1979-12-01

    In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifier to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport

  9. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    Energy Technology Data Exchange (ETDEWEB)

    Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-29

    The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested) especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with

  10. Association Between Manual Loading and Newly Developed Carpal Tunnel Syndrome in Subjects With Physical Disabilities: A Follow-Up Study.

    Science.gov (United States)

    Lin, Yen-Nung; Chiu, Chun-Chieh; Huang, Shih-Wei; Hsu, Wen-Yen; Liou, Tsan-Hon; Chen, Yi-Wen; Chang, Kwang-Hwa

    2017-10-01

    To identify the association between body composition and newly developed carpal tunnel syndrome (CTS) and to search for the best probabilistic cutoff value of associated factors to predict subjects with physical disabilities developing new CTS. Longitudinal. University-affiliated medical center. Subjects with physical disabilities (N=47; mean age ± SD, 42.1±7.7y). Not applicable. Median and ulnar sensory nerve conduction velocity (SNCV) were measured at the initial and follow-up tests (interval >2y). Total and regional body composition were measured with dual-energy x-ray absorptiometry at the initial test. Leg lean tissue percentage was calculated to delineate each participant's manual loading degree during locomotion. Leg lean tissue percentage is the lean tissue mass of both legs divided by body weight. Based on median SNCV changes, we divided all participants into 3 groups: subjects with bilateral CTS (median SNCV value normative ulnar SNCV value >37.8m/s) in the initial test (n=10), subjects with newly developed CTS in the follow-up test (n=8), and subjects without additional CTS in the follow-up test (n=27). Eight of 35 subjects not having bilateral CTS initially developed new CTS (8.8% per year; mean follow-up period, 2.6y). Leg lean tissue percentage was associated with the probability of newly developed CTS (adjusted odds ratio, .64; P12% were less likely to have developed new CTS at the follow-up test (sensitivity, .75; specificity, .85; area under the curve, .88; Pphysical disabilities. Therefore, a preventive program for those subjects at risk can start early. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  11. High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures

    KAUST Repository

    Ltaief, Hatem; Luszczek, Piotr R.; Dongarra, Jack

    2013-01-01

    dependence translation layer that maps the general algorithm with column-major data layout into the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures

  12. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
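The horizontal-translation idea in this abstract can be sketched in a few lines: shift each new recession segment in time so its vertex (highest value) lands on the line connecting the preceding segment's measurement points. This is a simplified illustration, not the published VBA code; the segment data are invented.

```python
# Simplified sketch of the trigonometry/translation idea behind MRC building.
# Segments are lists of (time, value) points with values declining over time.
def time_at_value(segment, value):
    """Linearly interpolate the time at which a declining segment crosses `value`."""
    for (t0, h0), (t1, h1) in zip(segment, segment[1:]):
        if h0 >= value >= h1:
            return t0 + (h0 - value) * (t1 - t0) / (h0 - h1)
    raise ValueError("value outside segment range")

def translate(master, new_segment):
    """Shift `new_segment` horizontally so its vertex lies on `master`."""
    vertex_t, vertex_h = new_segment[0]  # highest recorded value comes first
    shift = time_at_value(master, vertex_h) - vertex_t
    return [(t + shift, h) for t, h in new_segment]

master = [(0.0, 10.0), (5.0, 5.0), (10.0, 2.0)]
overlapped = translate(master, [(0.0, 4.0), (4.0, 1.0)])
```

Repeating this for every succeeding segment yields the overlapping family of curves from which the master recession curve is then fitted.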

  13. Dedicated algorithm and software for the integrated analysis of AC and DC electrical outputs of piezoelectric vibration energy harvesters

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Eum [Catholic University of Daegu, Gyeongsan (Korea, Republic of)

    2014-10-15

    DC electrical outputs of a piezoelectric vibration energy harvester with nonlinear rectifying circuitry can hardly be obtained either by any mathematical models developed so far or by finite element analysis. To address the issue, this work used an equivalent electrical circuit model and developed a new algorithm to efficiently identify the relevant circuit parameters of arbitrarily shaped cantilevered piezoelectric energy harvesters. The developed algorithm was then realized as a dedicated software module by adopting the ANSYS finite element analysis software for parameter identification and the Tcl/Tk programming language for a graphical user interface and linkage with ANSYS. For verification, various AC electrical outputs from the developed software were compared with those from traditional finite element analysis. DC electrical outputs through rectifying circuitry were also examined for varying values of the smoothing capacitance and load resistance.

  14. Dedicated algorithm and software for the integrated analysis of AC and DC electrical outputs of piezoelectric vibration energy harvesters

    International Nuclear Information System (INIS)

    Kim, Jae Eum

    2014-01-01

    DC electrical outputs of a piezoelectric vibration energy harvester with nonlinear rectifying circuitry can hardly be obtained either by any mathematical models developed so far or by finite element analysis. To address the issue, this work used an equivalent electrical circuit model and developed a new algorithm to efficiently identify the relevant circuit parameters of arbitrarily shaped cantilevered piezoelectric energy harvesters. The developed algorithm was then realized as a dedicated software module by adopting the ANSYS finite element analysis software for parameter identification and the Tcl/Tk programming language for a graphical user interface and linkage with ANSYS. For verification, various AC electrical outputs from the developed software were compared with those from traditional finite element analysis. DC electrical outputs through rectifying circuitry were also examined for varying values of the smoothing capacitance and load resistance.

  15. Texas Medication Algorithm Project: development and feasibility testing of a treatment algorithm for patients with bipolar disorder.

    Science.gov (United States)

    Suppes, T; Swann, A C; Dennehy, E B; Habermacher, E D; Mason, M; Crismon, M L; Toprac, M G; Rush, A J; Shon, S P; Altshuler, K Z

    2001-06-01

    Use of treatment guidelines for treatment of major psychiatric illnesses has increased in recent years. The Texas Medication Algorithm Project (TMAP) was developed to study the feasibility and process of developing and implementing guidelines for bipolar disorder, major depressive disorder, and schizophrenia in the public mental health system of Texas. This article describes the consensus process used to develop the first set of TMAP algorithms for the Bipolar Disorder Module (Phase 1) and the trial testing the feasibility of their implementation in inpatient and outpatient psychiatric settings across Texas (Phase 2). The feasibility trial answered core questions regarding implementation of treatment guidelines for bipolar disorder. A total of 69 patients were treated with the original algorithms for bipolar disorder developed in Phase 1 of TMAP. Results support that physicians accepted the guidelines, followed recommendations to see patients at certain intervals, and utilized sequenced treatment steps differentially over the course of treatment. While improvements in clinical symptoms (24-item Brief Psychiatric Rating Scale) were observed over the course of enrollment in the trial, these conclusions are limited by the fact that physician volunteers were utilized for both treatment and ratings, and there was no control group. Results from Phases 1 and 2 indicate that it is possible to develop and implement a treatment guideline for patients with a history of mania in public mental health clinics in Texas. TMAP Phase 3, a recently completed larger and controlled trial assessing the clinical and economic impact of treatment guidelines and patient and family education in the public mental health system of Texas, improves upon this methodology.

  16. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    Science.gov (United States)

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. The risk of newly developed visual impairment in treated normal-tension glaucoma: 10-year follow-up.

    Science.gov (United States)

    Choi, Yun Jeong; Kim, Martha; Park, Ki Ho; Kim, Dong Myung; Kim, Seok Hwan

    2014-12-01

    To investigate the risk and risk factors for newly developed visual impairment in treated patients with normal-tension glaucoma (NTG) followed for 10 years. Patients with NTG who did not have visual impairment at the initial diagnosis and had undergone intraocular pressure (IOP)-lowering treatment for more than 7 years were included on the basis of a retrospective chart review. Visual impairment was defined as either low vision (0.05 [20/400] ≤ visual acuity (VA) visual field (VF) visual impairment, Kaplan-Meier survival analysis and generalized linear mixed effects models were utilized. During the 10.8-year mean follow-up period, 20 eyes of 16 patients were diagnosed with visual impairment (12 eyes with low vision, 8 with blindness) among 623 eyes of 411 patients. The cumulative risk of visual impairment in at least one eye was 2.8% at 10 years and 8.7% at 15 years. The risk factors for visual impairment from treated NTG were worse VF mean deviation (MD) at diagnosis and a longer follow-up period. The risk of newly developed visual impairment in the treated patients with NTG was relatively low. Worse VF MD at diagnosis and a longer follow-up period were associated with development of visual impairment. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  18. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) that is currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists that include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  19. Value of a newly sequenced bacterial genome

    DEFF Research Database (Denmark)

    Barbosa, Eudes; Aburjaile, Flavia F; Ramos, Rommel Tj

    2014-01-01

    and annotation will not be undertaken. It is important to know what is lost when we settle for a draft genome and to determine the "scientific value" of a newly sequenced genome. This review addresses the expected impact of newly sequenced genomes on antibacterial discovery and vaccinology. Also, it discusses...... heightened expectations that NGS would boost antibacterial discovery and vaccine development. Although many possible drug and vaccine targets have been discovered, the success rate of genome-based analysis has remained below expectations. Furthermore, NGS has had consequences for genome quality, resulting...

  20. Underground trials on a newly developed EDW 150-2 L unit

    Energy Technology Data Exchange (ETDEWEB)

    Wille, G.; Klimek, K.H.

    1982-01-01

    Coal-getting from medium thick coalbeds (> 1.7 m) requires high-performance shearer-loaders. Machine length and adjustability have to be such as to permit smooth cutting through geological faults. Furthermore, they should be suitable for cutting out niches for the AFC drives so that gateroads can be driven along with the face line. The newly developed EDW 150-2 L shearer-loader meets these expectations after various mechanical and electrical improvements. The unit proved its usefulness from the beginning and in the most difficult geological conditions, where other shearer-loaders normally available for this range of coalbed thickness would mostly have failed. The multiple requirements and disturbances have led to a number of separate improvements which together contribute to a basic improvement of the machine concept as far as applications, operational flexibility and safety are concerned.

  1. Development of a MELCOR self-initialization algorithm for boiling water reactors

    International Nuclear Information System (INIS)

    Chien, C.S.; Wang, S.J.; Cheng, S.K.

    1996-01-01

    The MELCOR code, developed by Sandia National Laboratories, is suitable for calculating source terms and simulating severe accident phenomena of nuclear power plants. Prior to simulating a severe accident transient with MELCOR, the initial steady-state conditions must be generated in advance. The current MELCOR users' manuals do not provide a self-initialization procedure, so users have had to adjust the initial conditions themselves through a trial-and-error approach. A MELCOR self-initialization algorithm for boiling water reactor plants has been developed, which eliminates the tedious trial-and-error procedures and improves the simulation accuracy. This algorithm automatically adjusts the important plant variables, such as the dome pressure, downcomer level, and core flow rate, to the desired conditions. It is implemented through input with control functions provided in MELCOR. The reactor power and feedwater temperature are fed as input data. The initialization of the full-power conditions of the Kuosheng nuclear power station is cited as an example. These initial conditions are generated successfully with the developed algorithm. The generated initial conditions can be stored in a restart file and used for transient analysis. The methodology in this study improves the accuracy and consistency of transient calculations. Meanwhile, the algorithm provides all MELCOR users an easy and correct method for establishing the initial conditions.
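
The control-function idea described above can be illustrated with a toy sketch. This is hypothetical throughout: the variable names, target values and gains are invented, a trivial first-order plant stands in for the reactor model, and MELCOR's actual control-function input syntax is not reproduced. Each plant variable is nudged toward its target by proportional feedback until a steady state is reached.

```python
# Hypothetical sketch only: targets, gains and the plant model are invented.

def self_initialize(targets, gains, steps=2000, dt=0.1):
    """Drive each plant variable to its target with proportional feedback."""
    state = {k: 0.5 * v for k, v in targets.items()}  # start off-target
    for _ in range(steps):
        for k in state:
            error = targets[k] - state[k]
            state[k] += gains[k] * error * dt  # control-function style nudge
    return state

# illustrative full-power steady-state conditions (made-up numbers)
targets = {"dome_pressure_MPa": 7.0, "downcomer_level_m": 13.5, "core_flow_kg_s": 7700.0}
gains = {k: 1.0 for k in targets}
steady = self_initialize(targets, gains)
```

The converged `steady` dictionary plays the role of the initial conditions that would be written to a restart file.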

  2. A Newly Improved Modified Method Development and Validation of Bromofenac Sodium Sesquihydrate in Bulk Drug Manufacturing

    OpenAIRE

    Sunil Kumar Yelamanchi V; Useni Reddy Mallu; I. V Kasi Viswanath; D. Balasubramanyam; G. Narshima Murthy

    2016-01-01

    The main objective of this study was to develop a simple, efficient, specific, precise and accurate newly improved modified reverse-phase high-performance liquid chromatographic purity (related substance) method for the bromofenac sodium sesquihydrate active pharmaceutical ingredient dosage form. Validation of an analytical method is the confirmation by examination, and the provision of objective evidence, that the particular requirements for a specific intended use are fulfilled as per ICH, USP...

  3. Nature-inspired Cuckoo Search Algorithm for Side Lobe Suppression in a Symmetric Linear Antenna Array

    Directory of Open Access Journals (Sweden)

    K. N. Abdul Rani

    2012-09-01

    Full Text Available In this paper, we propose a newly modified cuckoo search (MCS) algorithm, integrated with the Roulette wheel selection operator and an inertia weight controlling the search ability, for synthesizing symmetric linear array geometry with minimum side lobe level (SLL) and/or nulls control. The basic cuckoo search (CS) algorithm is primarily based on the natural obligate brood parasitic behavior of some cuckoo species in combination with the Levy flight behavior of some birds and fruit flies. The CS metaheuristic approach is straightforward and capable of effectively solving general N-dimensional, linear and nonlinear optimization problems. The array geometry synthesis is first formulated as an optimization problem with the goal of SLL suppression and/or prescribed null placement in certain directions, and then solved by the new MCS algorithm for the optimum element or isotropic radiator locations in the azimuth plane or xy-plane. The study also focuses on the four internal parameters of the MCS algorithm, specifically on their implicit effects in the array synthesis. The optimal inter-element spacing solutions obtained by the MCS optimizer are validated through comparisons with the standard CS optimizer and the conventional array within the uniform and the Dolph-Chebyshev envelope patterns using MATLAB™. Finally, we also compared the fine-tuned MCS algorithm with two popular evolutionary algorithm (EA) techniques, particle swarm optimization (PSO) and genetic algorithms (GA).
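
The basic CS mechanics named above, Levy flights plus abandonment of the worst nests, can be sketched as follows. This is not the authors' MCS code: it omits the Roulette wheel and inertia-weight modifications, and a sphere function stands in for the side-lobe-level cost.

```python
import math
import random

def levy_step(beta=1.5):
    """One symmetric Levy-stable step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(cost, dim=4, n_nests=15, pa=0.25, iters=300, lo=-1.0, hi=1.0):
    random.seed(1)
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fitness = [cost(x) for x in nests]
    for _ in range(iters):
        for i in range(n_nests):
            # new candidate solution by a Levy flight around nest i
            cand = [min(hi, max(lo, x + 0.01 * levy_step())) for x in nests[i]]
            j = random.randrange(n_nests)  # compare against a random nest
            if cost(cand) < fitness[j]:
                nests[j], fitness[j] = cand, cost(cand)
        # abandon a fraction pa of the worst nests (host discovers the egg)
        order = sorted(range(n_nests), key=lambda i: fitness[i], reverse=True)
        for i in order[:int(pa * n_nests)]:
            nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
            fitness[i] = cost(nests[i])
    best = min(range(n_nests), key=lambda i: fitness[i])
    return nests[best], fitness[best]

sphere = lambda x: sum(v * v for v in x)  # stand-in for the SLL cost
_, best_cost = cuckoo_search(sphere)
```

In an array-synthesis setting, the decision vector would hold the inter-element spacings and the cost would evaluate the radiation pattern's side lobes.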

  4. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Until recently, the algorithms for the numerical-analytical boundary elements method had been implemented as programs written in the MATLAB environment language. Each program had a local character, i.e. was used to solve a particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out "manually" and was therefore time-consuming. The research was aimed at a reasoned choice of programming language for developing a new CAD system that implements the algorithm of the numerical-analytical boundary elements method and provides visualization tools for the initial objects and calculation results. The research conducted shows that among the wide variety of programming languages, the most efficient one for developing a CAD system employing the numerical-analytical boundary elements method algorithm is Java. This language provides tools not only for the development of the calculating part of the CAD system, but also for building the graphic interface for constructing geometrical models and interpreting calculated results.

  5. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    Science.gov (United States)

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

    Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. To investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in young children in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these, for presence or absence and terminology, with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF, and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  6. Man/machine interface algorithm for advanced delayed-neutron signal characterization system

    International Nuclear Information System (INIS)

    Gross, K.C.

    1985-01-01

    The present failed-element rupture detector (FERD) at Experimental Breeder Reactor II (EBR-II) consists of a single bank of delayed-neutron (DN) detectors at a fixed transit time from the core. Plans are currently under way to upgrade the FERD in 1986 and provide advanced DN signal characterization capability that is embodied in an equivalent-recoil-area (ERA) meter. The new configuration will make available to the operator a wealth of quantitative diagnostic information related to the condition and dynamic evolution of a fuel breach. The diagnostic parameters will include a continuous reading of the ERA value for the breach; the transit time, T_tr, for DN emitters traveling from the core to the FERD; and the isotopic holdup time, T_h, for the source. To enhance the processing, interpretation, and display of these parameters to the reactor operator, a man/machine interface (MMI) algorithm has been developed to run in the background on EBR-II's data acquisition system (DAS). The purpose of this paper is to describe the features and implementation of this newly developed MMI algorithm.

  7. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
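
The graph-based restriction on information flow described above can be sketched as follows. The antibiotic-regimen fitness model is not reproduced here; a cycle graph and a OneMax-style toy fitness stand in for it. Individuals occupy graph vertices and may only breed with graph neighbours, which slows the spread of genes through the population and preserves diversity.

```python
import random

# Illustrative sketch (not the authors' model): a steady-state EA on a
# cycle graph with local replacement of the worse parent.

def graph_based_ea(n=20, length=16, gens=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]
    fit = lambda ind: sum(ind)  # toy OneMax fitness
    for _ in range(gens):
        i = rng.randrange(n)
        j = (i + rng.choice([-1, 1])) % n  # mate only with a cycle neighbour
        # uniform crossover of the two neighbours plus a one-bit mutation
        child = [rng.choice(pair) for pair in zip(pop[i], pop[j])]
        child[rng.randrange(length)] ^= 1
        # child replaces the worse of its two parents (local elitism)
        worse = i if fit(pop[i]) <= fit(pop[j]) else j
        if fit(child) >= fit(pop[worse]):
            pop[worse] = child
    return max(sum(ind) for ind in pop)

best = graph_based_ea()
```

Denser graphs (e.g. complete graphs) mix genes faster and converge sooner; sparse graphs such as the cycle keep more distinct lineages alive, which is the diversity-control effect studied in the paper.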

  8. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm describing the object's operation. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure corresponds to the structure of the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are variables bound by the operators. The language of algorithmic networks is expressive: the algorithms it can represent cover essentially the class of all arbitrary algorithms. Existing systems for automated modeling based on algorithmic networks mainly use operators working with real numbers. Although this reduces their power, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, monitoring based on analysis of gaps and terms in the graphs offers no analysis predicting the execution of a schedule. The library described here is designed to build such predictive models: specifying the source data yields a set of projections, from which one is chosen and taken as the new plan.

  9. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line array detector. The recent development of the X-ray flat panel detector has made fast CT imaging feasible and practical. This paper therefore describes the arrangement of a new detection system, which uses the existing high-resolution (127 μm pixel size) flat panel detector at MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat-panel-detector-based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. The project is therefore divided into two major tasks: firstly, to develop the image reconstruction algorithm, and secondly, to integrate the X-ray imaging components into one CT system. An image reconstruction algorithm using the filtered back-projection method is developed and compared to other techniques. MATLAB is the tool used for the simulations and computations in this project. (Author)

  10. Discovery of new natural products by application of X-hitting, a novel algorithm for automated comparison of full UV-spectra, combined with structural determination by NMR spectroscopy

    DEFF Research Database (Denmark)

    Larsen, Thomas Ostenfeld; Petersen, Bent O.; Duus, Jens Øllgaard

    2005-01-01

    X-hitting, a newly developed algorithm for automated comparison of UV data, has been used for the tracking of two novel spiro-quinazoline metabolites, lapatins A (1) and B (2), in a screening study targeting quinazolines. The structures of 1 and 2 were elucidated by analysis of spectroscopic data...

  11. A pencil beam algorithm for helium ion beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Fuchs, Hermann; Stroebele, Julia; Schreiner, Thomas; Hirtl, Albert; Georg, Dietmar [Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University of Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria); Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria); PEG MedAustron, 2700 Wiener Neustadt (Austria); Department of Nuclear Medicine, Medical University of Vienna, 1090 Vienna (Austria)]

    2012-11-15

    Purpose: To develop a flexible pencil beam algorithm for helium ion beam therapy. Dose distributions were calculated using the newly developed pencil beam algorithm and validated using Monte Carlo (MC) methods. Methods: The algorithm was based on the established theory of fluence-weighted elemental pencil beam (PB) kernels. Using a new real-time splitting approach, a minimization routine selects the optimal shape for each sub-beam. Dose depositions along the beam path were determined using a look-up table (LUT). Data for LUT generation were derived from MC simulations in water using GATE 6.1. For materials other than water, dose depositions were calculated by the algorithm using water-equivalent depth scaling. Lateral beam spreading caused by multiple scattering was accounted for by implementing a non-local scattering formula developed by Gottschalk. A new nuclear correction was modelled using a Voigt function and implemented by a LUT approach. Validation simulations were performed using a phantom filled with homogeneous materials or heterogeneous slabs of up to 3 cm. The beams were incident perpendicular to the phantom's surface with initial particle energies ranging from 50 to 250 MeV/A and a total number of 10^7 ions per beam. For comparison, special evaluation software was developed to calculate the gamma indices for the dose distributions. Results: In homogeneous phantoms, maximum range deviations between PB and MC of less than 1.1% and differences in the width of the distal energy falloff of the Bragg peak from 80% to 20% of less than 0.1 mm were found. Heterogeneous phantoms using layered slabs satisfied a γ-index criterion of 2%/2 mm of the local value except for some single voxels. For more complex phantoms using laterally arranged bone-air slabs, the γ-index criterion was exceeded in some areas, giving a maximum γ-index of 1.75, and 4.9% of the voxels showed γ-index values larger than one.
The calculation precision of the
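
The look-up-table approach with water-equivalent depth scaling described in the Methods can be sketched as follows. The LUT values here are invented placeholders, not the validated GATE-derived data: the dose at a geometric depth in a non-water material is read from the water LUT at that depth scaled by the material's relative stopping power.

```python
import bisect

# Toy water depth-dose LUT: (depth_cm, relative dose) with a
# Bragg-peak-like shape. Numbers are illustrative only.
WATER_LUT = [(0.0, 0.3), (5.0, 0.35), (10.0, 0.45), (14.0, 0.7),
             (15.0, 1.0), (15.5, 0.4), (16.0, 0.05)]

def lut_dose(depth_cm):
    """Linearly interpolate the water LUT at a given depth."""
    xs = [d for d, _ in WATER_LUT]
    if depth_cm <= xs[0]:
        return WATER_LUT[0][1]
    if depth_cm >= xs[-1]:
        return WATER_LUT[-1][1]
    i = bisect.bisect_right(xs, depth_cm)
    (x0, y0), (x1, y1) = WATER_LUT[i - 1], WATER_LUT[i]
    return y0 + (y1 - y0) * (depth_cm - x0) / (x1 - x0)

def dose_in_material(depth_cm, relative_stopping_power):
    """Water-equivalent depth scaling: denser material uses up range faster."""
    return lut_dose(depth_cm * relative_stopping_power)

peak_water = lut_dose(15.0)
peak_bone_like = dose_in_material(10.0, 1.5)  # 10 cm at RSP 1.5 ~ 15 cm water
```

With an RSP of 1.5, the Bragg peak that sits at 15 cm in water appears at a geometric depth of 10 cm, which is the pull-back effect the scaling captures.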

  12. A Newly Developed Nested PCR Assay for the Detection of Helicobacter pylori in the Oral Cavity.

    Science.gov (United States)

    Ismail, Hawazen; Morgan, Claire; Griffiths, Paul; Williams, John; Jenkins, Gareth

    2016-01-01

    To develop a new nested polymerase chain reaction (PCR) assay for identifying Helicobacter pylori DNA from dental plaque. H. pylori is one of the most common chronic bacterial pathogens in humans. The accurate detection of this organism is essential for proper patient management and for the eradication of the bacteria following treatment. Forty-nine patients (24 males and 25 females; mean age: 51; range, 19 to 94 y) were investigated for the presence of H. pylori in dental plaque by single-step PCR and nested PCR and in the stomach by single-step PCR, nested PCR, and histologic examination. The newly developed nested PCR assay identified H. pylori DNA in gastric biopsies of 18 patients who were histologically classified as H. pylori-positive and 2 additional biopsies of patients who were H. pylori-negative by histologic examination (20/49; 40.8%). Dental plaque samples collected before and after endoscopy from the 49 patients revealed that single-step PCR did not detect H. pylori but nested PCR was able to detect H. pylori DNA in 40.8% (20/49) patients. Nested PCR gave a higher detection rate (40.8%, 20/49) than that of histology (36.7%, 18/49) and single-step PCR. When nested PCR results were compared with histology results there was no significant difference between the 2 methods. Our newly developed nested PCR assay is at least as sensitive as histology and may be useful for H. pylori detection in patients unfit for endoscopic examination.

  13. Characteristics of bread prepared from wheat flours blended with various kinds of newly developed rice flours.

    Science.gov (United States)

    Nakamura, S; Suzuki, K; Ohtsubo, K

    2009-04-01

    Characteristics of bread prepared from wheat flour blended with the flours of various kinds of newly developed rice cultivars were investigated. The quality of bread made from wheat flour blended with rice flour has been reported to be inferior to that of 100% wheat flour bread. To improve its quality, we searched among various kinds of newly developed rice cultivars for new-characteristic rice flours to blend with wheat flour for bread preparation. The most suitable new-characteristic rices were a combination of purple waxy rice, high-amylose rice, and sugary rice. The specific volume of the bread from the combination of wheat and these 3 kinds of rice flours was higher (3.93) than that of the traditional wheat/rice bread (3.58). We adopted a novel method, the continuous progressive compression test, to measure the physical properties of the dough and the bread in addition to the sensory evaluation. As a result of the selection of the most suitable rice cultivars and their blending ratio with wheat flour, we could develop a novel wheat/rice bread whose loaf volume, physical properties, and taste are acceptable and resistant to firming even 4 d after bread preparation. To increase the ratio of rice to wheat, we tried to add part of the rice as cooked rice grains. The specific volume and qualities of the bread were maintained well although the rice content of total flour increased from 30% to 40%.

  14. Development of glucose-responsive 'smart' insulin systems.

    Science.gov (United States)

    Rege, Nischay K; Phillips, Nelson F B; Weiss, Michael A

    2017-08-01

    The complexity of modern insulin-based therapy for type I and type II diabetes mellitus and the risks associated with excursions in blood-glucose concentration (hyperglycemia and hypoglycemia) have motivated the development of 'smart insulin' technologies (glucose-responsive insulin, GRI). Such analogs or delivery systems are entities that provide insulin activity proportional to the glycemic state of the patient without external monitoring by the patient or healthcare provider. The present review describes the relevant historical background to modern GRI technologies and highlights three distinct approaches: coupling of continuous glucose monitoring (CGM) to delivery devices (algorithm-based 'closed-loop' systems), glucose-responsive polymer encapsulation of insulin, and molecular modification of insulin itself. Recent advances in GRI research utilizing each of the three approaches are illustrated; these include newly developed algorithms for CGM-based insulin delivery systems, glucose-sensitive modifications of existing clinical analogs, newly developed hypoxia-sensitive polymer matrices, and polymer-encapsulated, stem-cell-derived pancreatic β cells. Although GRI technologies have yet to be perfected, the recent advances across several scientific disciplines that are described in this review have provided a path towards their clinical implementation.

  15. A filtered backprojection algorithm with characteristics of the iterative Landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
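
The connection between an FBP window and Landweber iteration can be sketched with standard spectral filter-factor theory; this is the general textbook relation, not necessarily the paper's specific window function. After k Landweber iterations with relaxation parameter a, the effective gain applied to a singular component with singular value s is 1 − (1 − a·s²)^k: well-conditioned components are passed almost fully, while poorly conditioned (noise-dominated) components stay damped, and k plays the role of a regularization parameter.

```python
# Landweber filter factor after k iterations (general theory, stable for
# 0 < a*s^2 < 2). An FBP window can be built to reproduce these gains.

def landweber_window(s, a, k):
    return 1.0 - (1.0 - a * s * s) ** k

small = landweber_window(0.1, 1.0, 10)  # low singular value: heavily damped
large = landweber_window(1.0, 1.0, 10)  # high singular value: fully passed
```

Increasing k raises the gain on every component, which is the analytic counterpart of running more Landweber iterations.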

  16. Development of a Thermal Equilibrium Prediction Algorithm

    International Nuclear Information System (INIS)

    Aviles-Ramos, Cuauhtemoc

    2002-01-01

    A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)
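
The prediction idea, extrapolating the equilibrium voltage from early-time data, can be illustrated under a simplifying assumption; the report's exact-solution, multi-parameter estimator is not reproduced here. If the sensor voltage relaxes as a single exponential, V(t) = V_inf + (V0 − V_inf)·exp(−t/τ), then three equally spaced early samples determine the asymptote by Aitken extrapolation.

```python
import math

# Illustrative sketch: Aitken extrapolation of a single-exponential
# approach to equilibrium from three equally spaced samples.

def predict_equilibrium(v0, v1, v2):
    denom = v0 + v2 - 2.0 * v1
    if abs(denom) < 1e-15:
        raise ValueError("samples do not show exponential relaxation")
    return (v0 * v2 - v1 * v1) / denom

# synthetic calorimeter trace: equilibrium at 4.2 (arbitrary units), tau = 3 h
V = lambda t: 4.2 + (1.0 - 4.2) * math.exp(-t / 3.0)
pred = predict_equilibrium(V(0.5), V(1.0), V(1.5))  # early-time data only
```

Here the prediction uses samples taken well before equilibrium (t up to 1.5 h versus τ = 3 h), mirroring the report's use of only the first 20% of the measurement time.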

  17. Developed algorithm for the application of the British method of concrete

    African Journals Online (AJOL)

    t-iyke

    Most of the methods of concrete mix design developed over the years were geared towards manual approach. ... Key words: Concrete mix design; British method; Manual Approach; Algorithm. ..... Statistics for Science and Engineering.

  18. Establishment probability in newly founded populations

    Directory of Open Access Journals (Sweden)

    Gusset Markus

    2012-06-01

    Full Text Available Abstract Background Establishment success in newly founded populations relies on reaching the established phase, which is defined by characteristic fluctuations of the population’s state variables. Stochastic population models can be used to quantify the establishment probability of newly founded populations; however, so far no simple but robust method for doing so existed. To determine a critical initial number of individuals that need to be released to reach the established phase, we used a novel application of the “Wissel plot”, where –ln(1 – P0(t)) is plotted against time t. This plot is based on the equation P0(t) = 1 – c1·e^(–ω1·t), which relates the probability of extinction by time t, P0(t), to two constants: c1 describes the probability of a newly founded population reaching the established phase, whereas ω1 describes the population’s probability of extinction per short time interval once established. Results For illustration, we applied the method to a previously developed stochastic population model of the endangered African wild dog (Lycaon pictus). A newly founded population reaches the established phase if the intercept of the (extrapolated) linear part of the “Wissel plot” with the y-axis, which is –ln(c1), is negative. For wild dogs in our model, this is the case if a critical initial number of four packs, consisting of eight individuals each, is released. Conclusions The method we present to quantify the establishment probability of newly founded populations is generic, and inferences thus are transferable to other systems across the field of conservation biology. In contrast to other methods, our approach disaggregates the components of a population’s viability by distinguishing establishment from persistence.
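
The "Wissel plot" construction can be reproduced numerically with made-up values of c1 and ω1 for illustration: plotting −ln(1 − P0(t)) against t gives a line whose slope is ω1 and whose y-intercept is −ln(c1), so a least-squares fit of that line recovers both constants from extinction-probability data.

```python
import math

# Illustrative sketch: fit the line -ln(1 - P0(t)) = -ln(c1) + w1 * t
# by ordinary least squares to recover w1 (slope) and -ln(c1) (intercept).

def wissel_fit(times, p0):
    ys = [-math.log(1.0 - p) for p in p0]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum(t * y for t, y in zip(times, ys)) - n * tbar * ybar) / \
            (sum(t * t for t in times) - n * tbar * tbar)
    intercept = ybar - slope * tbar
    return slope, intercept  # (w1, -ln(c1))

c1, w1 = 0.8, 0.05  # invented constants for the demonstration
ts = [float(t) for t in range(1, 21)]
p0 = [1.0 - c1 * math.exp(-w1 * t) for t in ts]
slope, intercept = wissel_fit(ts, p0)
```

In practice the P0(t) values would come from repeated runs of the stochastic population model, and the sign of the fitted intercept would decide whether the released population reaches the established phase.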

  19. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  20. Redundant and fault-tolerant algorithms for real-time measurement and control systems for weapon equipment.

    Science.gov (United States)

    Li, Dan; Hu, Xiaoguang

    2017-03-01

    Because of the high availability requirements from weapon equipment, an in-depth study has been conducted on the real-time fault-tolerance of the widely applied Compact PCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful execution rate and relatively large waste of time slices in the primary version of the task software, an improved algorithm for real-time fault-tolerant scheduling is proposed based on the Basic Checking available time Elimination idle time (BCE) algorithm, applying a single-neuron self-adaptive proportion sum differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault-tolerance, and the newly developed method can effectively improve the system availability. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
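
The heartbeat-detection scheme for connecting the primary and alternate devices can be sketched as follows; the timing constants and promotion rule here are invented for illustration and are not the paper's design. The alternate device takes over once the primary misses a fixed number of consecutive heartbeat deadlines.

```python
# Minimal failover sketch: a monitor counts missed heartbeat periods and
# promotes the alternate device once the limit is reached.

class HeartbeatMonitor:
    def __init__(self, period_s=0.1, missed_limit=3):
        self.period_s = period_s
        self.missed_limit = missed_limit
        self.last_beat = 0.0
        self.active = "primary"

    def beat(self, now_s):
        """Primary device reports that it is alive."""
        self.last_beat = now_s

    def check(self, now_s):
        """Alternate device checks how many periods have elapsed silently."""
        missed = int((now_s - self.last_beat) / self.period_s)
        if missed >= self.missed_limit and self.active == "primary":
            self.active = "alternate"  # fail over
        return self.active

mon = HeartbeatMonitor()
mon.beat(0.0)
ok = mon.check(0.15)          # at most one deadline missed: still primary
failed_over = mon.check(0.5)  # several periods silent: alternate takes over
```

A real CPCI implementation would run the check on a hardware timer and also hand over bus mastership and task state, which this sketch omits.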

  1. Evaluation of a newly developed media-supported 4-step approach for basic life support training

    Directory of Open Access Journals (Sweden)

    Sopka Saša

    2012-07-01

    Full Text Available Abstract Objective The quality of external chest compressions (ECC) is of primary importance within basic life support (BLS). Recent guidelines delineate the so-called “4-step approach” for teaching practical skills within resuscitation training guided by a certified instructor. The objective of this study was to evaluate whether a “media-supported 4-step approach” for BLS training leads to equal practical performance compared to the standard 4-step approach. Materials and methods After baseline testing, 220 laypersons were either trained using the widely accepted method for resuscitation training (4-step approach) or using a newly created “media-supported 4-step approach”, both of equal duration. In this approach, steps 1 and 2 were delivered via a standardised self-produced podcast, which included all of the information regarding the BLS algorithm and resuscitation skills. Participants were tested on manikins in the same mock cardiac arrest single-rescuer scenario prior to intervention, after one week and after six months with respect to ECC performance, and participants were surveyed about the approach. Results Participants (age 23 ± 11, 69% female) reached comparable practical ECC performance in both groups, with no statistical difference. Even after six months, there was no difference detected in the quality of the initial assessment algorithm or in delay concerning initiation of CPR. Overall, at least 99% of the intervention group (n = 99; mean 1.5 ± 0.8; 6-point Likert scale: 1 = completely agree, 6 = completely disagree) agreed that the video provided an adequate introduction to BLS skills. Conclusions The “media-supported 4-step approach” leads to comparable practical ECC performance compared to standard teaching, even with respect to retention of skills. Therefore, this approach could be useful in special educational settings where, for example, instructors’ resources are sparse or large-group sessions

  2. Application of a rule extraction algorithm family based on the Re-RX algorithm to financial credit risk assessment from a Pareto optimal perspective

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    2016-01-01

    Full Text Available Historically, the assessment of credit risk has proved to be both highly important and extremely difficult. Currently, financial institutions rely on the use of computer-generated credit scores for risk assessment. However, automated risk evaluations are currently imperfect, and the loss of vast amounts of capital could be prevented by improving the performance of computerized credit assessments. A number of approaches have been developed for the computation of credit scores over the last several decades, but these methods have been considered too complex and lacking in interpretability, and have therefore not been widely adopted. Therefore, in this study, we provide the first comprehensive comparison of results regarding the assessment of credit risk obtained using 10 runs of 10-fold cross validation of the Re-RX algorithm family, including the Re-RX algorithm, the Re-RX algorithm with both discrete and continuous attributes (Continuous Re-RX), the Re-RX algorithm with J48graft, the Re-RX algorithm with a trained neural network (Sampling Re-RX), NeuroLinear, NeuroLinear+GRG, and three unique rule extraction techniques involving support vector machines and Minerva, on four real-life, two-class mixed credit-risk datasets. We also discuss the roles of various newly extended types of the Re-RX algorithm and high-performance classifiers from a Pareto optimal perspective. Our findings suggest that Continuous Re-RX, Re-RX with J48graft, and Sampling Re-RX comprise a powerful management tool that allows the creation of advanced, accurate, concise and interpretable decision support systems for credit risk evaluation. In addition, from a Pareto optimal perspective, the Re-RX algorithm family has superior features in relation to the comprehensibility of extracted rules and the potential for credit scoring with Big Data.
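    The Pareto optimal perspective mentioned above treats each classifier as a point trading accuracy against comprehensibility (e.g. rule-set size). As a hedged illustration only (the classifier names and numbers below are made up, not results from the study), a non-dominated set can be extracted like this:

```python
# Hypothetical sketch: selecting Pareto-optimal rule-extraction classifiers,
# trading predictive accuracy against rule count (a proxy for
# comprehensibility). All names and numbers are illustrative placeholders.

def pareto_front(candidates):
    """Return names of candidates not dominated by any other.

    A candidate dominates another if it is at least as accurate AND uses
    no more rules, and is strictly better on at least one criterion.
    """
    front = []
    for name, acc, n_rules in candidates:
        dominated = any(
            (a >= acc and r <= n_rules) and (a > acc or r < n_rules)
            for _, a, r in candidates
        )
        if not dominated:
            front.append(name)
    return front

classifiers = [
    ("Re-RX",            0.85, 12),
    ("Continuous Re-RX", 0.88,  9),
    ("Sampling Re-RX",   0.87, 14),
    ("black-box SVM",    0.89, 500),
]
front = pareto_front(classifiers)   # accurate-but-opaque and accurate-and-concise both survive
```

    In this toy example the plain Re-RX and Sampling Re-RX points are dominated, while the concise Continuous Re-RX and the highly accurate but opaque SVM both remain on the front, mirroring the accuracy/comprehensibility trade-off the abstract describes.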

  3. The development of controller and navigation algorithm for underwater wall crawler

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyung Suck; Kim, Kyung Hoon; Kim, Min Young [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-01-01

    In this project, the control system of an underwater robotic vehicle (URV) for underwater wall inspection in the nuclear reactor pool or related facilities has been developed. The following four sub-projects were studied: (1) development of the controller and motor driver for the URV; (2) development of the control algorithm for tracking control of the URV; (3) development of the localization system; (4) underwater experiments with the developed system. First, the dynamic characteristics of the thruster with the DC servo-motor were analyzed experimentally. Second, the controller board using the INTEL 80C196 was designed and constructed, and the software for communication and motor control was developed. Third, the PWM motor driver was developed. Fourth, the localization system using the laser scanner and inclinometer was developed and tested in the pool. Fifth, the dynamics of the URV were studied and proper control algorithms for the URV were proposed. Lastly, validation of the integrated system was performed experimentally. (author). 27 refs., 51 figs., 8 tabs.
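    The tracking-control task above is not detailed in the record; as a generic, hedged sketch (not the project's actual controller), a discrete PD loop for one thruster-driven axis looks like this, with all gains and plant parameters invented for demonstration:

```python
# Illustrative sketch only: discrete PD tracking control of a single axis of
# an underwater vehicle with simple linear-drag dynamics. Gains, mass, and
# drag coefficient are made-up values, not those of the URV in the record.

def simulate_pd_tracking(x_ref, steps=2000, dt=0.01,
                         kp=40.0, kd=12.0, mass=20.0, drag=5.0):
    x, v = 0.0, 0.0                 # position [m], velocity [m/s]
    prev_err = x_ref - x
    for _ in range(steps):
        err = x_ref - x
        # PD law: proportional term plus finite-difference derivative term
        thrust = kp * err + kd * (err - prev_err) / dt
        prev_err = err
        a = (thrust - drag * v) / mass   # 1-D dynamics with linear drag
        v += a * dt
        x += v * dt
    return x

final = simulate_pd_tracking(1.0)   # settles near the 1 m reference
```

    With these toy values the closed loop is underdamped but stable, so after 20 simulated seconds the position sits essentially on the reference.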

  4. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    Science.gov (United States)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling and analysis. The algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications, such as software interfacing capabilities, probability distributions, grey-level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration and ground scene classification. A description of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under a user's control, is provided.

  5. The effectiveness of newly developed written asthma action plan in improvement of asthma outcome in children.

    Science.gov (United States)

    Lakupoch, Kingthong; Manuyakorn, Wiparat; Preutthipan, Aroonwan; Kamalaporn, Harutai

    2017-09-17

    Providing asthma education about controller medication use and appropriate management of asthma exacerbations are the keys to improving the disease outcome. Many asthma guidelines recommend that physicians provide a written asthma action plan (WAAP) to all of their asthmatic patients. However, the benefit of a WAAP is unclear. Thus, we have created a new WAAP which is simplified in Thai and more user friendly. The aim was to determine the effectiveness of the newly developed asthma action plan in the management of children with asthma. Asthmatic children who met the inclusion criteria all received the WAAP and were followed up for 6 months, with measurement of outcome variables such as asthma exacerbations requiring emergency room visits, unscheduled OPD visits, admissions and school absences, for comparison with the 6 months before receiving the WAAP. The analyzed outcomes of forty-nine children showed significantly reduced emergency room visits (P-value 0.005), unscheduled OPD visits (P-value 0.046), admission days (P-value 0.026) and school absence days (P-value 0.022). Being in the well-controlled group or the mild-severity group was not a factor contributing to decreased emergency room visits, but step-up therapy may have been a co-factor in the decreased ER visits. The results of this study suggest that the provision of the newly developed WAAP is useful for improving self-care of asthma patients and reducing asthma exacerbations.

  6. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    Science.gov (United States)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience ran only on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which does have cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  7. Development of hybrid genetic-algorithm-based neural networks using regression trees for modeling air quality inside a public transportation bus.

    Science.gov (United States)

    Kadiyala, Akhil; Kaur, Devinder; Kumar, Ashok

    2013-02-01

    The hybrid genetic-algorithm-based neural network IAQ models outperformed the traditional ANN methods of the back-propagation and the radial basis function networks. The novelty of this research is the development of a novel approach to modeling vehicular indoor air quality by integration of the advanced methods of genetic algorithms, regression trees, and the analysis of variance for the monitored in-vehicle gaseous and particulate matter contaminants, and comparing the results obtained from the developed approach with the conventional artificial intelligence techniques of back-propagation networks and radial basis function networks. This study validated the newly developed approach using holdout and threefold cross-validation methods. These results are of great interest to scientists, researchers, and the public in understanding the various aspects of modeling an indoor microenvironment. This methodology can easily be extended to other fields of study.
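    The holdout and threefold cross-validation scheme mentioned above is generic. A minimal sketch of the threefold mechanics, using a toy mean predictor on synthetic numbers rather than the study's neural networks, looks like this:

```python
# Sketch of threefold cross-validation mechanics on toy data. The "model"
# is a trivial mean predictor, purely to illustrate the train/test rotation;
# it stands in for the GA-based neural network models of the study.

def kfold_indices(n, k=3):
    """Yield (train, test) index lists for k interleaved folds."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def cross_validate(y, k=3):
    """Mean absolute error of a mean predictor under k-fold CV."""
    errors = []
    for train, test in kfold_indices(len(y), k):
        pred = sum(y[j] for j in train) / len(train)   # "train" the model
        errors.extend(abs(y[j] - pred) for j in test)  # score held-out fold
    return sum(errors) / len(errors)

mae = cross_validate([2.0, 4.0, 6.0, 8.0, 10.0, 12.0], k=3)
```

    Each observation is scored exactly once while held out, which is what makes the error estimate less optimistic than in-sample fit.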

  8. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    Science.gov (United States)

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, John L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for cloud-cover assessment (CCA) for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10⁹ pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.
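    The accuracy figures above come from pixel-by-pixel agreement between an algorithm's cloud mask and a manually generated truth mask. A sketch of that scoring, on tiny toy masks rather than Landsat scenes:

```python
# Sketch of pixel-level cloud-mask scoring: the fraction of pixels on which
# the algorithm's cloudy/clear call matches a manually generated truth mask.
# The 2x3 masks here are toy arrays, not Landsat data.

def mask_accuracy(predicted, truth):
    """Fraction of pixels where predicted labels match the truth mask."""
    flat_p = [p for row in predicted for p in row]
    flat_t = [t for row in truth for t in row]
    matches = sum(p == t for p, t in zip(flat_p, flat_t))
    return matches / len(flat_t)

truth     = [[1, 1, 0], [0, 0, 1]]   # 1 = cloudy, 0 = clear
predicted = [[1, 0, 0], [0, 0, 1]]
acc = mask_accuracy(predicted, truth)   # 5 of 6 pixels agree
```

    Applied over the 3.95 × 10⁹ test pixels, this same fraction yields the 79.9%, 88.5%, and 89.7% scores reported for ACCA and its two successors.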

  9. Development of transmission dose estimation algorithm for in vivo dosimetry in high energy radiation treatment

    International Nuclear Information System (INIS)

    Yun, Hyong Geun; Shin, Kyo Chul; Hun, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2004-01-01

    In vivo dosimetry is very important for quality assurance purposes in high energy radiation treatment. Measurement of transmission dose is a new method of in vivo dosimetry which is noninvasive and easy for daily performance. This study is to develop a tumor dose estimation algorithm using measured transmission dose for open radiation fields. For basic beam data, transmission dose was measured for various field sizes (FS) of square radiation fields, phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. Source-to-chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer-type ion chamber. By regression analysis of the measured basic beam data, a transmission dose estimation algorithm was developed. Accuracy of the algorithm was tested with a flat solid phantom of various thicknesses in various settings of rectangular fields and various PCDs. In our developed algorithm, transmission dose was expressed as a quadratic function of log(A/P) (where A/P is the area-perimeter ratio), and the coefficients of the quadratic function were expressed as third-order functions of PCD. Our developed algorithm could estimate the radiation dose with errors within ±0.5% for open square fields, and with errors within ±1.0% for open elongated radiation fields. The developed algorithm could accurately estimate the transmission dose in open radiation fields for various treatment settings of high energy radiation treatment. (author)
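    The functional form described above (a quadratic in log(A/P) whose coefficients are third-order polynomials in PCD) can be sketched directly. All fitted constants below are placeholders; the real values would come from the regression on the measured beam data:

```python
import math

# Hedged sketch of the paper's functional form: transmission dose as a
# quadratic in log(A/P), with each quadratic coefficient a cubic polynomial
# in the phantom-chamber distance (PCD). Coefficient values are invented.

def coeff(pcd, c):
    """Third-order polynomial in PCD, c = (c0, c1, c2, c3)."""
    return c[0] + c[1] * pcd + c[2] * pcd**2 + c[3] * pcd**3

def transmission_dose(area, perimeter, pcd, c0, c1, c2):
    """Quadratic in log(A/P) with PCD-dependent coefficients."""
    x = math.log(area / perimeter)   # A/P: area-perimeter ratio
    return coeff(pcd, c0) + coeff(pcd, c1) * x + coeff(pcd, c2) * x**2

# Placeholder coefficient sets (purely illustrative, not fitted values):
c0 = (1.0, 0.01, 0.0, 0.0)
c1 = (0.1, 0.0, 0.0, 0.0)
c2 = (0.01, 0.0, 0.0, 0.0)
d = transmission_dose(area=100.0, perimeter=40.0, pcd=30.0,
                      c0=c0, c1=c1, c2=c2)
```

    In practice each of the twelve polynomial constants would be fitted by regression against the measured FS/Tp/PCD grid, separately per beam energy.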

  10. jClustering, an open framework for the development of 4D clustering algorithms.

    Directory of Open Access Journals (Sweden)

    José María Mateos-Pérez

    Full Text Available We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code to facilitate the process of comparing algorithms and provide interested third parties with the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. This tool has been coded in Java and is presented as an ImageJ plugin in order to take advantage of all the functionalities offered by this imaging analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary.

  11. Performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2014-01-01

    The performance of the ATLAS Inner Detector (ID) Trigger algorithms being developed for running on the ATLAS High Level Trigger (HLT) processor farm during Run 2 of the LHC is presented. During the 2013-14 LHC long shutdown, modifications are being carried out to the LHC accelerator to increase both the beam energy and luminosity. These modifications will pose significant challenges for the ID Trigger algorithms, both in terms of execution time and physics performance. To meet these challenges, the ATLAS HLT software is being restructured to run as a more flexible single-stage HLT, instead of two separate stages (Level2 and Event Filter) as in Run 1. This will reduce the overall data volume that needs to be requested by the HLT system, since data will no longer need to be requested for each of the two separate processing stages. Development of the ID Trigger algorithms for Run 2, currently expected to be ready for detector commissioning near the end of 2014, is progressing well and the current efforts towards op...

  12. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance, and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with review of the patient history to identify predictors for heparin resistance. The definition of heparin resistance contained in the algorithm is an inadequate activated clotting time after a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm against current standard clinical practice.

  13. Development and testing of incident detection algorithms. Vol. 2, research methodology and detailed results.

    Science.gov (United States)

    1976-04-01

    The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on times series and pattern recognition techniques. Attention was given to the effects o...

  14. Application of Hybrid Optimization Algorithm in the Synthesis of Linear Antenna Array

    Directory of Open Access Journals (Sweden)

    Ezgi Deniz Ülker

    2014-01-01

    Full Text Available The use of hybrid algorithms for solving real-world optimization problems has become popular, since their solution quality can be made better than that of the algorithms that form them by combining their desirable features. The newly proposed hybrid method, called the Hybrid Differential, Particle, and Harmony (HDPH) algorithm, is different from other hybrid forms since it uses all features of the merged algorithms in order to perform efficiently on a wide variety of problems. In the proposed algorithm the control parameters are randomized, which makes its implementation easy and provides a fast response. This paper describes the application of the HDPH algorithm to linear antenna array synthesis. The results obtained with the HDPH algorithm are compared with the three merged optimization techniques that are used in HDPH. The comparison shows that the performance of the proposed algorithm is comparatively better in both solution quality and robustness. The proposed hybrid algorithm HDPH can be an efficient candidate for real-time optimization problems since it yields reliable performance at all times when executed.
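    The randomized-control-parameter idea can be illustrated on one of HDPH's ingredients. The toy below is not the HDPH algorithm itself, only a differential-evolution step whose scale factor F and crossover rate CR are redrawn every generation, minimizing a standard test function:

```python
import random

# Toy illustration of randomized control parameters (NOT the HDPH algorithm):
# a differential-evolution loop that redraws F and CR each generation while
# minimizing the sphere function. Ranges and sizes are arbitrary choices.

def sphere(x):
    return sum(v * v for v in x)

def de_randomized(dim=5, pop_size=20, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        F = rng.uniform(0.4, 0.9)    # randomized mutation scale factor
        CR = rng.uniform(0.1, 0.9)   # randomized crossover rate
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if rng.random() < CR
                     else pop[i][d] for d in range(dim)]
            if sphere(trial) <= sphere(pop[i]):   # greedy selection
                pop[i] = trial
    return sphere(min(pop, key=sphere))

best_val = de_randomized()   # approaches 0 on this convex test problem
```

    Redrawing the parameters removes the need to tune F and CR per problem, which is the ease-of-implementation benefit the abstract claims for HDPH.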

  15. A prediction algorithm for first onset of major depression in the general population: development and validation.

    Science.gov (United States)

    Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden

    2014-05-01

    Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with approximately 3-year follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). Major depression occurring since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions was assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule, diagnostic and statistical manual for mental disorders IV. A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ²=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners, and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
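    The C statistic reported above is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A sketch of that computation on toy risks (not the study's data):

```python
# Sketch of the discrimination metric used above: the C statistic (AUC),
# computed by brute-force concordance counting over case/non-case pairs.
# The risk scores and outcomes below are toy values, not study data.

def c_statistic(risks, outcomes):
    """Concordance of predicted risks with binary outcomes (ties count 0.5)."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for rc in cases:
        for rn in controls:
            if rc > rn:
                concordant += 1.0
            elif rc == rn:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))

risks    = [0.9, 0.8, 0.4, 0.3, 0.2]
outcomes = [1,   0,   1,   0,   0]
auc = c_statistic(risks, outcomes)   # 5 of 6 case/control pairs concordant
```

    A value of 0.5 would mean no discrimination; the study's 0.75 means three in four random case/non-case pairs are ranked correctly.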

  16. Conventional and improved cytotoxicity test methods of newly developed biodegradable magnesium alloys

    Science.gov (United States)

    Han, Hyung-Seop; Kim, Hee-Kyoung; Kim, Yu-Chan; Seok, Hyun-Kwang; Kim, Young-Yul

    2015-11-01

    The unique biodegradable property of magnesium has spawned countless studies to develop ideal biodegradable orthopedic implant materials in the last decade. However, due to the rapid pH change and the extensive amount of hydrogen gas generated during biocorrosion, it is extremely difficult to determine the accurate cytotoxicity of newly developed magnesium alloys using existing methods. Herein, we report a new method to accurately determine the cytotoxicity of magnesium alloys with varying corrosion rates while taking the in-vivo condition into consideration. For the conventional method, extracted quantities of each metal ion were determined using ICP-MS, and the results showed that cytotoxicity due to the pH change caused by corrosion affected cell viability, rather than the intrinsic cytotoxicity of the magnesium alloy. In a physiological environment, pH is regulated and adjusted within the normal range (~7.4) by homeostasis. Two new methods using pH-buffered extracts were proposed and performed to show that the environmental buffering of pH, dilution of the extract, and regulation of the eluate surface area must be taken into consideration for accurate cytotoxicity measurement of biodegradable magnesium alloys.

  17. Implementation and analysis of list mode algorithm using tubes of response on a dedicated brain and breast PET

    Science.gov (United States)

    Moliner, L.; Correcher, C.; González, A. J.; Conde, P.; Hernández, L.; Orero, A.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.

    2013-02-01

    In this work we present an innovative algorithm for the reconstruction of PET images based on the List-Mode (LM) technique, which improves their spatial resolution compared to results obtained with current MLEM algorithms. This study is part of a large project aiming to improve diagnosis in early Alzheimer's disease stages by means of a newly developed hybrid PET-MR insert. At present, Alzheimer's is the most relevant neurodegenerative disease, and the best way to apply an effective treatment is early diagnosis. The PET device will consist of several monolithic LYSO crystals coupled to SiPM detectors. Monolithic crystals can reduce scanner costs with the advantage of enabling implementation of very small virtual pixels in their geometry. This is especially useful for LM reconstruction algorithms, since they do not need a pre-calculated system matrix. We have developed an LM algorithm which has been initially tested with a large-aperture (186 mm) breast PET system. Such an algorithm, instead of using the common lines of response, incorporates a novel calculation of tubes of response. The new approach improves the volumetric spatial resolution by about a factor of 2 at the border of the field of view when compared with the traditionally used MLEM algorithm. Moreover, it has also been shown to decrease the image noise, thus increasing the image quality.
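    Because list-mode reconstruction iterates directly over recorded events, no precomputed system matrix is needed: each event carries its own sparse row of voxel weights (for a tube of response, the voxels the tube intersects). A heavily simplified sketch of the LM-MLEM update, on a two-voxel toy "scanner" rather than the dedicated PET system:

```python
# Minimal list-mode MLEM sketch: each event i carries a sparse row of
# weights (voxel index, weight) coupling it to the image. For a tube of
# response these weights would span every voxel the tube intersects.
# The 2-voxel, 3-event setup is a toy, not the system in the record.

def lm_mlem(events, n_vox, sens, iters=50):
    lam = [1.0] * n_vox                     # flat initial image
    for _ in range(iters):
        back = [0.0] * n_vox
        for weights in events:              # one sparse row per event
            proj = sum(w * lam[j] for j, w in weights)   # forward project
            for j, w in weights:
                back[j] += w / proj                      # back project ratio
        lam = [lam[j] * back[j] / sens[j] for j in range(n_vox)]
    return lam

# Three events: two seen only by voxel 0, one only by voxel 1.
events = [[(0, 1.0)], [(0, 1.0)], [(1, 1.0)]]
sens = [1.0, 1.0]                           # per-voxel sensitivities
image = lm_mlem(events, n_vox=2, sens=sens)
```

    With these disjoint toy events the update converges immediately to the event counts per voxel, [2, 1]; with overlapping tubes the multiplicative update redistributes counts iteratively.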

  18. Implementation and analysis of list mode algorithm using tubes of response on a dedicated brain and breast PET

    International Nuclear Information System (INIS)

    Moliner, L.; Correcher, C.; González, A.J.; Conde, P.; Hernández, L.; Orero, A.; Rodríguez-Álvarez, M.J.; Sánchez, F.; Soriano, A.; Vidal, L.F.; Benlloch, J.M.

    2013-01-01

    In this work we present an innovative algorithm for the reconstruction of PET images based on the List-Mode (LM) technique, which improves their spatial resolution compared to results obtained with current MLEM algorithms. This study is part of a large project aiming to improve diagnosis in early Alzheimer's disease stages by means of a newly developed hybrid PET-MR insert. At present, Alzheimer's is the most relevant neurodegenerative disease, and the best way to apply an effective treatment is early diagnosis. The PET device will consist of several monolithic LYSO crystals coupled to SiPM detectors. Monolithic crystals can reduce scanner costs with the advantage of enabling implementation of very small virtual pixels in their geometry. This is especially useful for LM reconstruction algorithms, since they do not need a pre-calculated system matrix. We have developed an LM algorithm which has been initially tested with a large-aperture (186 mm) breast PET system. Such an algorithm, instead of using the common lines of response, incorporates a novel calculation of tubes of response. The new approach improves the volumetric spatial resolution by about a factor of 2 at the border of the field of view when compared with the traditionally used MLEM algorithm. Moreover, it has also been shown to decrease the image noise, thus increasing the image quality.

  19. Leadership development in the age of the algorithm.

    Science.gov (United States)

    Buckingham, Marcus

    2012-06-01

    By now we expect personalized content--it's routinely served up by online retailers and news services, for example. But the typical leadership development program still takes a formulaic, one-size-fits-all approach. And it rarely happens that an excellent technique can be effectively transferred from one leader to all others. Someone trying to adopt a practice from a leader with a different style usually seems stilted and off--a Franken-leader. Breakthrough work at Hilton Hotels and other organizations shows how companies can use an algorithmic model to deliver training tips uniquely suited to each individual's style. It's a five-step process: First, a company must choose a tool with which to identify each person's leadership type. Second, it should assess its best leaders, and third, it should interview them about their techniques. Fourth, it should use its algorithmic model to feed tips drawn from those techniques to developing leaders of the same type. And fifth, it should make the system dynamically intelligent, with user reactions sharpening the content and targeting of tips. The power of this kind of system--highly customized, based on peer-to-peer sharing, and continually evolving--will soon overturn the generic model of leadership development. And such systems will inevitably break through any one organization, until somewhere in the cloud the best leadership tips from all over are gathered, sorted, and distributed according to which ones suit which people best.

  20. DOOCS environment for FPGA-based cavity control system and control algorithms development

    International Nuclear Information System (INIS)

    Pucyk, P.; Koprek, W.; Kaleta, P.; Szewinski, J.; Pozniak, K.T.; Czarski, T.; Romaniuk, R.S.

    2005-01-01

    The paper describes the concept and realization of the DOOCS control software for the FPGA-based TESLA cavity controller and simulator (SIMCON). It is based on universal software components created for laboratory purposes and used in a MATLAB-based control environment. These modules have recently been adapted to the DOOCS environment to ensure a unified software-to-hardware communication model. The presented solution can also be used as a general platform for control algorithm development. The proposed interfaces between MATLAB and DOOCS modules allow the developed algorithm to be checked in the operating environment before implementation in the FPGA. Two systems are presented as examples. (orig.)

  1. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-site Study

    OpenAIRE

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verb...

  2. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
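    The scoring elements listed above (nuclide weighting factors, penalties for incorrect calls) can be sketched abstractly. The weights and penalty value below are illustrative assumptions, not the DNDO's actual criteria:

```python
# Hedged sketch in the spirit of weighted nuclide-identification scoring:
# correct identifications earn a nuclide weighting factor, false positives
# are penalized. Weights and penalty are invented, not the AIP values.

def score_identification(truth, reported, weights, fp_penalty=0.5):
    """Score one spectrum: weighted hits minus false-positive penalty,
    normalized by the total weight available, floored at zero."""
    truth, reported = set(truth), set(reported)
    earned = sum(weights.get(n, 1.0) for n in truth & reported)
    possible = sum(weights.get(n, 1.0) for n in truth)
    penalty = fp_penalty * len(reported - truth)
    return max(0.0, (earned - penalty) / possible)

# Threat nuclides weighted higher than common industrial/medical sources:
weights = {"HEU": 3.0, "Cs-137": 1.0, "Co-60": 1.0}
s = score_identification(truth={"HEU", "Cs-137"},
                         reported={"HEU", "Co-60"},
                         weights=weights)
```

    Here the algorithm finds the heavily weighted HEU, misses Cs-137, and falsely reports Co-60, landing at 0.625; averaging such scores over a test set gives a single comparable performance number per algorithm.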

  3. Evaluation of Shielding Performance for Newly Developed Composite Materials

    Science.gov (United States)

    Evans, Beren Richard

    This work details an investigation into the contributing factors behind the success of newly developed composite neutron shield materials. Monte Carlo simulation methods were utilized to assess the neutron shielding capabilities and secondary radiation production characteristics of aluminum boron carbide, tungsten boron carbide, bismuth borosilicate glass, and Metathene within various neutron energy spectra. Shielding performance and secondary radiation data suggested that tungsten boron carbide was the most effective composite material. An analysis of the macroscopic cross-section contributions from constituent materials and interaction mechanisms was then performed in an attempt to determine the reasons for tungsten boron carbide's success over the other investigated materials. This analysis determined that there was a positive correlation between a non-elastic interaction contribution to a material's total cross-section and shielding performance within the thermal and epi-thermal energy regimes. This finding was assumed to result from the boron-10 absorption reaction. The analysis also determined that within the faster energy regions, materials featuring higher non-elastic interaction contributions were comparable to those exhibiting primarily elastic scattering via low-Z elements. This allowed the conclusion that composite shield success within higher-energy neutron spectra does not necessitate the use of elastic scattering via low-Z elements. These findings suggest that the inclusion of materials featuring high thermal absorption properties is more critical to composite neutron shield performance than the presence of constituent materials more inclined to maximize elastic scattering energy loss.
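    The constituent analysis above rests on the standard relation Σ = N·σ, with number density N = ρ·N_A·w/M for a constituent of weight fraction w and molar mass M. A sketch of summing those contributions, using placeholder microscopic cross-sections rather than evaluated nuclear data:

```python
# Sketch of constituent macroscopic cross-section bookkeeping:
# Sigma = sum_i N_i * sigma_i, with N_i = rho * N_A * w_i / M_i.
# The microscopic cross-sections below are illustrative placeholders,
# not evaluated nuclear data for the materials in the record.

N_A = 6.022e23          # Avogadro's number [atoms/mol]

def macroscopic_sigma(density, constituents):
    """Total macroscopic cross-section [1/cm].

    density:      bulk density [g/cm^3]
    constituents: list of (weight_fraction, molar_mass [g/mol],
                           microscopic cross-section [barn])
    """
    total = 0.0
    for w, molar_mass, sigma_barn in constituents:
        number_density = density * N_A * w / molar_mass   # atoms/cm^3
        total += number_density * sigma_barn * 1e-24      # barn -> cm^2
    return total

# Toy boron-carbide-like mixture with placeholder thermal values:
sigma = macroscopic_sigma(
    density=2.5,                          # g/cm^3 (illustrative)
    constituents=[(0.8, 10.8, 760.0),     # boron: absorption-dominated
                  (0.2, 12.0, 4.7)],      # carbon: scattering-dominated
)
```

    Splitting the sum by constituent and by reaction type (absorption vs. elastic scattering) is exactly the decomposition the analysis uses to attribute thermal-regime performance to boron-10 absorption.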

  4. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States)

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Input to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real-world problems from solid elasticity, plate bending, and shells.
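    Prolongation by smoothed aggregation can be sketched concretely on a small symmetric positive definite model problem: group nodes into aggregates, inject the zero-energy mode (here the constant vector) into each aggregate to form a tentative prolongation, then apply one damped-Jacobi sweep, P = (I - ω·D⁻¹·A)·P_tent. The sizes and the 1-D Laplacian below are toy choices, not the paper's elasticity problems:

```python
import numpy as np

# Toy sketch of prolongation by smoothed aggregation on a 1-D Laplacian.
# Aggregates are fixed-size index blocks; the tentative prolongation is
# piecewise constant (the constant zero-energy mode per aggregate), then
# smoothed by one damped-Jacobi sweep: P = (I - omega * D^-1 A) P_tent.

def smoothed_aggregation_P(A, agg_size=3, omega=2.0 / 3.0):
    n = A.shape[0]
    n_agg = (n + agg_size - 1) // agg_size
    P_tent = np.zeros((n, n_agg))
    for i in range(n):
        P_tent[i, i // agg_size] = 1.0      # inject constant mode
    Dinv = np.diag(1.0 / np.diag(A))
    return (np.eye(n) - omega * Dinv @ A) @ P_tent

# SPD model problem: 1-D Poisson matrix, 9 nodes -> 3 aggregates.
n = 9
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = smoothed_aggregation_P(A)
A_coarse = P.T @ A @ P                      # Galerkin coarse-level operator
```

    The Galerkin product keeps the coarse operator symmetric positive definite, so the construction can be applied recursively to generate the automatic coarse-level hierarchy the abstract describes.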

  5. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme.

  6. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  7. Design requirements and development of an airborne descent path definition algorithm for time navigation

    Science.gov (United States)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

    The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS program as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering functions are described.

  8. Investigation of Five Algorithms for Selection of the Optimal Region of Interest in Smartphone Photoplethysmography

    Directory of Open Access Journals (Sweden)

    Rong-Chao Peng

    2016-01-01

    Smartphone photoplethysmography is a newly developed technique that can detect several physiological parameters from the photoplethysmographic signal obtained by the built-in camera of a smartphone. It is simple, low-cost, and easy to use, with great potential for use in remote medicine and home healthcare services. However, the determination of the optimal region of interest (ROI), which is an important issue for extracting photoplethysmographic signals from the camera video, has not been well studied. We herein proposed five algorithms for ROI selection: variance (VAR), spectral energy ratio (SER), template matching (TM), temporal difference (TD), and gradient (GRAD). Their performances were evaluated in a 50-subject experiment comparing the heart rates measured from the electrocardiogram with those obtained from the smartphone using the five algorithms. The results revealed that the TM and TD algorithms outperformed the other three, as they had smaller standard errors of estimate (<1.5 bpm) and narrower limits of agreement (<3 bpm). The TD algorithm was slightly better than the TM algorithm and more suitable for smartphone applications. These results may help improve the accuracy of physiological parameter measurement and make smartphone photoplethysmography more practical.
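
    One plausible reading of the temporal-difference criterion can be sketched as follows (synthetic video, hypothetical grid size and signal parameters; an illustration of the metric, not the authors' implementation):

```python
import numpy as np

def temporal_difference_scores(frames, grid=(4, 4)):
    """Score candidate ROIs by the mean absolute frame-to-frame change of
    their average intensity; a strongly pulsatile region scores highest."""
    n_frames, h, w = frames.shape
    gy, gx = grid
    bh, bw = h // gy, w // gx
    scores = np.zeros(grid)
    for i in range(gy):
        for j in range(gx):
            block = frames[:, i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            trace = block.mean(axis=(1, 2))      # one mean intensity per frame
            scores[i, j] = np.abs(np.diff(trace)).mean()
    return scores

# Synthetic 30 fps clip in which only the top-left block pulses at ~1.5 Hz
rng = np.random.default_rng(0)
t = np.arange(100)
frames = rng.normal(128.0, 0.1, size=(100, 32, 32))
frames[:, :8, :8] += 10.0 * np.sin(2 * np.pi * 1.5 * t / 30.0)[:, None, None]
scores = temporal_difference_scores(frames)
best = tuple(int(v) for v in np.unravel_index(np.argmax(scores), scores.shape))
print(best)  # (0, 0)
```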

  9. Combustion, performance and emissions characteristics of a newly ...

    Indian Academy of Sciences (India)

    of a newly developed CRDI single cylinder diesel engine. AVINASH ... In case of unit injector and unit pump systems, fuel injection pressure depends on ... nozzle hole diameters were effective in reducing smoke and PM emissions. However ...

  10. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
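
    The basic concepts above (selection, crossover, mutation) can be sketched with a generic bit-string GA on the OneMax problem; this is a textbook illustration, not the software tool described in the record:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # Binary tournament selection: the fitter of two random individuals wins
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:        # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                       # per-bit mutation
                for i in range(n_bits):
                    if rng.random() < mutation_rate:
                        c[i] ^= 1
                nxt.append(c)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # OneMax: fitness = number of 1-bits
print(sum(best))
```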

  11. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-Site Study

    Science.gov (United States)

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L.; Yerys, Benjamin E.; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised…

  12. The use of newly developed real-time PCR for the rapid identification of bacteria in culture-negative osteomyelitis.

    Science.gov (United States)

    Kobayashi, Naomi; Bauer, Thomas W; Sakai, Hiroshige; Togawa, Daisuke; Lieberman, Isador H; Fujishiro, Takaaki; Procop, Gary W

    2006-12-01

    We report a case of culture-negative osteomyelitis in which our newly developed real-time polymerase chain reaction (PCR) assay could differentiate Staphylococcus aureus from Staphylococcus epidermidis. This is the first report to describe the application of this novel assay to an orthopedic clinical sample. The assay may be useful in other culture-negative clinical cases, in combination with a broad-spectrum assay, as a rapid microorganism identification method.

  13. Development of real-time plasma analysis and control algorithms for the TCV tokamak using SIMULINK

    International Nuclear Information System (INIS)

    Felici, F.; Le, H.B.; Paley, J.I.; Duval, B.P.; Coda, S.; Moret, J.-M.; Bortolon, A.; Federspiel, L.; Goodman, T.P.; Hommen, G.; Karpushov, A.; Piras, F.; Pitzschke, A.; Romero, J.; Sevillano, G.; Sauter, O.; Vijvers, W.

    2014-01-01

    Highlights: • A new digital control system for the TCV tokamak has been commissioned. • The system is entirely programmable by SIMULINK, allowing rapid algorithm development. • Different control system nodes can run different algorithms at varying sampling times. • The previous control system functions have been emulated and improved. • New capabilities include MHD control, profile control, equilibrium reconstruction. - Abstract: One of the key features of the new digital plasma control system installed on the TCV tokamak is the possibility to rapidly design, test and deploy real-time algorithms. With this flexibility the new control system has been used for a large number of new experiments which exploit TCV's powerful actuators consisting of 16 individually controllable poloidal field coils and 7 real-time steerable electron cyclotron (EC) launchers. The system has been used for various applications, ranging from event-based real-time MHD control to real-time current diffusion simulations. These advances have propelled real-time control to one of the cornerstones of the TCV experimental program. Use of the SIMULINK graphical programming language to directly program the control system has greatly facilitated algorithm development and allowed a multitude of different algorithms to be deployed in a short time. This paper will give an overview of the developed algorithms and their application in physics experiments

  14. Perceptions of the clinical competence of newly registered nurses in the North West province

    Directory of Open Access Journals (Sweden)

    M.R. Moeti

    2004-09-01

    The clinical competence of newly registered nurses relating to the care of individual clients depends on their ability to correlate theoretical knowledge learned in the classroom with practice and on the development of clinical skills. Its foundation lies in the ability to identify and solve problems that emanate from critical thinking, analytical reasoning and reflective practice. It is clear that the quality of clinical exposure plays a leading role in the development of nursing professionals. Nursing skills alone cannot ensure quality care of clients without the application of theory. Facilitation of this theory to practice therefore remains an essential component of nursing education. This study was aimed at identifying areas of incompetence of newly registered nurses (1998-2001) in the clinical area by determining the newly registered nurses' and professional nurses' perceptions of the competence of the newly registered nurses. A quantitative, non-experimental, descriptive survey was used to collect the data regarding the clinical competence of newly registered nurses (1998-2001).

  15. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chao, R.M.; Ko, S.H.; Lin, I.H. [Department of Systems and Naval Mechatronics Engineering, National Cheng Kung University, Tainan, Taiwan 701 (China); Pai, F.S. [Department of Electronic Engineering, National University of Tainan (China); Chang, C.C. [Department of Environment and Energy, National University of Tainan (China)

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second-order polynomial formula, which converges faster than existing MPPT algorithms. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
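
    The quadratic step itself is simple to sketch: fit a parabola through the three most recent (duty cycle, power) samples and move to its vertex. The PV curve below is a made-up stand-in for a real panel characteristic:

```python
import numpy as np

def quadratic_mppt_step(duties, powers):
    """Fit p(d) = a*d**2 + b*d + c through three (duty, power) samples
    and return the vertex -b/(2a) as the next duty-cycle estimate."""
    a, b, c = np.polyfit(duties, powers, 2)
    if a >= 0:                               # no interior maximum: fall back
        return duties[int(np.argmax(powers))]
    return -b / (2 * a)

# Toy PV power curve with its maximum power point at d = 0.55 (illustrative)
pv_power = lambda d: 100.0 - 400.0 * (d - 0.55) ** 2

duties = [0.30, 0.50, 0.70]
for _ in range(2):
    powers = [pv_power(d) for d in duties]
    d_next = quadratic_mppt_step(duties, powers)
    duties = duties[1:] + [d_next]           # keep the three most recent duties
print(round(duties[-1], 3))  # 0.55
```

    For an exactly quadratic curve the vertex is found in a single step; on a real panel the fit is only local, which is why the three-sample window keeps sliding.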

  16. Class hierarchical test case generation algorithm based on expanded EMDPN model

    Institute of Scientific and Technical Information of China (English)

    LI Jun-yi; GONG Hong-fang; HU Ji-ping; ZOU Bei-ji; SUN Jia-guang

    2006-01-01

    An extended model of event- and message-driven Petri networks (EMDPN), based on the characteristics of class interaction for message passing between two objects, was developed. Using the EMDPN interaction graph, a class hierarchical test-case generation algorithm with cooperated paths (copaths) was proposed, which can be used to solve problems arising from the class inheritance mechanism in object-oriented software testing, such as oracles, message transfer errors, and unreachable statements. Finally, testing sufficiency was analyzed with the ordered sequence testing criterion (OSC). The results indicate that the test cases produced by the newly proposed automatic copath generation algorithm satisfy the synchronization message sequence testing criteria; the proposed algorithm therefore achieves a good coverage rate.

  17. Measurement equivalence of the newly developed Quality of Life in Childhood Epilepsy Questionnaire (QOLCE-55).

    Science.gov (United States)

    Ferro, Mark A; Goodwin, Shane W; Sabaz, Mark; Speechley, Kathy N

    2016-03-01

    The aim of this study was to examine measurement equivalence of the newly developed Quality of Life in Childhood Epilepsy Questionnaire (QOLCE-55) across age, sex, and time in a representative sample of children with newly diagnosed epilepsy. Data come from 373 children enrolled in the Health-related Quality of Life in Children with Epilepsy Study (HERQULES), a multisite prospective cohort study. Measurement equivalence was examined using a multiple-group confirmatory factor analysis framework, whereby increasingly stringent parameter constraints are imposed on the model. Comparison groups were stratified based on age (4-7 years vs. 8-12 years), sex (male vs. female), and time (measurement of health-related quality of life at diagnosis vs. 24 months later). The QOLCE-55 demonstrated measurement equivalence at the level of strict invariance for each model tested (for the age model, χ²(3,123) = 4,097.3). Findings suggest that items of the QOLCE-55 are perceived similarly among groups stratified by age, sex, and time and provide further evidence supporting the validity of the scale in children with epilepsy. Health professionals and researchers should be confident that group comparisons made using the QOLCE-55 are unbiased and that any group differences detected are meaningful; that is, not related to differences in the interpretation of items by informants. Future research replicating these findings is encouraged. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.

  18. Practicing on Newly Dead

    Directory of Open Access Journals (Sweden)

    Jewel Abraham

    2015-07-01

    A newly dead cadaver simulation is practiced on the physical remains of the dead before the onset of rigor mortis. This technique has potential benefits for providing real-life in-situ experience for novice providers in health care practices. Evolving ethical views in health care bring into question some of the ethical aspects associated with newly dead cadaver simulation in terms of justification for practice, autonomy, consent, and the need for disclosure. A clear statement of policies and procedures on newly dead cadaver simulation has yet to be implemented. Although there are benefits and disadvantages to in-situ cadaver simulation, such practices should not be carried out in secrecy, as there is no compelling evidence that suggests such training is imperative. Secrecy in these practices is a violation of the honor code of nursing ethics. As health care providers, practitioners are obliged to be ethically honest and trustworthy to their patients. The author explores the ethical aspects of using newly dead cadaver simulation in training novice nursing providers to gain competency in various lifesaving skills, which otherwise cannot be practiced on a living individual. The author explores multiple views on cadaver simulation in relation to ethical theories and practices such as consent and disclosure to family.

  19. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  20. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  1. Newly qualified teachers´ possibilities to get foothold in a lifelong career course

    DEFF Research Database (Denmark)

    Krøjgaard, Frede; Frederiksen, Lisbeth Angela Lunde

    Keywords: induction program, newly qualified teachers, NQT, retention, professional development. In contrast to many other countries in Europe, Denmark does not have any kind of national teacher induction program (TIP) or general support for newly qualified teachers whatsoever...

  2. Prosthetic joint infection development of an evidence-based diagnostic algorithm.

    Science.gov (United States)

    Mühlhofer, Heinrich M L; Pohlig, Florian; Kanz, Karl-Georg; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; Kelch, Sarah; Harrasser, Norbert; von Eisenhart-Rothe, Rüdiger; Schauwecker, Johannes

    2017-03-09

    Increasing rates of prosthetic joint infection (PJI) have presented challenges for general practitioners, orthopedic surgeons and the health care system in recent years. The diagnosis of PJI is complex; multiple diagnostic tools are used in the attempt to correctly diagnose PJI. Evidence-based algorithms can help to identify PJI using standardized diagnostic steps. We reviewed relevant publications between 1990 and 2015 using a systematic literature search in MEDLINE and PUBMED. The selected search results were then classified into levels of evidence. The keywords were prosthetic joint infection, biofilm, diagnosis, sonication, antibiotic treatment, implant-associated infection, Staph. aureus, rifampicin, implant retention, PCR, MALDI-TOF, serology, synovial fluid, C-reactive protein level, total hip arthroplasty (THA), total knee arthroplasty (TKA) and combinations of these terms. From an initial 768 publications, 156 publications were stringently reviewed. Publications with class I-III recommendations (EAST) were considered. We developed an algorithm for the diagnostic approach to display the complex diagnosis of PJI in a clear and logically structured process according to ISO 5807. The evidence-based standardized algorithm combines modern clinical requirements and evidence-based treatment principles. The algorithm provides a detailed transparent standard operating procedure (SOP) for diagnosing PJI. Thus, consistently high, examiner-independent process quality is assured to meet the demands of modern quality management in PJI diagnosis.

  3. The practical skills of newly qualified nurses.

    Science.gov (United States)

    Danbjørg, Dorthe Boe; Birkelund, Regner

    2011-02-01

    This paper reports findings from a study of newly qualified nurses, examining which subjects the nurses regarded as most important for meeting the requirements of clinical practice, and how they experienced their potential for developing practical and moral skills after the decrease in practical training. A qualitative approach guided the research process and the analysis of the data. The data were collected by participant observation and qualitative interviews with four nurses as informants. The conclusions made in this study are based on the statements and the observations of the newly qualified nurses. Our findings are discussed in relation to the Aristotelian concept and other relevant literature. The main message is that the newly qualified nurses did not feel equipped when they finished their training. This could be interpreted as a direct consequence of the decrease in practical training. Our study also underlines that the way nursing theory is perceived and taught is problematic. The interviews revealed that the nurses think that nursing theories should be applied directly in practice. This misunderstanding is probably also applicable to the teachers of the theories. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Comparison of newly developed anti-bone morphogenetic protein 4 llama-derived antibodies with commercially available BMP4 inhibitors.

    Science.gov (United States)

    Calpe, Silvia; Correia, Ana C P; Sancho-Serra, Maria Del Carmen; Krishnadath, Kausilia K

    2016-01-01

    Due to improved understanding of the role of bone morphogenetic protein 4 (BMP4) in an increasing number of diseases, the development of selective inhibitors of BMP4 is an attractive therapeutic option. The currently available BMP4 inhibitors are not suitable as therapeutics because of their low specificity and low effectiveness. Here, we compared newly generated anti-BMP4 llama-derived antibodies (VHHs) with 3 different types of commercially available BMP4 inhibitors: natural antagonists, small molecule BMPR inhibitors and conventional anti-BMP4 monoclonal antibodies. We found that the anti-BMP4 VHHs were as effective as the natural antagonists or small molecule inhibitors, but had higher specificity. We also showed that commercial anti-BMP4 antibodies were inferior in terms of both specificity and effectiveness. These findings might result from the fact that the VHHs C4C4 and C8C8 target a small region within the BMPR1 epitope of BMP4, whereas the commercial antibodies target other areas of the BMP4 molecule. Our results show that the newly developed anti-BMP4 VHHs are promising antibodies with better specificity and effectiveness for the inhibition of BMP4, making them an attractive tool for research and for therapeutic applications.

  5. A deep learning method for lincRNA detection using auto-encoder algorithm.

    Science.gov (United States)

    Yu, Ning; Yu, Zeng; Pan, Yi

    2017-12-06

    RNA sequencing (RNA-seq) enables scientists to develop novel data-driven methods for discovering more unidentified lincRNAs. Meanwhile, knowledge-based technologies are experiencing a potential revolution ignited by new deep learning methods. By scanning newly found RNA-seq data sets, scientists have found that: (1) the expression of lincRNAs appears to be regulated, that is, relevance exists along the DNA sequences; (2) lincRNAs contain some conserved patterns/motifs tethered together by non-conserved regions. These two observations give the reasoning for adopting knowledge-based deep learning methods in lincRNA detection. Similar to coding-region transcription, non-coding regions are split at transcriptional sites; however, regulatory RNAs rather than messenger RNAs are generated. That is, the transcribed RNAs participate in the biological process as regulatory units instead of generating proteins. Identifying these transcriptional regions within non-coding regions is the first step towards lincRNA recognition. The auto-encoder method achieves 100% and 92.4% prediction accuracy on transcription sites over the putative data sets. The experimental results also show the excellent performance of the predictive deep neural network on the lincRNA data sets compared with the support vector machine and a traditional neural network. In addition, it is validated through the newly discovered lincRNA data set, and one unreported transcription site is found by feeding the whole annotated sequences through the deep learning machine, which indicates that the deep learning method has an extensive ability for lincRNA prediction. The transcriptional sequences of lincRNAs are collected from the annotated human DNA genome data. Subsequently, a two-layer deep neural network is developed for lincRNA detection, which adopts the auto-encoder algorithm and utilizes different encoding schemes to obtain the best performance over intergenic DNA sequence data. Driven by those newly
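
    The encoding-plus-auto-encoder idea can be sketched with a tiny NumPy model: one-hot encode short DNA strings and train a single-hidden-layer tied-weight auto-encoder to reconstruct them. The sizes, learning rate, and random data are illustrative assumptions; the paper's actual network and data differ:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    # Flatten a DNA string into a one-hot vector, 4 units per base
    v = np.zeros(4 * len(seq))
    for i, ch in enumerate(seq):
        v[4 * i + BASES.index(ch)] = 1.0
    return v

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(BASES), 10)) for _ in range(200)]
X = np.stack([one_hot(s) for s in seqs])          # 200 samples x 40 features

n_in, n_hid = X.shape[1], 16                      # compressed code layer
W = rng.normal(0.0, 0.1, (n_hid, n_in))           # tied encoder/decoder weights
b1, b2 = np.zeros(n_hid), np.zeros(n_in)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.1, []
for epoch in range(300):
    H = sigmoid(X @ W.T + b1)                     # encode
    Y = H @ W + b2                                # decode (linear readout)
    err = Y - X
    losses.append((err ** 2).mean())
    dH = (err @ W.T) * H * (1.0 - H)              # backprop through the sigmoid
    W -= lr * (dH.T @ X + H.T @ err) / len(X)     # encoder + decoder gradients
    b1 -= lr * dH.mean(axis=0)
    b2 -= lr * err.mean(axis=0)
print(losses[0], losses[-1])  # reconstruction loss decreases
```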

  6. Development of algorithm for continuous generation of a computer game in terms of usability and optimization of developed code in computer science

    Directory of Open Access Journals (Sweden)

    Tibor Skala

    2018-03-01

    As both hardware and software have become increasingly available and are constantly developed, they contribute globally to improvements in every field of technology and the arts. Digital tools for the creation and processing of graphical content are highly developed and designed to shorten the time required for content creation, which is, in this case, animation. Since contemporary animation has experienced a surge in various visual styles and visualization methods, programming is built into everything that is currently in use. There is no doubt that a variety of algorithms and software are the brain and moving force behind any idea created for a specific purpose and applicability in society. Art and technology combined make a direct and oriented medium for publishing and marketing in every industry, including those not necessarily closely related to ones that rely heavily on the visual aspect of the work. Additionally, the quality and consistency of an algorithm will depend on its proper integration into the system powered by that algorithm, as well as on the way the algorithm is designed. The development of an endless algorithm and its effective use are shown through the use of the computer game. In order to present the effect of various parameters, in the final phase of the computer game's development the endless algorithm was tested with a varying number of key input parameters (time achieved, score reached, pace of the game).

  7. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    International Nuclear Information System (INIS)

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs
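
    The closed-loop cycling idea behind the first STPI algorithm can be sketched end to end: excite a toy first-order-plus-dead-time plant with a relay, read the ultimate period and gain off the resulting limit cycle, and plug them into the Ziegler-Nichols PI rules. The plant and all numbers are illustrative assumptions, not the Bristol-Babcock loop:

```python
import numpy as np

# Toy plant (assumed): time constant 1 s, unit gain, 0.2 s dead time, dt = 0.05 s
dt = 0.05
a = np.exp(-dt)                             # discrete pole
b = 1.0 - a                                 # discrete gain (unit DC gain)
delay_steps = 4

def plant_step(y, u_delayed):
    return a * y + b * u_delayed

# 1) Closed-loop cycling: relay feedback produces a sustained limit cycle
d = 1.0
y = -0.01
u_hist = [0.0] * delay_steps
ys = []
for k in range(2000):
    u_hist.append(d if y < 0 else -d)       # ideal relay about setpoint 0
    y = plant_step(y, u_hist[k])            # input delayed by delay_steps
    ys.append(y)

tail = np.array(ys[1000:])                  # settled limit cycle only
crossings = np.where(np.diff(np.signbit(tail)))[0]
Tu = 2.0 * np.mean(np.diff(crossings)) * dt # ultimate period
A = (tail.max() - tail.min()) / 2.0         # cycle amplitude
Ku = 4.0 * d / (np.pi * A)                  # describing-function ultimate gain

# 2) Ziegler-Nichols PI gains from the cycling experiment
Kp, Ti = 0.45 * Ku, Tu / 1.2

# 3) Closed-loop PI run against a unit setpoint
r, y, integ = 1.0, 0.0, 0.0
u_hist = [0.0] * delay_steps
for k in range(2000):
    e = r - y
    integ += e * dt
    u_hist.append(Kp * e + (Kp / Ti) * integ)
    y = plant_step(y, u_hist[k])
print(round(Tu, 2), round(y, 3))  # y settles near the setpoint 1.0
```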

  8. Development of a framework for identification of political environmental issues faced by multinational hotel chains in newly industrialized countries in Asia

    OpenAIRE

    Kim, Chol Yong

    1992-01-01

    The primary objective of this study was to develop a framework for identification of political environmental issues faced by multinational hotel chains in newly industrialized countries in Asia. To accomplish the objective, key factors having an impact upon these hotel chains were identified using the Delphi technique.

  9. A Developed ESPRIT Algorithm for DOA Estimation

    Science.gov (United States)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for direction-of-arrival estimation (DOAE) of a target has been developed, aiming to increase the accuracy of the estimation process and decrease its computational cost. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified using Monte Carlo simulation; DOAE accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the standard ESPRIT methods, enhancing estimator performance.
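
    For reference, the standard (single-resolution) ESPRIT that TS-ESPRIT builds on can be sketched for a uniform linear array; the two-source scenario and noise level below are assumptions for illustration:

```python
import numpy as np

def esprit_doa(X, n_sources, d_over_lambda=0.5):
    """Standard ESPRIT for a uniform linear array.
    X: snapshot matrix of shape (n_sensors, n_snapshots)."""
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    _, eigvecs = np.linalg.eigh(R)                # eigenvalues in ascending order
    Es = eigvecs[:, -n_sources:]                  # signal subspace
    # Rotational invariance between the two overlapping subarrays
    Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]
    phases = np.angle(np.linalg.eigvals(Psi))
    return np.sort(np.degrees(np.arcsin(phases / (2 * np.pi * d_over_lambda))))

# Two uncorrelated narrowband sources at -20 and +30 degrees, 8-element ULA
rng = np.random.default_rng(0)
m, n_snap = 8, 400
angles = np.radians([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(angles)))
S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
N = 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
est = esprit_doa(A @ S + N, 2)
print(np.round(est, 1))  # close to [-20., 30.]
```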

  10. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at several dose levels for a constant undersampling factor. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
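
    The total-variation penalty at the heart of the CS formulation can be illustrated in one dimension with a smoothed-TV gradient descent (a minimal stand-in for the reconstruction objective; the CT problem itself, with its projection operator, is not modeled):

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.02, n_iter=2000, eps=1e-3):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    a smoothed 1D total-variation objective."""
    x = y.copy()
    for _ in range(n_iter):
        dxs = np.diff(x)
        g = dxs / np.sqrt(dxs**2 + eps)            # derivative of smoothed |dx|
        grad_tv = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x -= step * ((x - y) + lam * grad_tv)
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 40)             # piecewise-constant signal
noisy = clean + rng.normal(0.0, 0.3, clean.size)
den = tv_denoise_1d(noisy)
print(np.abs(noisy - clean).mean(), np.abs(den - clean).mean())
```

    The TV term rewards piecewise-constant solutions, which is exactly why the paper observes "patchy" images when the data are too noisy to pin down the true edges.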

  11. Development and comparisons of wind retrieval algorithms for small unmanned aerial systems

    Science.gov (United States)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2012-12-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.
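
    One common GPS-based retrieval, possibly among those compared, fits a circle to ground-velocity samples collected while the aircraft varies heading at constant airspeed: the circle's center is the wind vector and its radius the airspeed. A least-squares (Kåsa) sketch with synthetic data:

```python
import numpy as np

def wind_from_groundspeed(vx, vy):
    """Kasa least-squares circle fit: (vx - wx)^2 + (vy - wy)^2 = airspeed^2,
    linearized as 2*wx*vx + 2*wy*vy + c = vx^2 + vy^2."""
    M = np.column_stack([2 * vx, 2 * vy, np.ones_like(vx)])
    rhs = vx**2 + vy**2
    (wx, wy, c), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return wx, wy, np.sqrt(c + wx**2 + wy**2)

# Synthetic orbit: 15 m/s airspeed, (3, -2) m/s wind, noisy GPS velocities
rng = np.random.default_rng(0)
heading = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
vx = 3.0 + 15.0 * np.cos(heading) + rng.normal(0.0, 0.3, 60)
vy = -2.0 + 15.0 * np.sin(heading) + rng.normal(0.0, 0.3, 60)
wx, wy, airspeed = wind_from_groundspeed(vx, vy)
print(round(wx, 1), round(wy, 1), round(airspeed, 1))  # near 3.0, -2.0, 15.0
```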

  12. Development of pattern recognition algorithms for the central drift chamber of the Belle II detector

    Energy Technology Data Exchange (ETDEWEB)

    Trusov, Viktor

    2016-11-04

    In this thesis, the development of one of the pattern recognition algorithms for the Belle II experiment, based on conformal and Legendre transformations, is presented. In order to optimize the performance of the algorithm (CPU time and efficiency), specialized processing steps have been introduced. To demonstrate the achieved results, Monte Carlo-based efficiency measurements of the tracking algorithms in the Central Drift Chamber (CDC) have been performed.

  13. Genetic Algorithms for Development of New Financial Products

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2007-06-01

    Full Text Available New Product Development (NPD) is recognized as a fundamental activity that has a relevant impact on the performance of companies. Despite the relevance of the financial market, there is a lack of work on new financial product development. The aim of this research is to propose the use of Genetic Algorithms (GA) as an alternative procedure for evaluating the most favorable combination of variables for the product launch. The paper focuses on: (i) determining the essential variables of the financial product studied (an investment fund); (ii) determining how to evaluate the success of a new investment fund launch; and (iii) how GA can be applied to the financial product development problem. The proposed framework was tested using 4 years of real data from the Brazilian financial market, and the results suggest that the proposed methodology is innovative and useful for designing complex financial products with many attributes.
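
    The GA mechanics referred to above (selection, crossover, mutation over a coded configuration) can be sketched minimally as follows; the bit-string encoding and the toy fitness function are invented for the example and do not reflect the paper's investment-fund model.

```python
import random

# Minimal genetic-algorithm sketch: search for the attribute combination
# (a bit string) that maximises a toy "launch success" score.
random.seed(42)
N_BITS, POP, GENS = 12, 30, 60

def fitness(bits):                      # toy score: count of favourable attributes
    return sum(bits)

def crossover(a, b):                    # single-point crossover
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):            # flip each bit with small probability
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]            # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))
```

In a realistic setting the fitness function would score a candidate fund configuration against historical market data, which is the computationally expensive part the GA is meant to explore efficiently.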

  14. Evaluation of a New Backtrack Free Path Planning Algorithm for Manipulators

    Science.gov (United States)

    Islam, Md. Nazrul; Tamura, Shinsuke; Murata, Tomonari; Yanase, Tatsuro

    This paper evaluates a newly proposed backtrack-free path planning algorithm (BFA) for manipulators. BFA is an exact algorithm, i.e. it is resolution complete. Unlike existing resolution-complete algorithms, its computation time and memory space are proportional to the number of arms. Therefore, paths can be calculated within a practical and predetermined time even for manipulators with many arms, and it becomes possible to plan complicated motions of multi-arm manipulators in fully automated environments. The performance of BFA is evaluated in 2-dimensional environments while changing the number of arms and obstacle placements. Its performance under locus and attitude constraints is also evaluated. Evaluation results show that the computation volume of the algorithm is almost the same as the theoretical one, i.e. it increases linearly with the number of arms even in complicated environments. Moreover, BFA achieves constant performance independent of the environment.

  15. Development Modules for Specification of Requirements for a System of Verification of Parallel Algorithms

    Directory of Open Access Journals (Sweden)

    Vasiliy Yu. Meltsov

    2012-05-01

    Full Text Available This paper presents the results of the development of one of the modules of the system for verification of parallel algorithms, which is used to verify the inference engine. This module is designed to build the specification requirements whose feasibility on the algorithm must be proven (tested).

  16. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
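
    Two of the fundamental algorithms the book covers can be sketched compactly (in Python here for brevity; the book itself develops its implementations in C++).

```python
# The Euclidean algorithm and the sieve of Eratosthenes, two of the
# fundamental algorithms mentioned above.

def gcd(a, b):
    # Euclidean algorithm: gcd(a, b) = gcd(b, a mod b)
    while b:
        a, b = b, a % b
    return a

def sieve(n):
    # Sieve of Eratosthenes: return all primes <= n
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return [p for p, ok in enumerate(is_prime) if ok]

print(gcd(252, 105))   # → 21
print(sieve(30))       # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```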

  17. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.
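
    For readers unfamiliar with the baseline that HBP is compared against, a plain backpropagation loop on a small mirror-symmetry task looks as follows. This is a simplified analogue of the 4×4 problem: the network size, learning rate and initialization are illustrative choices, not those of the paper.

```python
import numpy as np

# Plain backpropagation (the "original BP" baseline) on a toy task:
# detect whether a 4-bit pattern is mirror-symmetric about its centre.
rng = np.random.default_rng(1)
X = np.array([[int(b) for b in f"{i:04b}"] for i in range(16)], float)
y = ((X[:, 0] == X[:, 3]) & (X[:, 1] == X[:, 2])).astype(float).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (4, 4)); b1 = np.zeros(4)   # 4 hidden sigmoid units
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)   # 1 sigmoid output
sig = lambda z: 1 / (1 + np.exp(-z))

losses = []
lr = 0.1
for _ in range(3000):
    h = sig(X @ W1 + b1)                     # forward pass
    out = sig(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    d_out = (out - y) * out * (1 - out)      # backward pass (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(losses[0], losses[-1])   # training error should drop
```

HBP differs from this unconstrained update rule in that weight changes must respect the restrictions of the neuron-MOS hardware, which is what makes its equivalent (and better-generalizing) performance notable.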

  18. Implementation techniques and acceleration of DBPF reconstruction algorithm based on GPGPU for helical cone beam CT

    International Nuclear Information System (INIS)

    Shen Le; Xing Yuxiang

    2010-01-01

    The derivative back-projection filtered algorithm for a helical cone-beam CT is a newly developed exact reconstruction method. Due to its large computational complexity, the reconstruction is rather slow for practical use. A general purpose graphics processing unit (GPGPU) is a SIMD parallel hardware architecture with powerful floating-point operation capacity. In this paper, we propose a new method for PI-line choice and sampling grid, and a paralleled PI-line reconstruction algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA). Numerical simulation studies are carried out to validate our method. Compared with a conventional CPU implementation, the CUDA-accelerated method provides images of the same quality with a speedup factor of 318. Optimization strategies for the GPU acceleration are presented. Finally, the influence of the parameters of the PI-line samples on the reconstruction speed and image quality is discussed. (authors)

  19. Newly graduated nurses' use of knowledge sources

    DEFF Research Database (Denmark)

    Voldbjerg, Siri Lygum; Grønkjaer, Mette; Sørensen, Erik Elgaard

    2016-01-01

    AIM: To advance evidence on newly graduated nurses' use of knowledge sources. BACKGROUND: Clinical decisions need to be evidence-based and understanding the knowledge sources that newly graduated nurses use will inform both education and practice. Qualitative studies on newly graduated nurses' use ... underscoring progression in knowledge use and perception of competence and confidence among newly graduated nurses. CONCLUSION: The transition phase, feeling of confidence and ability to use critical thinking and reflection, has a great impact on knowledge sources incorporated in clinical decisions. ... The synthesis accentuates that for use of newly graduated nurses' qualifications and skills in evidence-based practice, clinical practice needs to provide a supportive environment which nurtures critical thinking and questions and articulates use of multiple knowledge sources ...

  20. Being a team leader: newly registered nurses relate their experiences.

    Science.gov (United States)

    Ekström, Louise; Idvall, Ewa

    2015-01-01

    This paper presents a study that explores how newly qualified registered nurses experience their leadership role in the ward-based nursing care team. A nurse's clinical leadership affects the quality of care provided. Newly qualified nurses experience difficulties during the transition period from student to qualified professional and find it challenging to lead nursing care. Twelve nurses were interviewed and the transcribed texts analysed using qualitative content analysis to assess both manifest and latent content. Five themes were identified: feeling stranded; forming well-functioning teams; learning to lead; having the courage, strength, and desire to lead; and ensuring appropriate care. The findings indicate that many factors limit nurses' leadership but some circumstances are supportive. The leadership prerequisites for newly registered nurses need to improve, emphasizing different ways to create a supportive atmosphere that promotes professional development and job satisfaction. To increase nurse retention and promote quality of care, nurse managers need to clarify expectations and guide and support newly qualified nurses in a planned way. © 2013 John Wiley & Sons Ltd.

  1. Development of Data Processing Algorithms for the Upgraded LHCb Vertex Locator

    CERN Document Server

    AUTHOR|(CDS)2101352

    The LHCb detector will see a major upgrade during LHC Long Shutdown II, which is planned for 2019/20. The silicon Vertex Locator subdetector will be upgraded for operation under the new run conditions. The detector will be read out using a data acquisition board based on an FPGA. The work presented in this thesis is concerned with the development of the data processing algorithms to be used in this data acquisition board. In particular, work in three different areas of the FPGA is covered: the data processing block, the low level interface, and the post router block. The algorithms produced have been simulated and tested, and shown to provide the required performance. Errors in the initial implementation of the Gigabit Wireline Transmitter serialized data in the low level interface were discovered and corrected. The data scrambling algorithm and the post router block have been incorporated in the front end readout chip.

  2. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    Science.gov (United States)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is a part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5 - 1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash and biomass burning plumes. KNMI is assigned with the development of the ALH algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission that is to be launched in June or July 2016 (tentative date). The retrieval algorithm designed so far for the aerosol height product is based on the absorption characteristics of the oxygen-A band (759-770 nm). New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm compared to 0.4 nm for TROPOMI) and hourly observation from the geostationary orbit. The algorithm uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer with fixed width to represent the aerosol vertical distribution. The state vector includes amongst other elements the height of this layer and its aerosol optical

  3. Development of a meta-algorithm for guiding primary care encounters for patients with multimorbidity using evidence-based and case-based guideline development methodology.

    Science.gov (United States)

    Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin

    2017-06-22

    The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps: 1. Designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop. 2. Based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines was prepared. The recommendations were prioritised according to the clinical and psychosocial characteristics of the case vignettes. 3. Case vignettes along with the respective guideline recommendations were validated and specifically commented on by an external panel of practicing general practitioners (GPs). 4. Guideline recommendations and experts' opinions were summarised as case-specific management recommendations (N-of-one guidelines). 5. Healthcare preferences of patients with multimorbidity were elicited from a systematic literature review and supplemented with information from qualitative interviews. 6. All N-of-one guidelines were analysed using pattern recognition to identify common decision nodes and care elements. These elements were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter of a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and hereby offer guidance to the practitioner. Contrary to simple, symptom-oriented algorithms, the meta-algorithm

  4. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from the US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm could be utilized to map SST in both deep offshore and particularly shallow nearshore waters at the high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed reflectance values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variations in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters where important coastal resources are located and existing algorithms are either not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful to coastal resource management.

  5. Derivation of Land Surface Temperature for Landsat-8 TIRS Using a Split Window Algorithm

    Directory of Open Access Journals (Sweden)

    Offer Rozenstein

    2014-03-01

    Full Text Available Land surface temperature (LST) is one of the most important variables measured by satellite remote sensing. Public domain data are available from the newly operational Landsat-8 Thermal Infrared Sensor (TIRS). This paper presents an adjustment of the split window algorithm (SWA) for TIRS that uses atmospheric transmittance and land surface emissivity (LSE) as inputs. Various alternatives for estimating these SWA inputs are reviewed, and a sensitivity analysis of the SWA to misestimating the input parameters is performed. The accuracy of the current development was assessed using simulated Modtran data. The root mean square error (RMSE) of the simulated LST was calculated as 0.93 °C. This SWA development is leading to progress in the determination of LST by Landsat-8 TIRS.
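
    A generic split-window estimate has the following shape; the coefficients below are illustrative placeholders, not the calibrated TIRS values derived in the paper.

```python
# Hedged sketch of a generic split-window LST estimate:
#   LST ≈ T31 + c1*d + c2*d^2 + c0 + (c3 + c4*w)*(1 - e) + (c5 + c6*w)*de
# with d = T31 - T32 (brightness-temperature difference, K), mean emissivity e,
# emissivity difference de and column water vapour w. The coefficient values
# are invented placeholders for illustration only.

def split_window_lst(t31, t32, e, de, w,
                     c=(0.268, 1.378, 0.183, 54.3, -2.46, -129.2, 16.4)):
    c0, c1, c2, c3, c4, c5, c6 = c
    d = t31 - t32
    return (t31 + c1 * d + c2 * d * d + c0
            + (c3 + c4 * w) * (1 - e) + (c5 + c6 * w) * de)

lst = split_window_lst(t31=295.0, t32=293.5, e=0.98, de=0.005, w=2.0)
print(round(lst, 2))   # → 298.25
```

The sensitivity analysis in the paper corresponds to perturbing `e`, `de` and the transmittance-related terms in such a formula and observing the change in LST.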

  6. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.

  7. Development of computational algorithms for quantification of pulmonary structures

    International Nuclear Information System (INIS)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A.; Pina, Diana R.

    2012-01-01

    High-resolution computed tomography (HRCT) has become the imaging diagnostic exam most commonly used for the evaluation of the sequelae of Paracoccidioidomycosis. The subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists of selecting a region of interest (ROI) and, through the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of a healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)
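
    The mask-and-threshold idea can be sketched generically; the HU threshold and the synthetic ROI below are invented for illustration and are not the paper's calibrated values.

```python
import numpy as np

# Illustrative sketch of the density-mask idea: count emphysema-like voxels
# inside a lung ROI and report them as a fraction of the ROI area. The
# threshold (-950 HU) and the synthetic data are stand-ins, not the paper's
# calibrated windows.
hu = np.full((8, 8), -700.0)          # synthetic ROI of "healthy" lung density
hu[2:4, 2:6] = -960.0                 # a patch of abnormally low attenuation

emphysema_mask = hu < -950            # density filter (stand-in threshold)
fraction = emphysema_mask.mean()
print(fraction)                        # → 0.125
```

In the real pipeline this ratio would be computed after ROI selection and morphological clean-up, and compared against the radiologist's subjective grading.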

  8. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    Full Text Available The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Meeting the adequacy requirements of such an algorithm allows: evaluating the appropriateness of investments in fixed assets, and studying the final financial results of an industrial enterprise depending on management decisions in the depreciation policy. It is necessary to note that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures in the lines of structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for further development of the flowchart for subsequent implementation in software. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.

  9. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  10. Development of ODL in a Newly Industrialized Country according to Face-to-Face Contact, ICT, and E-Readiness

    Directory of Open Access Journals (Sweden)

    J. Marinda van Zyl

    2013-03-01

    Full Text Available A large number of unqualified and under-qualified in-service teachers are holding back socio-economic development in South Africa, a newly industrialized country. Open and distance learning (ODL) provides an innovative strategy and praxis for developing and newly industrialized countries to reach their educational and socio-economic objectives through professional development and training. In order to examine factors which affect the success of ODL offered by the North-West University in South Africa, a qualitative and quantitative research approach is used. Factors examined include face-to-face classroom contact, the implementation and use of ICTs, and e-readiness. The relationships between these factors are also discussed. A questionnaire was administered to 87 teacher-students in four Advanced Certificate in Education (ACE) programs to collect quantitative data regarding aspects of their classes and the e-readiness of students. This data was qualitatively elaborated upon by three semi-structured, open-ended focus-group interviews. Besides descriptive statistics, Spearman’s rank-order correlations (r) were determined between variables pertaining to negative feelings towards face-to-face classroom contact, ODL as students’ choice of delivery mode, and students’ positive attitude towards information and communication technology (ICT). Combined quantitative and qualitative findings were used to evaluate the effectiveness of contact classes as well as the e-readiness of students towards the attainment of ODL development Phase D. This phase refers to UNESCO’s description of ICT implementation, integration, and use. Relationships (Spearman’s rank-order correlations) between ODL, as teacher-students’ choice of educational delivery mode, and aspects of their e-readiness suggest that the e-readiness of teacher-students is implicit to their choice of ODL as educational delivery mode for professional development.
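
    Spearman's rank-order correlation, the statistic used in the study, can be computed directly from ranks; the questionnaire scores below are invented illustrative data (not the study's responses), and the minimal rank function skips tie-averaging.

```python
import numpy as np

# Spearman's rank-order correlation as Pearson correlation of the ranks.
# Note: this minimal sketch does not average ranks for ties, so the inputs
# below are chosen tie-free.
def spearman_r(x, y):
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    rx, ry = rank(np.asarray(x)), rank(np.asarray(y))
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical scores for "positive attitude towards ICT" and "ODL as choice"
ict_attitude = [4, 5, 3, 2, 6, 1, 7]
odl_choice   = [5, 6, 2, 3, 4, 1, 7]
print(round(spearman_r(ict_attitude, odl_choice), 3))   # → 0.857
```

With no ties this matches the textbook formula r = 1 − 6Σd²/(n(n² − 1)); here Σd² = 8 and n = 7, giving 6/7.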

  11. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    Science.gov (United States)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models, and as such, play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In many parts of the world, inclusive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories, including incomplete or out-dated information on critical attributes as well as missing or erroneous values for attributes. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images is presented. Basic concepts, terminologies and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, irregularity index, etc. The associated errors due to the interactive nature of the algorithm are quantified and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more

  12. A novel gridding algorithm to create regional trace gas maps from satellite observations

    Science.gov (United States)

    Kuhlmann, G.; Hartl, A.; Cheung, H. M.; Lam, Y. F.; Wenig, M. O.

    2014-02-01

    The recent increase in spatial resolution for satellite instruments has made it feasible to study distributions of trace gas column densities on a regional scale. For this application a new gridding algorithm was developed to map measurements from the instrument's frame of reference (level 2) onto a longitude-latitude grid (level 3). The algorithm is designed for the Ozone Monitoring Instrument (OMI) and can easily be employed for similar instruments - for example, the upcoming TROPOspheric Monitoring Instrument (TROPOMI). Trace gas distributions are reconstructed by a continuous parabolic spline surface. The algorithm explicitly considers the spatially varying sensitivity of the sensor resulting from the instrument function. At the swath edge, the inverse problem of computing the spline coefficients is very sensitive to measurement errors and is regularised by a second-order difference matrix. Since this regularisation corresponds to the penalty term for smoothing splines, it similarly attenuates the effect of measurement noise over the entire swath width. Monte Carlo simulations are conducted to study the performance of the algorithm for different distributions of trace gas column densities. The optimal weight of the penalty term is found to be proportional to the measurement uncertainty and the width of the instrument function. A comparison with an established gridding algorithm shows improved performance for small to moderate measurement errors due to better parametrisation of the distribution. The resulting maps are smoother and extreme values are more accurately reconstructed. The performance improvement is further illustrated with high-resolution distributions obtained from a regional chemistry model. The new algorithm is applied to tropospheric NO2 column densities measured by OMI. Examples of regional NO2 maps are shown for densely populated areas in China, Europe and the United States of America. This work demonstrates that the newly developed gridding
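
    The role of the second-order difference penalty can be illustrated on a simpler 1-D analogue. The sketch below solves a penalized least-squares problem (I + λDᵀD)f = y with invented data, rather than the paper's spline-coefficient system; λ plays the role of the penalty weight tied to measurement uncertainty.

```python
import numpy as np

# 1-D analogue of second-order-difference regularisation: fit grid values f
# to noisy measurements y by solving (I + lam * D^T D) f = y, where D is the
# second-order difference matrix. Data and lam are illustrative.
rng = np.random.default_rng(3)
n = 50
truth = np.sin(np.linspace(0, np.pi, n))
y = truth + rng.normal(0, 0.1, n)            # noisy observations

D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]         # second-order differences

lam = 10.0
f = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
print(np.abs(f - truth).mean(), np.abs(y - truth).mean())
```

As in the gridding algorithm, the penalty strongly damps high-frequency noise while leaving the smooth underlying signal (here a half sine) nearly unbiased.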

  13. A novel gridding algorithm to create regional trace gas maps from satellite observations

    Directory of Open Access Journals (Sweden)

    G. Kuhlmann

    2014-02-01

    Full Text Available The recent increase in spatial resolution for satellite instruments has made it feasible to study distributions of trace gas column densities on a regional scale. For this application a new gridding algorithm was developed to map measurements from the instrument's frame of reference (level 2) onto a longitude–latitude grid (level 3). The algorithm is designed for the Ozone Monitoring Instrument (OMI) and can easily be employed for similar instruments – for example, the upcoming TROPOspheric Monitoring Instrument (TROPOMI). Trace gas distributions are reconstructed by a continuous parabolic spline surface. The algorithm explicitly considers the spatially varying sensitivity of the sensor resulting from the instrument function. At the swath edge, the inverse problem of computing the spline coefficients is very sensitive to measurement errors and is regularised by a second-order difference matrix. Since this regularisation corresponds to the penalty term for smoothing splines, it similarly attenuates the effect of measurement noise over the entire swath width. Monte Carlo simulations are conducted to study the performance of the algorithm for different distributions of trace gas column densities. The optimal weight of the penalty term is found to be proportional to the measurement uncertainty and the width of the instrument function. A comparison with an established gridding algorithm shows improved performance for small to moderate measurement errors due to better parametrisation of the distribution. The resulting maps are smoother and extreme values are more accurately reconstructed. The performance improvement is further illustrated with high-resolution distributions obtained from a regional chemistry model. The new algorithm is applied to tropospheric NO2 column densities measured by OMI. Examples of regional NO2 maps are shown for densely populated areas in China, Europe and the United States of America. This work demonstrates that the newly

  14. Nonuniform Sparse Data Clustering Cascade Algorithm Based on Dynamic Cumulative Entropy

    Directory of Open Access Journals (Sweden)

    Ning Li

    2016-01-01

    Full Text Available A small amount of prior knowledge and randomly chosen initial cluster centers have a direct impact on the accuracy of the performance of an iterative clustering algorithm. In this paper we propose a new algorithm to compute initial cluster centers for k-means clustering and the best number of clusters with little prior knowledge, and to optimize the clustering result. It constructs a Euclidean distance control factor based on the aggregation density sparse degree to select the initial cluster centers of nonuniform sparse data and obtains initial data clusters by multidimensional diffusion density distribution. A multiobjective clustering approach based on dynamic cumulative entropy is adopted to optimize the initial data clusters and the best number of clusters. The experimental results show that the newly proposed algorithm performs well in obtaining the initial cluster centers for the k-means algorithm and that it effectively improves the clustering accuracy of nonuniform sparse data by about 5%.
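
    A generic density-informed initialization for k-means can be sketched as follows; this is a simplified stand-in for the paper's control-factor construction, with an invented neighbourhood radius, synthetic data and a plain farthest-point selection rule.

```python
import numpy as np

# Generic k-means with density-informed initial centres (illustrative, not the
# paper's method): pick the densest point first, then points far from already
# chosen centres, then run standard Lloyd iterations.
rng = np.random.default_rng(7)
pts = np.vstack([rng.normal(c, 0.3, (40, 2)) for c in [(0, 0), (4, 0), (2, 3)]])

def init_centres(pts, k):
    d = np.linalg.norm(pts[:, None] - pts[None], axis=2)
    density = (d < 1.0).sum(1)                   # neighbours within radius 1
    centres = [pts[density.argmax()]]            # densest point first
    for _ in range(k - 1):
        dist = np.min([np.linalg.norm(pts - c, axis=1) for c in centres], axis=0)
        centres.append(pts[dist.argmax()])       # farthest from chosen centres
    return np.array(centres)

def kmeans(pts, k, iters=20):
    c = init_centres(pts, k)
    for _ in range(iters):                       # Lloyd iterations
        lab = np.linalg.norm(pts[:, None] - c[None], axis=2).argmin(1)
        c = np.array([pts[lab == j].mean(0) for j in range(k)])
    return c, lab

centres, labels = kmeans(pts, 3)
print(np.sort(centres[:, 0]).round(1))
```

Choosing well-spread, dense initial centres avoids the empty-cluster and local-minimum problems that plague purely random initialization, which is the effect the paper quantifies.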

  15. Solution Algorithm for a New Bi-Level Discrete Network Design Problem

    Directory of Open Access Journals (Sweden)

    Qun Chen

    2013-12-01

    Full Text Available A new discrete network design problem (DNDP) was proposed in this paper, where the variables can be a series of integers rather than just 0-1. The new DNDP can determine both the capacity improvement grades of reconstructed roads and the locations and capacity grades of newly added roads, and thus complies with practical projects where road capacity can only take discrete levels corresponding to the number of lanes. This paper designed a solution algorithm combining branch-and-bound with the Hooke-Jeeves algorithm, where feasible integer solutions are recorded during the search process of the Hooke-Jeeves algorithm, lending itself to determining the upper bound of the upper-level problem. Thresholds for branch cutting and termination were set for earlier convergence. Numerical examples are given to demonstrate the efficiency of the proposed algorithm.
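
The Hooke-Jeeves component can be illustrated with a generic pattern-search sketch (a simplified variant without the full pattern-move acceleration; the quadratic objective and all parameters are invented, not the paper's bi-level problem):

```python
# Simplified Hooke-Jeeves-style direct search: probe +/- step along each
# coordinate, accept improvements, shrink the step when stuck.

def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    base = list(x0)
    for _ in range(max_iter):
        # Exploratory move: try +/- step along each coordinate in turn.
        trial = list(base)
        for i in range(len(trial)):
            for delta in (step, -step):
                cand = list(trial)
                cand[i] += delta
                if f(cand) < f(trial):
                    trial = cand
                    break
        if f(trial) < f(base):
            base = trial            # accept the improved exploratory point
        elif step > tol:
            step *= shrink          # no improvement: refine the mesh
        else:
            break
    return base

# Toy objective with its minimum at (3, -2).
sol = hooke_jeeves(lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

In the paper's scheme the integer points visited by such a search are what get recorded to bound the upper-level problem.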

  16. DEVELOPMENT OF THE ALGORITHM FOR CHOOSING THE OPTIMAL SCENARIO FOR THE DEVELOPMENT OF THE REGION'S ECONOMY

    Directory of Open Access Journals (Sweden)

    I. S. Borisova

    2018-01-01

    Full Text Available Purpose: the article deals with the development of an algorithm for choosing the optimal scenario for the development of the regional economy. Since the "Strategy for socio-economic development of the Lipetsk region for the period until 2020" does not contain scenarios for the development of the region, the algorithm for choosing the optimal scenario for the development of the regional economy is formalized. The scenarios for the development of the economy of the Lipetsk region are calculated according to the indicators of the Program of social and economic development: "Quality of life index", "Average monthly nominal wage", "Level of registered unemployment", "Growth rate of gross regional product", "The share of innovative products in the total volume of goods shipped, works performed and services rendered by industrial organizations", "Total volume of atmospheric pollution per unit GRP" and "Satisfaction of the population with the activity of executive bodies of state power of the region". Based on these calculations, the dynamics of the indicator values under each scenario for the development of the Lipetsk region's economy in 2016–2020 were projected. The discounted financial costs to economic participants of implementing each scenario were estimated. It is shown that the current situation in the economy of the Russian Federation presupposes the choice of a paradigm of innovative development of territories and requires all participants in economic relations at the regional level to concentrate their resources on the creation of new science-intensive products. An assessment of the effects of implementing the proposed scenarios was carried out. It is shown that the most acceptable is the "base" scenario, which assumes a consistent change in the main indicators. The specific economic

  17. Erosion of newly developed CFCs and Be under disruption heat loads

    Science.gov (United States)

    Nakamura, K.; Akiba, M.; Araki, M.; Dairaku, M.; Sato, K.; Suzuki, S.; Yokoyama, K.; Linke, J.; Duwe, R.; Bolt, H.; Roedig, M.

    1996-10-01

    An evaluation of the erosion under disruption heat loads is very important to the lifetime prediction of divertor armour tiles of next-generation fusion devices such as ITER. In particular, erosion data on CFCs (carbon fiber reinforced composites) and beryllium (Be) as the armour materials is urgently required in the ITER design. For CFCs, high heat flux experiments on the newly developed CFCs with high thermal conductivity have been performed under heat fluxes of around 800–2000 MW/m² and pulse lengths of 2–5 ms in the JAERI electron beam irradiation system (JEBIS). As a result, the weight losses of B4C-doped CFCs after heating were almost the same as those of the non-doped CFC up to 5 wt% boron content. For Be, we have carried out our first disruption experiments on S65/C grade Be specimens in the Juelich divertor test facility in hot cells (JUDITH) within the framework of the J-EU collaboration. The heating conditions were heat loads of 1250–5000 MW/m² for 2–8 ms, and the heated area was 3 × 3 mm². As a result, protuberances on the heated area of Be were observed under the lower heat fluxes.

  18. Erosion of newly developed CFCs and Be under disruption heat loads

    International Nuclear Information System (INIS)

    Nakamura, K.; Duwe, R.; Bolt, H.; Roedig, M.

    1996-01-01

    An evaluation of the erosion under disruption heat loads is very important to the lifetime prediction of divertor armour tiles of next-generation fusion devices such as ITER. In particular, erosion data on CFCs (carbon fiber reinforced composites) and beryllium (Be) as the armour materials is urgently required in the ITER design. For CFCs, high heat flux experiments on the newly developed CFCs with high thermal conductivity have been performed under heat fluxes of around 800–2000 MW/m² and pulse lengths of 2–5 ms in the JAERI electron beam irradiation system (JEBIS). As a result, the weight losses of B4C-doped CFCs after heating were almost the same as those of the non-doped CFC up to 5 wt% boron content. For Be, we have carried out our first disruption experiments on S65/C grade Be specimens in the Juelich divertor test facility in hot cells (JUDITH) within the framework of the J-EU collaboration. The heating conditions were heat loads of 1250–5000 MW/m² for 2–8 ms, and the heated area was 3 × 3 mm². As a result, protuberances on the heated area of Be were observed under the lower heat fluxes. (orig.)

  19. Weldability aspects of a newly developed duplex stainless steel LDX 2101

    Energy Technology Data Exchange (ETDEWEB)

    Westin, E.M. [Avesta Research Centre, Avesta (Sweden). Outokumpu Stainless; Brolund, B. [SSAB Tunnplat, Borlaenge (Sweden); Hertzman, S. [Outokumpu Stainless Research Foundation, Stockholm (Sweden)

    2008-06-15

    Duplex grades have, due to balanced chemical compositions of both filler and base metals, a weldability that allows for successful welding using a majority of the technically relevant techniques of today. In order to fulfil the performance requirements several aspects must be considered. In the heat affected zone (HAZ) the austenite reformation must be reasonably high and in the weld metal the microstructure must be stable so that e.g. high productivity welding and multi-pass welding are possible, without precipitation of detrimental phases in previous passes. This paper addresses the effect of alloying elements and thermal cycles on phase balance in the high temperature HAZ (HTHAZ) of the newly developed lean duplex grade LDX 2101 (EN 1.4162, UNS S32101). Bead-on-plate welds and simulated weld structures have been produced and investigated using metallography, scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The results are analysed using the thermodynamic database Thermo-Calc and a model for phase transformation based on a paraequilibrium assumption for ferrite-austenite transformation. In the temperature region outside the paraequilibrium domain, growth controlled by diffusion of substitutional elements was considered. The analysis follows a model by Cahn regarding grain boundary nucleated growth and the Hillert-Engberg model on kinetics of spherical and planar growth. (orig.)

  20. NUMERICAL MODELLING OF THE SOIL BEHAVIOUR BY USING NEWLY DEVELOPED ADVANCED MATERIAL MODEL

    Directory of Open Access Journals (Sweden)

    Jan Veselý

    2017-02-01

    Full Text Available This paper describes the theoretical background, implementation and validation of the newly developed Jardine plastic hardening-softening model (JPHS model), which can be used for numerical modelling of soil behaviour. Although the JPHS model is based on elasto-plastic theory, like the Mohr-Coulomb model that is widely used in geotechnics, it contains several improvements that remove the main disadvantages of the MC model. The presented model is coupled with an isotropically hardening and softening law, a non-linear elastic stress-strain law, a non-associated elasto-plastic material description and a cap yield surface. The model is validated by comparing the numerical results with measured data from laboratory tests and by testing the model on a real tunnel excavation project. A 3D numerical analysis is performed and a comparison between the JPHS, Mohr-Coulomb, Modified Cam-Clay and Hardening small strain models and in-situ monitoring data is made.

  1. Development of Speckle Interferometry Algorithm and System

    International Nuclear Information System (INIS)

    Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.

    2011-01-01

    Electronic speckle pattern interferometry (ESPI) is a whole-field, non-destructive measurement method widely used in industry, for example to detect defects on metal bodies and in integrated circuits in digital electronic components, and in the preservation of priceless artwork. In this research field, the method is widely used to develop algorithms and new laboratory setups for implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor to track changes in the speckle pattern caused by an applied force. In this project, an experimental ESPI setup is proposed to analyze a stainless steel plate using the 632.8 nm (red) laser wavelength.

  2. Genetic Algorithm Design of a 3D Printed Heat Sink

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Tong [ORNL; Ozpineci, Burak [ORNL; Ayers, Curtis William [ORNL

    2016-01-01

    In this paper, a genetic algorithm- (GA-) based approach is discussed for designing heat sinks based on total heat generation and dissipation for a pre-specified size and shape. This approach combines random iteration processes and genetic algorithms with finite element analysis (FEA) to design the optimized heat sink. With an approach that prefers "survival of the fittest", a more powerful heat sink can be designed which can cool power electronics more efficiently. Some of the resulting designs can only be 3D printed due to their complexity. In addition to describing the methodology, this paper also includes comparisons of different cases to evaluate the performance of the newly designed heat sink compared to commercially available heat sinks.
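
A minimal GA skeleton conveys the selection, crossover, and mutation loop such an approach builds on (purely illustrative; the bit-string encoding and one-max fitness stand in for the far more expensive FEA-based heat-sink fitness):

```python
# Bare-bones genetic algorithm: elitist selection, one-point crossover,
# per-bit mutation.  Toy fitness = number of ones in the bit-string.
import random

random.seed(0)

def evolve(fitness, n_bits=16, pop_size=30, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        survivors = scored[: pop_size // 2]      # "survival of the fittest"
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_bits)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)
```

In the heat-sink setting, each "bit-string" would instead encode a candidate geometry, and the fitness call would invoke an FEA thermal simulation.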

  3. Development of a Framework for Genetic Algorithms

    OpenAIRE

    Wååg, Håkan

    2009-01-01

    Genetic algorithms are a method of optimization that can be used to solve many different kinds of problems. This thesis focuses on developing a framework for genetic algorithms that is capable of solving at least the two problems explored in the work. Other problems are supported by allowing user-made extensions. The purpose of this thesis is to explore the possibilities of genetic algorithms for optimization problems and artificial intelligence applications. To test the framework two applications are...

  4. Parsing multiple processes of high temperature impacts on corn/soybean yield using a newly developed CLM-APSIM modeling framework

    Science.gov (United States)

    Peng, B.; Guan, K.; Chen, M.

    2016-12-01

    Future agricultural production faces a grand challenge of higher temperatures under climate change. There are multiple physiological or metabolic processes through which high temperature affects crop yield. Specifically, we consider the following major processes: (1) direct temperature effects on photosynthesis and respiration; (2) speeding up of the growth rate and shortening of the growing season; (3) heat stress during the reproductive stage (flowering and grain-filling); (4) high-temperature-induced increases in atmospheric water demand. In this work, we use a newly developed modeling framework (CLM-APSIM) to simulate corn and soybean growth and explicitly parse the above four processes. By combining the strength of CLM in modeling surface biophysical (e.g., hydrology and energy balance) and biogeochemical (e.g., photosynthesis and carbon-nitrogen interactions) processes with that of APSIM in modeling crop phenology and reproductive stress, the newly developed CLM-APSIM modeling framework enables us to diagnose the impacts of high temperature stress through different processes at various crop phenology stages. Ground measurements from the advanced SoyFACE facility at the University of Illinois are used here to calibrate, validate, and improve the CLM-APSIM modeling framework at the site level. We finally use the CLM-APSIM modeling framework to project crop yield for the whole US Corn Belt under different climate scenarios.

  5. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From this data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
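
The calculation the thesis visualises can be sketched as a standard power iteration (the three-page link list is invented for illustration; 0.85 is the conventional damping factor):

```python
# PageRank by repeated updates from a link list until the values stabilise.

def pagerank(links, n, damping=0.85, tol=1e-10):
    """links: list of (src, dst) pairs; pages are numbered 0..n-1."""
    out_degree = [0] * n
    for s, _ in links:
        out_degree[s] += 1
    rank = [1.0 / n] * n
    while True:
        new = [(1.0 - damping) / n] * n
        for s, d in links:
            new[d] += damping * rank[s] / out_degree[s]
        # Dangling pages (no out-links) spread their rank uniformly.
        dangling = sum(rank[i] for i in range(n) if out_degree[i] == 0)
        new = [r + damping * dangling / n for r in new]
        if max(abs(a - b) for a, b in zip(new, rank)) < tol:
            return new
        rank = new

# Tiny graph: page 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
ranks = pagerank([(0, 1), (0, 2), (1, 2), (2, 0)], n=3)
```

Page 2 receives links from both other pages, so it ends up with the highest rank, which is exactly the kind of effect the visualisation is meant to make visible.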

  6. Use of newly developed standardized form for interpretation of high-resolution CT in screening for pneumoconiosis

    International Nuclear Information System (INIS)

    Julien, P.J.; Sider, L.; Silverman, J.M.; Dahlgren, J.; Harber, P.; Bunn, W.

    1991-01-01

    This paper reports that although the International Labour Office (ILO) standard for interpretation of the posteroanterior chest radiograph has been available for 10 years, there has been no attempt to standardize high-resolution CT (HRCT) readings for screening of pneumoconiosis. An integrated respiratory surveillance program for 87 workers exposed to inorganic dust was conducted. This program consisted of a detailed occupational exposure history, physical symptoms and signs, spirometry, chest radiography, and HRCT. Two groups of workers with known exposure were studied with HRCT. Group 1 had normal spirometry results and chest radiographs, and group 2 had abnormalities at spirometry or on chest radiographs. The HRCT scans were read independently of the clinical findings and chest radiographs, and were interpreted using an ILO-based standard form developed by the authors for this project. With the newly developed HRCT form, individual descriptive, localized-severity, and overall rating systems have been developed and compared for inter- and intraobserver consistency

  7. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    …of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself…

  8. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
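
The two evaluation measures used here, scene cloud fraction and pixel-level agreement against a manual mask, are straightforward to compute; the sketch below uses made-up eight-pixel masks:

```python
# Scene cloud-fraction error and pixel agreement between an automated cloud
# mask and a manual (visual) reference mask, for a single scene.

def mask_scores(auto_mask, manual_mask):
    n = len(auto_mask)
    agree = sum(a == m for a, m in zip(auto_mask, manual_mask)) / n
    cf_error = sum(auto_mask) / n - sum(manual_mask) / n
    return agree, cf_error

auto   = [1, 1, 0, 0, 1, 0, 0, 0]   # 1 = pixel flagged cloudy
manual = [1, 1, 0, 0, 0, 0, 0, 1]
agreement, cf_err = mask_scores(auto, manual)
```

Note that the two measures can disagree: here one false positive and one miss cancel in the cloud fraction, while the pixel agreement still records both errors.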

  9. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L 1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L 1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotype polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry

  10. Developing the science product algorithm testbed for Chinese next-generation geostationary meteorological satellites: Fengyun-4 series

    Science.gov (United States)

    Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin

    2017-08-01

    Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over the FY-2: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing the prototypes of FY-4 science algorithms, two science product algorithm testbeds, for imagers and sounders, have been developed by scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in FORTRAN and C programming languages for Linux or UNIX systems, have been tested successfully using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully using a proxy imager, the Himawari-8 Advanced Himawari Imager (AHI), and sounder data obtained from the Atmospheric InfraRed Sounder, thus demonstrating their robustness. In addition, in early 2016, the FY-4 AWG developed, based on the imager testbed, a near real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.

  11. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    Full Text Available The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that present filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA (“ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning”; Gandola et al., 2016 [1]). This strategy was used to assess the algorithm's performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to denoise input images from background signals; then spline curves and the least squares method were used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
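
The Sobel step can be sketched directly on a toy grayscale image (pure Python; in the real pipeline spline curves are then fitted to the detected filament edges):

```python
# Sobel gradient magnitude: highlights edges (e.g. filament boundaries)
# against a flat background.  The 6x5 "image" is invented.
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical bright stripe: strong response along its left and right edges.
image = [[0, 0, 9, 9, 0, 0] for _ in range(5)]
edges = sobel_magnitude(image)
```

The border pixels are left at zero here; a production version would pad the image or handle the boundary explicitly.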

  12. Clustering and Candidate Motif Detection in Exosomal miRNAs by Application of Machine Learning Algorithms.

    Science.gov (United States)

    Gaur, Pallavi; Chaturvedi, Anoop

    2017-07-22

    The clustering pattern and motifs give immense information about any biological data. An application of machine learning algorithms for clustering and candidate motif detection in miRNAs derived from exosomes is depicted in this paper. Recent progress in the field of exosome research, and more particularly regarding exosomal miRNAs, has given rise to much bioinformatics-based research. Information on the clustering patterns and candidate motifs in miRNAs of exosomal origin would help in analyzing existing as well as newly discovered miRNAs within exosomes. Along with obtaining the clustering pattern and candidate motifs in exosomal miRNAs, this work also elaborates on the usefulness of machine learning algorithms that can be efficiently used and executed on various programming languages/platforms. Data were clustered and sequence candidate motifs were detected successfully. The results were compared and validated with some available web tools such as 'BLASTN' and 'MEME suite'. The machine learning algorithms for the aforementioned objectives were applied successfully. This work elaborated the utility of machine learning algorithms and language platforms to achieve the tasks of clustering and candidate motif detection in exosomal miRNAs. With the information on the mentioned objectives, deeper insight would be gained for analyses of newly discovered miRNAs in exosomes, which are considered to be circulating biomarkers. In addition, the execution of machine learning algorithms on various language platforms gives users more flexibility to try multiple iterations according to their requirements. This approach can be applied to other biological data-mining tasks as well.
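
As a hedged, much-simplified stand-in for the paper's machine-learning pipeline, candidate-motif detection can be reduced to counting overrepresented k-mers; the short RNA-like sequences below are invented:

```python
# Toy candidate-motif scan: count all k-mers across a set of sequences and
# report the most frequent one, the simplest notion of a "candidate motif".
from collections import Counter

def top_kmer(seqs, k):
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    motif, freq = counts.most_common(1)[0]
    return motif, freq

sequences = ["UGAGGUAGUA", "AAUGAGGUAC", "CCUGAGGUAG"]
motif, freq = top_kmer(sequences, k=5)
```

Tools like MEME replace raw counting with probabilistic models, but the output contract, a short subsequence shared across inputs, is the same.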

  13. Algorithm development and verification of UASCM for multi-dimension and multi-group neutron kinetics model

    International Nuclear Information System (INIS)

    Si, S.

    2012-01-01

    The Universal Algorithm of Stiffness Confinement Method (UASCM) for neutron kinetics model of multi-dimensional and multi-group transport equations or diffusion equations has been developed. The numerical experiments based on transport theory code MGSNM and diffusion theory code MGNEM have demonstrated that the algorithm has sufficient accuracy and stability. (authors)

  14. Development of an algorithm for quantifying extremity biological tissue

    International Nuclear Information System (INIS)

    Pavan, Ana L.M.; Miranda, Jose R.A.; Pina, Diana R. de

    2013-01-01

    Computed radiography (CR) has become the most widely used technology for image acquisition and production since its introduction in the 80s. Detection and early diagnosis, obtained via CR, are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used for optimization of these images are based on international protocols. Therefore, it is necessary to compose radiographic techniques for the CR system that provide a secure medical diagnosis, with doses as low as reasonably achievable. To this end, the aim of this work is to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was developed. Using Matlab® software, a computational algorithm was developed that is able to quantify the average thickness of the soft tissue and bones present in the anatomical region under study, as well as the corresponding thicknesses of simulator materials (aluminium and lucite). This was possible through the application of masking and Gaussian histogram-removal techniques. As a result, an average soft tissue thickness of 18.97 mm and bone tissue thickness of 6.15 mm were obtained, with equivalents in simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agreed with the mean thickness of the biological tissues of a standard patient's hand, enabling the construction of a homogeneous phantom

  15. New Enhanced Artificial Bee Colony (JA-ABC5 Algorithm with Application for Reactive Power Optimization

    Directory of Open Access Journals (Sweden)

    Noorazliza Sulaiman

    2015-01-01

    Full Text Available The standard artificial bee colony (ABC algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 to enhance convergence speed and improve the ability to reach the global optimum by balancing exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase the exploitation process. Besides that, modified mutation equations have also been introduced in the employed and onlooker-bees phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested to optimize the reactive power optimization problem. The performance results have clearly shown that the newly proposed algorithm has outperformed other compared algorithms in terms of convergence speed and global optimum achievement.

  16. New enhanced artificial bee colony (JA-ABC5) algorithm with application for reactive power optimization.

    Science.gov (United States)

    Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani

    2015-01-01

    The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 to enhance convergence speed and improve the ability to reach the global optimum by balancing exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase the exploitation process. Besides that, modified mutation equations have also been introduced in the employed and onlooker-bees phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested to optimize the reactive power optimization problem. The performance results have clearly shown that the newly proposed algorithm has outperformed other compared algorithms in terms of convergence speed and global optimum achievement.
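
The employed-bee, onlooker-bee, and scout phases common to both the standard ABC and JA-ABC5 can be sketched as follows (an illustrative bare-bones ABC, not JA-ABC5 itself; the sphere objective and all parameters are invented):

```python
# Bare-bones artificial bee colony: employed bees mutate food sources,
# onlookers re-exploit a good source, scouts replace exhausted ones.
# Minimises the sphere function on [-5, 5]^2.
import random

random.seed(1)

def abc_minimise(f, dim=2, n_sources=10, limit=20, cycles=200):
    lo, hi = -5.0, 5.0
    src = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources

    def try_neighbour(i):
        k = random.randrange(n_sources)          # random partner source
        j = random.randrange(dim)                # random dimension
        cand = src[i][:]
        cand[j] += random.uniform(-1, 1) * (src[i][j] - src[k][j])
        cand[j] = min(hi, max(lo, cand[j]))
        if f(cand) < f(src[i]):                  # greedy selection
            src[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_sources):               # employed-bee phase
            try_neighbour(i)
        best = min(range(n_sources), key=lambda i: f(src[i]))
        for _ in range(n_sources):               # onlooker phase (greedy pick)
            try_neighbour(best)
        for i in range(n_sources):               # scout phase
            if trials[i] > limit:
                src[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(src, key=f)

best = abc_minimise(lambda v: sum(x * x for x in v))
```

JA-ABC5's modifications act precisely on the mutation equation and the balance between the employed and onlooker phases shown here.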

  17. Verification of gamma knife based fractionated radiosurgery with newly developed head-thorax phantom

    International Nuclear Information System (INIS)

    Bisht, Raj Kishor; Kale, Shashank Sharad; Natanasabapathi, Gopishankar; Singh, Manmohan Jit; Agarwal, Deepak; Garg, Ajay; Rath, Goura Kishore; Julka, Pramod Kumar; Kumar, Pratik; Thulkar, Sanjay; Sharma, Bhawani Shankar

    2016-01-01

    Objective: The purpose of the study is to verify Gamma Knife Extend™ system (ES) based fractionated stereotactic radiosurgery with a newly developed head-thorax phantom. Methods: Phantoms are extensively used to measure radiation dose and verify treatment plans in radiotherapy. A human upper-body-shaped phantom with thorax was designed to simulate fractionated stereotactic radiosurgery using the Extend™ system of the Gamma Knife. The central component of the phantom aids in performing radiological precision tests, dosimetric evaluation and treatment verification. A hollow right circular cylindrical space of diameter 7.0 cm was created at the centre of this component to hold various dosimetric devices using suitable adaptors. The phantom is made of poly(methyl methacrylate) (PMMA), a transparent thermoplastic material. Two sets of disk assemblies were designed to place dosimetric films in (1) horizontal (xy) and (2) vertical (xz) planes. Specific cylindrical adaptors were designed to place a thimble ionization chamber inside the phantom for point dose recording along the xz axis. EBT3 Gafchromic films were used to analyze and map the radiation field. The focal precision test was performed using a 4 mm collimator shot in the phantom to check the radiological accuracy of treatment. The phantom head position within the Extend™ frame was estimated using encoded aperture measurement of the repositioning check tool (RCT). For treatment verification, the phantom with inserts for film and ion chamber was scanned in the reference treatment position using an X-ray computed tomography (CT) machine and the acquired stereotactic images were transferred into Leksell Gammaplan (LGP). A patient treatment plan with a hypo-fractionated regimen was delivered and identical fractions were compared using EBT3 films and in-house MATLAB codes. Results: RCT measurement showed an overall positional accuracy of 0.265 mm (range 0.223 mm–0.343 mm). Gamma index analysis across fractions exhibited close agreement between LGP and film

  18. A New Lightweight Watchdog-Based Algorithm for Detecting Sybil Nodes in Mobile WSNs

    Directory of Open Access Journals (Sweden)

    Rezvan Almas Shehni

    2017-12-01

    Full Text Available Wide-spread deployment of Wireless Sensor Networks (WSNs) necessitates special attention to security issues, amongst which Sybil attacks are the most important ones. At the core of a Sybil attack, malicious nodes try to disrupt network operations by creating several fabricated IDs. Due to energy consumption concerns in WSNs, devising detection algorithms that relieve the sensor nodes of high computational and communication loads is of great importance. In this paper, a new computationally lightweight watchdog-based algorithm is proposed for detecting Sybil IDs in mobile WSNs. The proposed algorithm employs watchdog nodes for collecting detection information and a designated watchdog node for detection information processing and final Sybil list generation. Benefiting from a newly devised co-presence state diagram and adequate detection rules, the new algorithm features low extra communication overhead, as well as a satisfactory compromise between two otherwise contradictory detection measures of performance, the True Detection Rate (TDR) and the False Detection Rate (FDR). Extensive simulation results illustrate the merits of the new algorithm compared to a couple of recent watchdog-based Sybil detection algorithms.

  19. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    Science.gov (United States)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps, and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluation of algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. The average lead time of the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. For tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has been detected, and before severe weather is detected at the ground, drops the 2σ lightning jump algorithm's false alarm rate from 35% to 21%. Cold-season, low-topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth of lightning as compared to a supercellular or summertime airmass thunderstorm environment.
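
    The 2σ algorithm, as described in the lightning jump literature, flags a jump when the latest rate of change of the total flash rate exceeds the mean plus twice the standard deviation of the recent rate changes. A minimal sketch of that rule (history length and units are illustrative assumptions):

    ```python
    import statistics

    def lightning_jumps(flash_rates, history=5):
        """Simplified 2-sigma lightning jump detector (illustration).

        flash_rates: total flash rate per analysis interval (flashes/min).
        A jump is flagged when the latest rate change (DFRDT) exceeds the
        mean + 2 * stdev of the preceding `history` rate changes.
        """
        dfrdt = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
        jumps = []
        for t in range(history, len(dfrdt)):
            recent = dfrdt[t - history:t]
            threshold = statistics.mean(recent) + 2 * statistics.stdev(recent)
            if dfrdt[t] > threshold:
                jumps.append(t + 1)       # index into flash_rates
        return jumps

    rates = [10, 11, 10, 12, 11, 12, 30, 31, 32]   # sudden ramp-up at t=6
    print(lightning_jumps(rates))                   # [6]
    ```

    Note how the sustained high flash rate after the jump does not re-trigger: only the abrupt change relative to recent variability counts, which is what gives the algorithm its lead time before severe weather.
    
    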

  20. Flagellation of Pseudomonas aeruginosa in newly divided cells

    Science.gov (United States)

    Zhao, Kun; Lee, Calvin; Anda, Jaime; Wong, Gerard

    2015-03-01

    For the monotrichous bacterium Pseudomonas aeruginosa, after cell division one daughter cell inherits the old flagellum from its mother cell, and the other grows a new flagellum during or after cell division. It had been shown that the new flagellum grows at the distal pole of the dividing cell when the two daughter cells have not yet completely separated. However, for those daughter cells that grow new flagella after division, it remains unknown at which pole the new flagellum will grow. Here, by combining our newly developed bacterial family-tree tracking techniques with genetic manipulation methods, we show that for the daughter cell that did not inherit the old flagellum, a new flagellum has about a 90% chance of growing at the newly formed pole. We propose a model for flagellation of P. aeruginosa.

  1. Surface ultrastructures of the human laryngeal mucosa - observation by a newly developed technique of SEM cinematography

    International Nuclear Information System (INIS)

    Ohyama, M.; Ohno, I.; Fujita, T.; Adachi, K.

    1981-01-01

    With the newly developed technique of SEM cinematography, surface ultrastructures of the normal and pathological human laryngeal mucosa were demonstrated. The high specialization of the laryngeal mucosa, with its marked regional differences, stresses the fact that even the squamous epithelium and non-ciliated epithelium may play a role of utmost importance. All specimens were obtained after laryngectomy from 10 patients affected by laryngeal cancer, treated with or without preoperative Lineac irradiation in total doses of 3,500-4,500 rad. Special attention was paid to the occurrence of microvilli and microplicae in the normal and pathological mucosa of the larynx, and their morphological and physiological significance is discussed briefly. (Auth.)

  2. Transferability of Newly Developed Pear SSR Markers to Other Rosaceae Species.

    Science.gov (United States)

    Fan, L; Zhang, M-Y; Liu, Q-Z; Li, L-T; Song, Y; Wang, L-F; Zhang, S-L; Wu, J

    2013-01-01

    A set of 120 simple sequence repeats (SSRs) was developed from the newly assembled pear sequence and evaluated for polymorphisms in seven genotypes of pear from different genetic backgrounds. Of these, 67 (55.8 %) primer pairs produced polymorphic amplifications. Together, the 67 SSRs detected 277 alleles with an average of 4.13 per locus. Sequencing of the amplification products from randomly picked loci NAUPy31a and NAUpy53a verified the presence of the SSR loci. When the 67 primer pairs were tested on 96 individual members of eight species in the Rosaceae family, 61.2 % (41/67) of the tested SSRs successfully amplified a PCR product in at least one of the Rosaceae genera. The transferability from pear to different species varied from 58.2 % (apple) to 11.9 % (cherry). The ratio of transferability also reflected the closer relationships within Maloideae over Prunoideae. Two pear SSR markers, NAUpy43c and NAUpy55k, could distinguish the 20 different apple genotypes thoroughly, and UPGMA cluster analysis grouped them into three groups at the similarity level of 0.56. The high level of polymorphism and good transferability of pear SSRs to Rosaceae species indicate their promise for application to future molecular screening, map construction, and comparative genomic studies among pears and other Rosaceae species.

  3. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.
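
    To illustrate the kind of iterative reconstruction algorithm that such building blocks compose, here is a pure-Python sketch of SIRT on a toy 2×2 system. This is not the ASTRA API; ASTRA provides GPU-accelerated projectors and geometry objects, whereas this toy uses an explicit dense system matrix:

    ```python
    def sirt(A, b, n_iter=100):
        """Minimal SIRT: x += C * A^T * R * (b - A x), where R and C hold the
        inverse row and column sums of A (toy dense implementation)."""
        rows, cols = len(A), len(A[0])
        R = [1.0 / sum(A[i]) for i in range(rows)]
        C = [1.0 / sum(A[i][j] for i in range(rows)) for j in range(cols)]
        x = [0.0] * cols
        for _ in range(n_iter):
            resid = [R[i] * (b[i] - sum(A[i][j] * x[j] for j in range(cols)))
                     for i in range(rows)]
            for j in range(cols):
                x[j] += C[j] * sum(A[i][j] * resid[i] for i in range(rows))
        return x

    # 2x2 image [1, 2; 3, 4] probed by its two row sums and two column sums.
    A = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
    b = [3, 7, 4, 6]
    print([round(v, 4) for v in sirt(A, b)])    # [1.0, 2.0, 3.0, 4.0]
    ```

    The value of a toolbox like ASTRA is that the forward projection (`A x`) and backprojection (`A^T resid`) steps become interchangeable, geometry-aware, GPU-backed operators, so the same few lines of algorithm logic scale to real dual-axis tilt series.
    
    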

  4. A sonification algorithm for developing the off-roads models for driving simulators

    Science.gov (United States)

    Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia

    2018-01-01

    In this paper, a sonification algorithm for developing off-road models for driving simulators is proposed. The aim of this algorithm is to overcome the difficulty of identifying the heuristics best suited to a particular off-road profile built from measurements. The sonification algorithm is based on stochastic polynomial chaos analysis, which is suitable for solving equations with random input data. The fluctuations are generated by incomplete measurements, leading to inhomogeneities in the cross-sectional curves of off-roads before and after deformation, unstable contact between the tire and the road, and unrealistic distributions of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems, and the results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual-reality environment with increased degrees of realism for driving simulators and higher user flexibility.
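
    Polynomial chaos represents a random quantity as a series in orthogonal polynomials of a standard random input. As a minimal, hedged sketch (a toy stand-in, not the paper's road-profile model), the first probabilists'-Hermite coefficients of f(ξ) with ξ ~ N(0, 1) can be estimated by Monte Carlo:

    ```python
    import random

    def pce_coefficients(f, n_samples=100_000, seed=0):
        """Estimate the first Hermite chaos coefficients of f(xi), xi ~ N(0,1):
        c_k = E[f(xi) * He_k(xi)] / k!  (probabilists' Hermite polynomials)."""
        rng = random.Random(seed)
        he = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]
        fact = [1.0, 1.0, 2.0]
        acc = [0.0, 0.0, 0.0]
        for _ in range(n_samples):
            xi = rng.gauss(0.0, 1.0)
            fx = f(xi)
            for k in range(3):
                acc[k] += fx * he[k](xi)
        return [acc[k] / (n_samples * fact[k]) for k in range(3)]

    # f(xi) = xi^2 has the exact expansion He_0 + He_2, i.e. c = [1, 0, 1].
    c = pce_coefficients(lambda x: x * x)
    print([round(v, 2) for v in c])
    ```

    In the paper's setting the random input models the measurement-induced fluctuations of the road profile, and the chaos coefficients feed the simulator model; intrusive or quadrature-based coefficient computation would replace the Monte Carlo estimate here.
    
    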

  5. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    International Nuclear Information System (INIS)

    Aarle, Wim van; Palenstijn, Willem Jan; De Beenhouwer, Jan; Altantzis, Thomas; Bals, Sara; Batenburg, K. Joost; Sijbers, Jan

    2015-01-01

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series

  6. Development of a Crosstalk Suppression Algorithm for KID Readout

    Science.gov (United States)

    Lee, Kyungmin; Ishitsuka, H.; Oguri, S.; Suzuki, J.; Tajima, O.; Tomita, N.; Won, Eunil; Yoshida, M.

    2018-06-01

    The GroundBIRD telescope aims to detect the B-mode polarization of the cosmic microwave background radiation using a kinetic inductance detector array as a polarimeter. For the readout of the signal from the detector array, we have developed a frequency-division multiplexing readout system based on a digital down converter (DDC) method. Such systems generally suffer from leakage problems caused by crosstalk. A window function was applied in the field-programmable gate arrays to mitigate these problems and was tested at the algorithm level.
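
    The reason a window function helps: a tone that does not fall exactly on a frequency bin leaks energy into every other channel, and a tapered window suppresses those sidelobes. A small numerical illustration (naive DFT, rectangular vs. Hann window; the bin choices are arbitrary, and this is not the GroundBIRD FPGA code):

    ```python
    import cmath, math

    def dft_mag(samples):
        """Magnitude spectrum via a naive DFT (O(n^2), fine for a demo)."""
        n = len(samples)
        return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                        for t, s in enumerate(samples))) for k in range(n)]

    n = 64
    tone = [math.cos(2 * math.pi * 10.5 * t / n) for t in range(n)]  # off-bin tone
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * t / (n - 1)) for t in range(n)]

    rect_spec = dft_mag(tone)
    hann_spec = dft_mag([s * w for s, w in zip(tone, hann)])

    # Leakage into a far-away bin (k = 20), relative to each spectrum's peak.
    rect_leak = rect_spec[20] / max(rect_spec)
    hann_leak = hann_spec[20] / max(hann_spec)
    print(f"rectangular: {rect_leak:.4f}  hann: {hann_leak:.4f}")
    ```

    The Hann-windowed spectrum leaks orders of magnitude less power into distant bins, which is precisely the crosstalk suppression sought in a multiplexed KID readout (at the cost of a slightly wider main lobe).
    
    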

  7. Cross-Linking Mast Cell Specific Gangliosides Stimulates the Release of Newly Formed Lipid Mediators and Newly Synthesized Cytokines

    Directory of Open Access Journals (Sweden)

    Edismauro Garcia Freitas Filho

    2016-01-01

    Mast cells are immunoregulatory cells that participate in inflammatory processes. Cross-linking mast cell-specific GD1b-derived gangliosides with mAbAA4 results in partial activation of mast cells without the release of preformed mediators. The present study examines the release of newly formed and newly synthesized mediators following ganglioside cross-linking. Cross-linking the gangliosides with mAbAA4 released the newly formed lipid mediators prostaglandins D2 and E2, without release of leukotrienes B4 and C4. The effect of cross-linking these gangliosides on the activation of enzymes in the arachidonate cascade was then investigated. Ganglioside cross-linking resulted in phosphorylation of cytosolic phospholipase A2 and increased expression of cyclooxygenase-2. Translocation of 5-lipoxygenase from the cytosol to the nucleus was not induced by ganglioside cross-linking. Cross-linking of GD1b-derived gangliosides also resulted in the release of the newly synthesized mediators interleukin-4, interleukin-6, and TNF-α. The effect of cross-linking the gangliosides on the MAP kinase pathway was then investigated. Cross-linking the gangliosides induced the phosphorylation of ERK1/2, JNK1/2, and p38, as well as activating both NFκB and NFAT in a Syk-dependent manner. Therefore, cross-linking the mast cell-specific GD1b-derived gangliosides results in the activation of signaling pathways that culminate in the release of newly formed and newly synthesized mediators.

  8. Development and testing of a mobile application to support diabetes self-management for people with newly diagnosed type 2 diabetes: a design thinking case study.

    Science.gov (United States)

    Petersen, Mira; Hempler, Nana F

    2017-06-26

    Numerous mobile applications have been developed to support diabetes self-management. However, the majority of these applications lack a theoretical foundation and the involvement of people with diabetes during development. The aim of this study was to develop and test a mobile application (app) supporting diabetes self-management among people with newly diagnosed type 2 diabetes using design thinking. The app was developed and tested in 2015 using a design-based research approach involving target users (individuals newly diagnosed with type 2 diabetes), research scientists, healthcare professionals, designers, and app developers. The research approach comprised three major phases: inspiration, ideation, and implementation. The first phase included observations of diabetes education and 12 in-depth interviews with users regarding challenges and needs related to living with diabetes. The ideation phase consisted of four interactive workshops with users focusing on app needs, in which ideas were developed and prioritized. Finally, 14 users tested the app over 4 weeks; they were interviewed about usability and their perceptions of the app as a support tool. A multifunctional app was found to be useful for people with newly diagnosed type 2 diabetes. The final app comprised five major functions: an overview of diabetes activities after diagnosis, recording of health data, reflection games and goal setting, knowledge games, and recording of psychological data such as sleep, fatigue, and well-being. Users found the app to be a valuable tool for support, particularly for raising their awareness of their psychological health and for informing and guiding them through the healthcare system after diagnosis. The design thinking processes used in the development and implementation of the mobile health app were crucial to creating value for users. More attention should be paid to the training of professionals who introduce health apps. Danish Data Protection Agency: 2012-58-0004.
Registered 6

  9. Newly Homeless Youth Typically Return Home

    OpenAIRE

    Milburn, Norweeta G.; Rosenthal, Doreen; Rotheram-Borus, Mary Jane; Mallett, Shelley; Batterham, Philip; Rice, Eric; Solorio, Rosa

    2007-01-01

    165 newly homeless adolescents from Melbourne, Australia and 261 from Los Angeles, United States were surveyed and followed for two years. Most newly homeless adolescents returned home (70% U.S., 47% Australia) for significant amounts of time (39% U.S., 17% Australia more than 12 months) within two years of becoming homeless.

  10. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate remain two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology to reduce the false alarm rate while leaving the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen such that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images demonstrate the effectiveness of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is extensible to any pair of detection algorithms that have different false alarm sources.
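
    The key idea, two detectors with independent false alarm sources combined so that only detections confirmed by both survive, can be sketched in a few lines. This is a crude single-scale illustration with made-up thresholds, not the paper's multi-scale AAGD/LoPSF implementation:

    ```python
    def aagd(img, r, c):
        """Average absolute gray difference: center vs mean of its 3x3 ring."""
        ring = [img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)]
        return abs(img[r][c] - sum(ring) / 8.0)

    def laplacian(img, r, c):
        """4-neighbour Laplacian magnitude, a crude LoPSF stand-in."""
        return abs(4 * img[r][c] - img[r-1][c] - img[r+1][c]
                   - img[r][c-1] - img[r][c+1])

    def detect(img, score, thresh):
        h, w = len(img), len(img[0])
        return {(r, c) for r in range(1, h-1) for c in range(1, w-1)
                if score(img, r, c) > thresh}

    # 9x9 scene: a vertical step edge (columns 0-1) and one point target at (4,4).
    img = [[6 if c <= 1 else 0 for c in range(9)] for r in range(9)]
    img[4][4] = 10

    hits_a = detect(img, aagd, 2.0)        # fires on the target AND the edge
    hits_b = detect(img, laplacian, 10.0)  # fires on the target only
    fused = hits_a & hits_b                # AND fusion suppresses the edge alarms
    print(fused)                            # {(4, 4)}
    ```

    Because the two scores respond to different structures (local contrast vs. point-like curvature), their intersection keeps the true target while discarding each detector's characteristic false alarms, which is the essence of the proposed methodology.
    
    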

  11. Technetium-99m carboxymethylcellulose: A newly developed fibre marker for gastric emptying studies

    International Nuclear Information System (INIS)

    Schade, J.H.; Hoving, J.; Brouweres, J.R.B.J.; Riedstra-van Gent, H.G.; Zijlstra, J.; Dijkstra, J.P.H.

    1991-01-01

    We report a study of technetium-99m-labelled carboxymethylcellulose (99mTc-CMC) as a newly developed non-digestible marker of the solid phase of gastric contents. The radiosynthesis is simple and shows a high labelling efficiency. In vitro and in vivo experiments demonstrated stability of the marker in the gastrointestinal tract during the process of gastric emptying. The gastric half-emptying time in ten healthy volunteers of both sexes was 105±17 min (mean±SD). This rate of gastric emptying is similar to that of non-digestible solid-phase markers such as in vivo labelled 99mTc-chicken liver or radio-iodinated cellulose. In comparison with digestible solid-phase markers such as 99mTc-labelled pancake or 99mTc-cooked egg, gastric emptying of 99mTc-CMC occurred more slowly, confirming the expected behaviour of a non-digestible solid-phase marker. We conclude that 99mTc-CMC has the advantage of a simple and rapid labelling procedure and may be useful for clinical studies of gastric emptying. (orig.)

  12. Development of independent MU/treatment time verification algorithm for non-IMRT treatment planning: A clinical experience

    Science.gov (United States)

    Tatli, Hamza; Yucel, Derya; Yilmaz, Sercan; Fayda, Merdan

    2018-02-01

    The aim of this study is to develop an algorithm for independent MU/treatment time (TT) verification for non-IMRT treatment plans, as part of a QA program to ensure treatment delivery accuracy. Two radiotherapy delivery units and their treatment planning systems (TPS) were commissioned in the Liv Hospital Radiation Medicine Center, Tbilisi, Georgia. Beam data were collected according to the vendors' collection guidelines and AAPM report recommendations, and processed in Microsoft Excel during in-house algorithm development. The algorithm is designed and optimized for calculating SSD and SAD treatment plans, based on the AAPM TG-114 dose calculation recommendations, and is coded and embedded in an MS Excel spreadsheet as a preliminary verification algorithm (VA). Treatment verification plans were created by the TPSs based on IAEA TRS-430 recommendations and also calculated by the VA; point measurements were collected with a solid-water phantom and compared. The study showed that the in-house VA can be used for MU/TT verification of non-IMRT plans.
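
    An independent MU check of this kind multiplies a chain of dosimetric factors, in the spirit of the TG-114 general point-dose formalism. A hedged sketch (every factor name and value below is an illustrative assumption, not the authors' commissioned beam data):

    ```python
    def independent_mu(dose_cgy, ref_dose_rate=1.0, sc=1.0, sp=1.0,
                       tpr=1.0, inv_sq=1.0, wedge_factor=1.0):
        """Point-dose MU estimate in the spirit of AAPM TG-114:
        MU = D / (D'_ref * Sc * Sp * TPR * ISF * WF).
        All factor names and values here are illustrative assumptions."""
        return dose_cgy / (ref_dose_rate * sc * sp * tpr * inv_sq * wedge_factor)

    def percent_difference(mu_check, mu_tps):
        """Disagreement between the independent check and the TPS value."""
        return 100.0 * (mu_check - mu_tps) / mu_tps

    # Hypothetical SAD field: 200 cGy per fraction prescribed at the isocentre.
    mu = independent_mu(200.0, ref_dose_rate=1.0, sc=0.99, sp=1.01, tpr=0.85)
    print(round(mu, 1))                               # 235.3
    print(abs(percent_difference(mu, 233.0)) < 5.0)   # True: within a 5% action level
    ```

    In practice the spreadsheet would interpolate Sc, Sp and TPR from commissioned tables as a function of field size and depth; the structure of the check, however, is just this one product and a tolerance comparison.
    
    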

  13. A Novel Chaotic Particle Swarm Optimization Algorithm for Parking Space Guidance

    Directory of Open Access Journals (Sweden)

    Na Dong

    2016-01-01

    An evolutionary approach to parking space guidance based upon a novel Chaotic Particle Swarm Optimization (CPSO) algorithm is proposed. In the newly proposed CPSO algorithm, chaotic dynamics are combined into the position-updating rules of Particle Swarm Optimization to improve the diversity of solutions and to avoid being trapped in local optima. This novel approach, which combines the strengths of Particle Swarm Optimization and chaotic dynamics, is then applied to the route optimization (RO) problem of parking lots, an important issue in the management systems of large-scale parking lots. It is used to find optimized paths between any source and destination nodes in the route network. Route optimization problems based on real parking lots are introduced for analysis, and the effectiveness and practicality of this novel optimization algorithm for parking space guidance are verified through the application results.
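
    A common way to inject chaos into PSO is to replace the uniform random coefficients in the velocity update with iterates of the logistic map. A minimal sketch on a toy continuous objective (the paper applies CPSO to discrete route optimization; parameters here are conventional PSO defaults, not the authors'):

    ```python
    import random

    def logistic_chaos(z=0.7):
        """Logistic map z <- 4 z (1 - z): a chaotic sequence in (0, 1)
        used in place of uniform random draws in the velocity update."""
        while True:
            z = 4.0 * z * (1.0 - z)
            yield z

    def cpso(f, dim=2, n_particles=20, n_iter=200, lo=-5.0, hi=5.0, seed=1):
        rng = random.Random(seed)
        chaos = logistic_chaos()
        xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in xs]
        gbest = min(pbest, key=f)[:]
        for _ in range(n_iter):
            for i, x in enumerate(xs):
                for d in range(dim):
                    r1, r2 = next(chaos), next(chaos)     # chaotic, not uniform
                    vs[i][d] = (0.7 * vs[i][d]
                                + 1.5 * r1 * (pbest[i][d] - x[d])
                                + 1.5 * r2 * (gbest[d] - x[d]))
                    x[d] = min(hi, max(lo, x[d] + vs[i][d]))
                if f(x) < f(pbest[i]):
                    pbest[i] = x[:]
                    if f(x) < f(gbest):
                        gbest = x[:]
        return gbest

    sphere = lambda p: sum(v * v for v in p)
    best = cpso(sphere)
    print(f"best fitness: {sphere(best):.2e}")
    ```

    The chaotic sequence is deterministic yet non-repeating, which is the property the paper exploits to diversify the search and reduce premature convergence to local optima.
    
    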

  14. Development of Image Reconstruction Algorithms in electrical Capacitance Tomography

    International Nuclear Information System (INIS)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-01-01

    Electrical Capacitance Tomography (ECT) has not yet matured enough to be used at an industrial level. That is due, first, to difficulties in measuring very small capacitances (in the femtofarad range) and, second, to the problem of on-line image reconstruction. The latter problem also stems from the small number of electrodes (at most 16), which makes the usual reconstruction algorithms error-prone. This work describes a new, purely geometrical method that could be used for this purpose. (Author) 4 refs
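
    For context, the classical ECT baseline that the usual algorithms build on is linear back-projection (LBP), sketched below on a toy system. This is the standard baseline, not the paper's geometrical method, and the sensitivity values are invented for illustration:

    ```python
    def lbp_reconstruct(sens, cap_norm):
        """Linear back-projection, the classical ECT baseline:
        g_j = sum_i S_ij * lambda_i / sum_i S_ij,
        where lambda_i are normalized capacitance changes."""
        n_meas, n_pix = len(sens), len(sens[0])
        img = []
        for j in range(n_pix):
            num = sum(sens[i][j] * cap_norm[i] for i in range(n_meas))
            den = sum(sens[i][j] for i in range(n_meas))
            img.append(num / den)
        return img

    # Toy example: 3 electrode-pair measurements, 4-pixel image.
    # sens[i][j]: sensitivity of measurement i to pixel j (assumed known).
    sens = [[0.8, 0.2, 0.1, 0.1],
            [0.1, 0.7, 0.7, 0.1],
            [0.1, 0.1, 0.2, 0.8]]
    cap_norm = [1.0, 0.0, 0.0]   # only the first pair sees a permittivity change
    img = lbp_reconstruct(sens, cap_norm)
    print([round(v, 2) for v in img])   # [0.8, 0.2, 0.1, 0.1]
    ```

    With only a handful of electrode pairs the problem is severely underdetermined, which is why LBP images are blurry and error-prone, and why alternative formulations such as the geometrical method proposed here are of interest.
    
    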

  15. THE NEXUS BETWEEN ENERGY CONSUMPTION AND FINANCIAL DEVELOPMENT WITH ASYMMETRIC CAUSALITY TEST: NEW EVIDENCE FROM NEWLY INDUSTRIALIZED COUNTRIES

    Directory of Open Access Journals (Sweden)

    Feyyaz Zeren

    2014-01-01

    In this study, the relationship between energy consumption and financial development is investigated via the Hatemi-J (2012) asymmetric causality test, which is able to separate positive and negative shocks in the analysis. In order to capture different dimensions of the financial system, deposit money bank assets to GDP (dbagdp), financial system deposits to GDP (fdgdp) and private credit to GDP (pcrdbgdp) were used as three different indicators. In this study of seven newly industrialized countries spanning the period 1971 to 2010, causality existed for Malaysia and Mexico in both positive and negative shocks, while causality from energy consumption to financial development emerged for the Philippines only in negative shocks. Two-way causality occurred for India, Turkey and Thailand, but none was found for South Africa.
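
    The preprocessing step behind asymmetric causality tests of this kind splits each series into cumulative positive and negative shocks, so that causality can be tested separately on each component. A minimal sketch of that decomposition (the data values are invented):

    ```python
    def asymmetric_shocks(y):
        """Split a series into cumulative positive and negative shocks, in the
        style of Hatemi-J (2012) preprocessing:
        y_t = y_0 + sum(max(dy, 0)) + sum(min(dy, 0))."""
        pos, neg = [0.0], [0.0]
        for prev, cur in zip(y, y[1:]):
            d = cur - prev
            pos.append(pos[-1] + max(d, 0.0))
            neg.append(neg[-1] + min(d, 0.0))
        return pos, neg

    energy = [100.0, 103.0, 101.0, 104.0, 102.0, 107.0]
    pos, neg = asymmetric_shocks(energy)
    print(pos)   # [0.0, 3.0, 3.0, 6.0, 6.0, 11.0]
    print(neg)   # [0.0, 0.0, -2.0, -2.0, -4.0, -4.0]
    ```

    By construction the two components plus the initial value reconstruct the original series; the causality test is then run between, say, the positive shocks of energy consumption and the positive shocks of a financial indicator.
    
    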

  16. A Region-Based GeneSIS Segmentation Algorithm for the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Stelios K. Mylonas

    2015-03-01

    This paper proposes an object-based segmentation/classification scheme for remotely sensed images, based on a novel variant of the recently proposed Genetic Sequential Image Segmentation (GeneSIS) algorithm. GeneSIS segments the image in an iterative manner, whereby at each iteration a single object is extracted via a genetic-based object extraction algorithm. Contrary to the previous pixel-based GeneSIS, where the candidate objects to be extracted were evaluated through the fuzzy content of their included pixels, in the newly developed region-based GeneSIS algorithm a watershed-driven fine segmentation map is initially obtained from the original image, which serves as the basis for the subsequent GeneSIS segmentation. Furthermore, in order to enhance the spatial search capabilities, we introduce a more descriptive encoding scheme in the object extraction algorithm, where the structural search modules are represented by polygonal shapes. Our objectives in the new framework are as follows: enhance the flexibility of the algorithm in extracting more flexible object shapes, assure high classification accuracy, and reduce the execution time of the segmentation, while preserving all the inherent attributes of the GeneSIS approach. Finally, exploiting the inherent ability of GeneSIS to produce multiple segmentations, we also propose two segmentation fusion schemes that operate on the ensemble of segmentations generated by GeneSIS. Our approaches are tested on one urban and two agricultural images. The results show that region-based GeneSIS has considerably lower computational demands than the pixel-based one. Furthermore, the suggested methods achieve higher classification accuracy and good segmentation maps compared to a series of existing algorithms.
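
    The fusion idea, combining an ensemble of segmentations into one more reliable map, can be illustrated with the simplest possible scheme: a pixel-wise majority vote over binary masks. This is a simplified stand-in for the paper's two fusion schemes, and the masks below are invented:

    ```python
    def majority_vote_fusion(masks):
        """Pixel-wise majority vote over an ensemble of binary masks."""
        h, w, n = len(masks[0]), len(masks[0][0]), len(masks)
        return [[1 if sum(m[r][c] for m in masks) * 2 > n else 0
                 for c in range(w)] for r in range(h)]

    # Three noisy 4x4 segmentations of the same object (the left 2x2 block).
    seg1 = [[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]]
    seg2 = [[1,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,0,0]]   # one miss, one spurious pixel
    seg3 = [[1,1,0,1],[1,1,0,0],[0,0,0,0],[0,0,0,0]]   # one spurious pixel
    print(majority_vote_fusion([seg1, seg2, seg3]))
    ```

    The vote cancels the uncorrelated errors of the individual runs and recovers the clean 2×2 object; GeneSIS's fusion schemes pursue the same goal on full multi-class label maps.
    
    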

  17. Design of an X-band accelerating structure using a newly developed structural optimization procedure

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaoxia [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Fang, Wencheng; Gu, Qiang [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Zhao, Zhentang, E-mail: zhaozhentang@sinap.ac.cn [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); University of Chinese Academy of Sciences, Beijing 100049 (China)

    2017-05-11

    An X-band high gradient accelerating structure is a challenging technology for implementation in advanced electron linear accelerator facilities. The present work discusses the design of an X-band accelerating structure for dedicated application to a compact hard X-ray free electron laser facility at the Shanghai Institute of Applied Physics, and numerous design optimizations are conducted with consideration for radio frequency (RF) breakdown, RF efficiency, short-range wakefields, and dipole/quadrupole field modes, to ensure good beam quality and a high accelerating gradient. The designed X-band accelerating structure is a constant gradient structure with a 4π/5 operating mode and input and output dual-feed couplers in a racetrack shape. The design process employs a newly developed effective optimization procedure for optimization of the X-band accelerating structure. In addition, the specific design of couplers providing high beam quality by eliminating dipole field components and reducing quadrupole field components is discussed in detail.

  18. Duplex ultrasound: Indications and findings in a newly created ...

    African Journals Online (AJOL)

    Duplex ultrasound: Indications and findings in a newly created facility at the University of Calabar Teaching Hospital, Calabar. ... It is recommended that timely referrals be made, and mobile Doppler units be acquired to save more lives and limbs in the developing world. Keywords: Calabar, deep venous thrombosis, duplex ...

  19. Competence of newly qualified registered nurses from a nursing college

    Directory of Open Access Journals (Sweden)

    BG Morolong

    2005-09-01

    The South African education and training system, through its policy of outcomes-based education and training, has made competency a national priority. In compliance with this national requirement of producing competent learners, the South African Nursing Council (1999B) requires that beginner professional nurse practitioners and midwives have the necessary knowledge, skills, attitudes and values to enable them to render an efficient professional service. The health care system also demands competent nurse practitioners to ensure quality in health care. In the light of competency being a national priority and a statutory demand, the research question that emerges is: how competent are the newly qualified registered nurses from a specific nursing college in clinical nursing education? A quantitative, non-experimental, contextual design was used to evaluate the competence of newly qualified registered nurses from a specific nursing college. The study was conducted in two phases. The first phase dealt with the development of an instrument, together with its manual, through a conceptualisation process. The second phase focused on the evaluation of the competency of newly qualified nurses using the instrument, based on the steps of the nursing process. A pilot study was conducted to test the feasibility of the items of the instrument. During the evaluation phase, a sample of twenty-six newly qualified nurses was selected by simple random sampling from a target population of thirty-six newly qualified registered nurses. However, six participants withdrew from the study. Data was collected in two general hospitals where the newly qualified registered nurses were working. Observation and questioning were used as data collection techniques in accordance with the developed instrument. Measures were taken to ensure internal validity and reliability of the results. To protect the rights of the participants, the researcher adhered to DENOSA's (1998

  20. Unveiling the development of intracranial injury using dynamic brain EIT: an evaluation of current reconstruction algorithms.

    Science.gov (United States)

    Li, Haoting; Chen, Rongqing; Xu, Canhua; Liu, Benyuan; Tang, Mengxing; Yang, Lin; Dong, Xiuzhen; Fu, Feng

    2017-08-21

    Dynamic brain electrical impedance tomography (EIT) is a promising technique for continuously monitoring the development of cerebral injury. While many reconstruction algorithms are available for brain EIT, there have been few studies comparing their performance in the context of dynamic brain monitoring. To address this problem, we develop a framework for evaluating current algorithms in terms of their ability to correctly identify small intracranial conductivity changes. First, a simulated 3D head phantom with a realistic layered structure and impedance distribution is developed. Next, several reconstruction algorithms, such as back projection (BP), damped least squares (DLS), Bayesian, split Bregman (SB) and GREIT, are introduced. We investigate their temporal response, noise performance, and location and shape errors with respect to different noise levels on the simulation phantom. The results show that the SB algorithm demonstrates superior performance in reducing image error. To further improve the location accuracy, we optimize SB by incorporating brain structure-based conductivity distribution priors, in which differences in the conductivities of different brain tissues and the inhomogeneous conductivity distribution of the skull are considered. We compare this novel algorithm (called SB-IBCD) with SB and DLS using anatomically correct head-shaped phantoms with spatially varying skull conductivity. Main results and significance: SB-IBCD was the most effective in unveiling small intracranial conductivity changes, reducing the image error by an average of 30.0% compared to DLS.
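
    Among the compared methods, the DLS baseline amounts to a Tikhonov-regularized linear update: the conductivity change is estimated as x = (JᵀJ + λI)⁻¹Jᵀb from a sensitivity (Jacobian) matrix J and a boundary-voltage change b. A self-contained toy sketch (the matrices are illustrative, not a real EIT model):

    ```python
    def solve(Ab):
        """Gaussian elimination with partial pivoting; Ab is the augmented matrix."""
        n = len(Ab)
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(Ab[r][col]))
            Ab[col], Ab[piv] = Ab[piv], Ab[col]
            for r in range(col + 1, n):
                f = Ab[r][col] / Ab[col][col]
                for c in range(col, n + 1):
                    Ab[r][c] -= f * Ab[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (Ab[r][n] - sum(Ab[r][c] * x[c] for c in range(r + 1, n))) / Ab[r][r]
        return x

    def damped_least_squares(J, b, lam=1e-3):
        """x = (J^T J + lam*I)^(-1) J^T b, the Tikhonov-regularized update
        behind DLS-type difference EIT reconstruction."""
        m, n = len(J), len(J[0])
        JtJ = [[sum(J[k][i] * J[k][j] for k in range(m)) + (lam if i == j else 0.0)
                for j in range(n)] for i in range(n)]
        Jtb = [sum(J[k][i] * b[k] for k in range(m)) for i in range(n)]
        return solve([row + [rhs] for row, rhs in zip(JtJ, Jtb)])

    # Toy sensitivity matrix and boundary-voltage change; the true conductivity
    # perturbation is [1, 2] (values are illustrative only).
    J = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    b = [1.0, 2.0, 3.0]
    print([round(v, 3) for v in damped_least_squares(J, b, lam=1e-8)])  # [1.0, 2.0]
    ```

    The damping parameter λ trades noise amplification against resolution, which is why the paper evaluates each algorithm across noise levels; the SB and SB-IBCD methods replace this quadratic penalty with sparsity- and anatomy-aware priors.
    
    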

  1. A newly developed grab sampling system for collecting stratospheric air over Antarctica

    Directory of Open Access Journals (Sweden)

    Hideyuki Honda

    1996-07-01

Full Text Available In order to measure the concentrations of various minor constituents and their isotopic ratios in the stratosphere over Antarctica, a simple grab sampling system was newly developed. The sampling system was designed to be launched on a rubber balloon by a small number of personnel under severe experimental conditions. Special attention was paid to minimizing contamination of the sample air, as well as to allowing easy handling of the system. The sampler consisted mainly of a 15-l sample container with electromagnetic and manual valves, control electronics for executing the air sampling procedures and sending the position and status information of the sampler to the ground station, batteries and a transmitter. All these parts were assembled in an aluminum-frame gondola with a shock-absorbing system for landing. The sampler was equipped with a turn-over mechanism to minimize contamination from the gondola, as well as with a GPS receiver and a rawinsonde for tracking. The total weight of the sampler was about 11 kg. To receive, display and store the position and status data of the sampling system at the ground station, a simple data acquisition system with a portable receiver and a microcomputer was also developed. A new gas handling system was prepared to simplify the injection of He gas into the balloon. For air sampling experiments, three sampling systems were launched at Syowa Station (69°00′S, 39°35′E), Antarctica, and then recovered on sea ice near the station on January 22 and 25, 1996.

  2. Development of an improved genetic algorithm and its application in the optimal design of ship nuclear power system

    International Nuclear Information System (INIS)

    Jia Baoshan; Yu Jiyang; You Songbo

    2005-01-01

This article focuses on the development of an improved genetic algorithm and its application to the optimal design of a ship nuclear reactor system, the goal being to find a combination of system parameter values that minimizes the mass or volume of the system given the power capacity requirement and safety criteria. An improved genetic algorithm (IGA) was developed using an 'average fitness value' grouping + 'specified survival probability' rank selection method and a 'separate-recombine' duplication operator. Combined with a simulated annealing algorithm (SAA) that continues the local search after the IGA reaches a satisfactory point, the algorithm gave satisfactory optimization results from both search-efficiency and accuracy perspectives. This IGA-SAA algorithm successfully solved the design optimization problem of the ship nuclear power system. It is an advanced and efficient methodology that can be applied to similar optimization problems in other areas. (authors)
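
The "GA for global search, then SA for local refinement" structure can be sketched on a toy objective. The rank-based survival rule, mutation scale and sphere test function below are illustrative stand-ins, not the paper's reactor model or its exact operators.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Toy objective standing in for the reactor mass/volume model.
    return float(np.sum(x ** 2))

def rank_select(pop, fits, survive=0.5):
    # Rank-based survival: keep the best fraction of the population
    # (a stand-in for the 'specified survival probability' rank selection).
    order = np.argsort(fits)
    keep = max(2, int(len(pop) * survive))
    return pop[order[:keep]]

def ga(n_gen=60, pop_size=30, dim=4):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    for _ in range(n_gen):
        fits = np.array([sphere(x) for x in pop])
        parents = rank_select(pop, fits)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            children.append(0.5 * (a + b) + 0.1 * rng.standard_normal(dim))
        pop = np.vstack([parents, children])
    fits = np.array([sphere(x) for x in pop])
    return pop[np.argmin(fits)]

def simulated_annealing(x, n_iter=500, t0=1.0):
    # Continues the local search from the GA's best point.
    cur, cur_f = x.copy(), sphere(x)
    best, best_f = cur.copy(), cur_f
    for k in range(n_iter):
        t = t0 * (1 - k / n_iter) + 1e-9
        cand = cur + 0.1 * rng.standard_normal(x.shape)
        f = sphere(cand)
        if f < cur_f or rng.random() < np.exp((cur_f - f) / t):
            cur, cur_f = cand, f      # occasionally accept uphill moves
        if cur_f < best_f:
            best, best_f = cur.copy(), cur_f
    return best, best_f

x_ga = ga()
x_best, f_best = simulated_annealing(x_ga)
print(f_best <= sphere(x_ga))  # SA never loses the GA's solution
```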

  3. An improved energy conserving implicit time integration algorithm for nonlinear dynamic structural analysis

    International Nuclear Information System (INIS)

    Haug, E.; Rouvray, A.L. de; Nguyen, Q.S.

    1977-01-01

This study proposes a general nonlinear algorithm stability criterion; it introduces a nonlinear algorithm easily implemented in existing incremental/iterative codes, and it applies the new scheme beneficially to problems of linear elastic dynamic snap buckling. Based on the concept of energy conservation, the paper outlines an algorithm which degenerates into the trapezoidal rule when applied to linear systems. The new algorithm conserves energy in systems having elastic potentials up to fourth order in the displacements. This holds in the important case of nonlinear total Lagrangian formulations where linear elastic material properties are substituted. The scheme is easily implemented in existing incremental-iterative codes with provisions for stiffness reformation and containing the basic Newmark scheme. Numerical analyses of dynamic stability can be dramatically sensitive to amplitude errors, because damping algorithms may mask, and overestimating schemes may numerically trigger, the physical instability. The newly proposed scheme has been applied with larger time steps and at lower cost to the dynamic snap buckling of simple one- and multi-degree-of-freedom structures for various initial conditions.
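
The linear-system limit of the scheme (the trapezoidal rule, equivalently Newmark with γ = 1/2, β = 1/4) is easy to verify numerically: for an undamped linear oscillator it conserves the discrete energy to round-off. The parameter values below are illustrative.

```python
import numpy as np

# Linear SDOF oscillator  m*x'' + k*x = 0, integrated with the trapezoidal
# (Newmark average-acceleration) rule, into which the article's scheme
# degenerates for linear systems.
m, k, h = 1.0, 4.0, 0.05
x, v = 1.0, 0.0
a = -(k / m) * x

def energy(x, v):
    return 0.5 * m * v**2 + 0.5 * k * x**2

e0 = energy(x, v)
for _ in range(2000):
    # Implicit update: solve for x_{n+1} using a_{n+1} = -(k/m) x_{n+1}.
    x_new = (x + h * v + (h**2 / 4) * a) / (1 + (h**2 * k) / (4 * m))
    a_new = -(k / m) * x_new
    v = v + (h / 2) * (a + a_new)
    x, a = x_new, a_new

drift = abs(energy(x, v) - e0) / e0
print(drift < 1e-10)  # energy conserved to round-off in the linear case
```

For nonlinear potentials this exact conservation is lost by plain Newmark, which is the gap the article's energy-conserving modification addresses.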

  4. Evaluation of newly developed veterinary portable blood glucose meter with hematocrit correction in dogs and cats.

    Science.gov (United States)

    Mori, Akihiro; Oda, Hitomi; Onozawa, Eri; Shono, Saori; Sako, Toshinori

    2017-10-07

This study evaluated the accuracy of a newly developed veterinary portable blood glucose meter (PBGM) with hematocrit correction in dogs and cats. Sixty-one dogs and 31 cats were used for the study. Blood samples were obtained from each dog and cat one to six times. Acceptable results were obtained in error grid analysis between PBGM and reference method values (glucose oxidation method) in both dogs and cats. Bland-Altman plot analysis revealed a mean difference (bias) between the PBGM and reference method values of -1.975 mg/dl in dogs and 1.339 mg/dl in cats. Hematocrit values did not affect the results of the veterinary PBGM. This veterinary PBGM is therefore clinically useful in dogs and cats.
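
The Bland-Altman bias quoted above is simply the mean of the paired differences, with 95% limits of agreement at ±1.96 standard deviations. A sketch on synthetic paired readings (the offset and noise level are made up, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic paired glucose readings (mg/dl): a reference method and a
# hypothetical meter with a small systematic offset.
reference = rng.uniform(60, 300, 50)
meter = reference - 2.0 + rng.normal(0.0, 5.0, 50)

diff = meter - reference
bias = float(diff.mean())                    # Bland-Altman bias
sd = float(diff.std(ddof=1))
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
print(loa[0] < bias < loa[1])                # bias lies inside the limits
```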

  5. Newly-graduated midwives transcending barriers: a grounded theory study.

    Science.gov (United States)

    Barry, Michele J; Hauck, Yvonne L; O'Donoghue, Thomas; Clarke, Simon

    2013-12-01

Midwifery has developed its own philosophy to formalise its unique identity as a profession. Newly-graduated midwives are taught, and ideally embrace, this philosophy during their education. However, embarking on a career within a predominantly institutionalised and medically focused health-care model may challenge its application. The research question guiding this study was: 'How do newly graduated midwives deal with applying the philosophy of midwifery in their first six months of practice?' The aim was to generate a grounded theory around this social process. This Western Australian grounded theory study is conceptualised within the social theory of symbolic interactionism. Data were collected by means of in-depth, semi-structured interviews with 11 recent midwifery graduates. Participant and interviewer journals provided supplementary data. The constant-comparison approach was used for data analysis. The substantive theory of transcending barriers was generated. Three stages in transcending barriers were identified: addressing personal attributes, understanding the 'bigger picture', and finally, evaluating, planning and acting to provide woman-centred care. An overview of these three stages provides the focus of this article. The theory of transcending barriers offers a new perspective on how newly-graduated midwives deal with applying the philosophy of midwifery in their first six months of practice. A number of implications for pre- and post-registration midwifery education and policy development are suggested, as well as recommendations for future research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. The development of a novel knowledge-based weaning algorithm using pulmonary parameters: a simulation study.

    Science.gov (United States)

    Guler, Hasan; Kilic, Ugur

    2018-03-01

Weaning is important for patients and for clinicians, who have to determine the correct weaning time so that patients do not become dependent on the ventilator. Some predictors have already been developed, such as the rapid shallow breathing index (RSBI), the pressure time index (PTI), and the Jabour weaning index. Many important dimensions of weaning are sometimes ignored by these predictors. This study is an attempt to develop a knowledge-based weaning process via fuzzy logic that eliminates the disadvantages of the existing predictors. Sixteen vital parameters listed in the published literature were used to determine the weaning decisions in the developed system. Since this was considered too many individual parameters, related parameters were grouped together to characterize acid-base balance, adequate oxygenation, adequate pulmonary function, hemodynamic stability, and the psychological status of the patients. To test the performance of the developed algorithm, 20 clinical scenarios were generated using Monte Carlo simulation and the Gaussian distribution method. The developed knowledge-based algorithm and the RSBI predictor were applied to the generated scenarios. Finally, a clinician evaluated each clinical scenario independently. Student's t-test was used to assess the statistical differences between the developed weaning algorithm, the RSBI, and the clinician's evaluation. According to the results obtained, there were no statistical differences between the proposed method and the clinician's evaluations.
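
A fuzzy weaning decision of this kind can be sketched with triangular membership functions and a min (AND) rule. The two inputs, membership breakpoints and single rule below are illustrative only, not the sixteen-parameter rule base of the study:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def weaning_readiness(spo2, rsbi):
    """Toy two-input rule: IF oxygenation adequate AND breathing is not
    rapid-shallow THEN ready. Thresholds are illustrative assumptions."""
    adequate_oxy = tri(spo2, 90, 97, 101)   # % oxygen saturation
    low_rsbi = tri(rsbi, -1, 40, 105)       # RSBI < 105 favours weaning
    return min(adequate_oxy, low_rsbi)      # AND modelled as min

# Well-oxygenated, low RSBI scores higher than borderline values.
print(weaning_readiness(97, 40) > weaning_readiness(92, 100))
```

The full system aggregates such grouped memberships (oxygenation, acid-base balance, hemodynamics, etc.) before producing the weaning decision.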

  7. A newly developed lubricant, chitosan laurate, in the manufacture of acetaminophen tablets.

    Science.gov (United States)

    Bani-Jaber, Ahmad; Kobayashi, Asuka; Yamada, Kyohei; Haj-Ali, Dana; Uchimoto, Takeaki; Iwao, Yasunori; Noguchi, Shuji; Itai, Shigeru

    2015-04-10

To study the usefulness of chitosan laurate (CS-LA), a newly developed chitosan salt, as a lubricant, lubrication properties such as the pressure transmission ratio and the ejection force were determined at different concentrations of CS-LA in tableting. In addition, tablet properties such as tensile strength, disintegration time, and dissolution behavior were also determined. When CS-LA was mixed in at concentrations of 0.1%-3.0%, the pressure transmission ratio increased in a concentration-dependent manner, and the value at a CS-LA concentration of 3% was equal to that of magnesium stearate (Mg-St), a widely used lubricant. Additionally, a reduction in the ejection force was observed at concentrations of 1% and above, proving that CS-LA has good lubrication performance. A prolonged disintegration time and decreased tensile strength, which are known disadvantages of Mg-St, were not observed with CS-LA. Furthermore, with CS-LA, no retardation of drug dissolution from the tablets was observed. Conjugation of CS with LA was found to be quite important for both lubricant and tablet properties. In conclusion, CS-LA should be useful as an alternative lubricant to Mg-St. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. A Novel Algorithm (G-JPSO) and Its Development for the Optimal Control of Pumps in Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Rasoul Rajabpour

    2017-01-01

Full Text Available Recent decades have witnessed growing applications of metaheuristic techniques as efficient tools for solving complex engineering problems. One such method is the JPSO algorithm. In this study, innovative modifications were made to the jump-based JPSO algorithm to make it capable of coping with graph-based solutions, which led to the development of a new algorithm called 'G-JPSO'. The new algorithm was then used to solve the Fletcher-Powell optimal control problem, and its application to the optimal control of pumps in water distribution networks was evaluated. Optimal pump control consists of determining an optimal operation timetable (on and off times) for each pump over the desired time interval. The maximum number of on and off positions for each pump was introduced into the objective function as a constraint such that not only would power consumption at each node be reduced but requirements such as the minimum pressure required at each node and the minimum/maximum storage tank heights would also be met. To determine the optimal operation of the pumps, a model-based optimization-simulation algorithm was developed based on the G-JPSO and JPSO algorithms. The model proposed by van Zyl was used to determine the optimal operation of the distribution network. Finally, the results obtained from the proposed algorithm were compared with those obtained from the ant colony, genetic, and JPSO algorithms to show the robustness of the proposed algorithm in finding near-optimum solutions at reasonable computational cost.
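
A discrete PSO over on/off pump schedules can be sketched with a toy tariff-plus-penalty objective in place of the van Zyl network simulation. Everything below (tariff shape, demand constraint, PSO coefficients) is an illustrative assumption, and the jump moves over graph-based solutions that distinguish G-JPSO are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy pump-scheduling problem: 24 hourly on/off decisions for one pump.
tariff = np.where(np.arange(24) < 8, 1.0, 3.0)   # cheap night energy
demand_hours = 10                                # pump must run >= 10 h

def cost(schedule):
    energy = float(schedule @ tariff)
    shortfall = max(0, demand_hours - int(schedule.sum()))
    return energy + 50.0 * shortfall             # penalty for unmet demand

def binary_pso(n_particles=20, n_iter=100, n_bits=24):
    pos = rng.integers(0, 2, (n_particles, n_bits))
    vel = np.zeros((n_particles, n_bits))
    pbest = pos.copy()
    pbest_f = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_bits))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))        # sigmoid transfer function
        pos = (rng.random((n_particles, n_bits)) < prob).astype(int)
        f = np.array([cost(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, cost(gbest)

schedule, c = binary_pso()
print(int(schedule.sum()) >= demand_hours)  # demand constraint satisfied
```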

  9. Immunoparesis in newly diagnosed Multiple Myeloma patients

    DEFF Research Database (Denmark)

    Sorrig, Rasmus; Klausen, Tobias W.; Salomo, Morten

    2017-01-01

Immunoparesis (hypogammaglobulinemia) is associated with an unfavorable prognosis in newly diagnosed multiple myeloma (MM) patients. However, this finding has not been validated in an unselected population-based cohort. We analyzed 2558 newly diagnosed MM patients in the Danish Multiple Myeloma...

  10. Development of a general learning algorithm with applications in nuclear reactor systems

    Energy Technology Data Exchange (ETDEWEB)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs.

  11. Development of a general learning algorithm with applications in nuclear reactor systems

    International Nuclear Information System (INIS)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs

  12. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

The paper analyzes algorithms for selecting keypoints in images for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector machine method. The combination of these methods allows successful detection of both dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
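
The HOG-plus-SVM pipeline rests on the fact that gradient-orientation histograms separate differently oriented structures. The minimal descriptor below (one histogram over a whole patch, with no cells, blocks or SVM training) is a deliberately stripped-down sketch of that idea:

```python
import numpy as np

def orientation_histogram(img, n_bins=8):
    """Minimal HOG-style descriptor: a single gradient-orientation
    histogram over the whole patch, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi               # unsigned orientations
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-9)

# Synthetic patches: a vertical edge vs the same edge rotated 90 degrees.
vert = np.zeros((16, 16))
vert[:, 8:] = 1.0
horiz = vert.T.copy()

h_v = orientation_histogram(vert)
h_h = orientation_histogram(horiz)
print(int(np.argmax(h_v)) != int(np.argmax(h_h)))  # dominant bins differ
```

A real detector computes such histograms per cell, normalizes them over blocks, and feeds the concatenated vector to a linear SVM.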

  13. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single or sequences of sonar images. The tool is based on MATLAB providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested...... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed AUV path, etc. In the paper the main components of the SSIE is described and examples of different...... processing steps are given...

  14. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with ITBs (internal thermal barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  15. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  16. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configuration with ITBs (internal thermal barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The real time hardware and software adopted architectures are also described with particular attention to their relevance to ITER. (authors)

  17. Ripple-Spreading Network Model Optimization by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiao-Bing Hu

    2013-01-01

Full Text Available Small-world and scale-free properties are widely acknowledged in many real-world complex network systems, and many network models have been developed to capture them. The ripple-spreading network model (RSNM) is a newly reported complex network model inspired by the natural ripple-spreading phenomenon on a calm water surface. The RSNM shows good potential for describing both spatial and temporal features in the development of many real-world networks, where the influence of a few local events spreads out through nodes and then largely determines the final network topology. However, the relationships between the ripple-spreading related parameters (RSRPs) of the RSNM and small-world and scale-free topologies are not as obvious or straightforward as in many other network models. This paper attempts to apply a genetic algorithm (GA) to tune the values of the RSRPs so that the RSNM may generate these two most important network topologies. The study demonstrates that, once the RSRPs are properly tuned by the GA, the RSNM is capable of generating both network topologies and therefore has great flexibility for studying many real-world complex network systems.
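
The "GA tunes generator parameters toward a target topology" idea can be shown in miniature by evolving a single edge-probability parameter of a random-graph generator toward a target mean degree. The real RSNM has several interacting RSRPs and richer targets (small-world, scale-free), all of which are simplified away here:

```python
import numpy as np

rng = np.random.default_rng(4)

# Evolve edge probability p so the generated network hits a target mean
# degree (a toy stand-in for tuning RSRPs toward a target topology).
n_nodes, target_degree = 50, 6.0

def mean_degree(p):
    upper = np.triu(rng.random((n_nodes, n_nodes)) < p, 1)
    return 2.0 * upper.sum() / n_nodes

def fitness(p):
    # Average a few realizations to tame sampling noise in the generator.
    return float(np.mean([abs(mean_degree(p) - target_degree)
                          for _ in range(3)]))

pop = rng.uniform(0.0, 1.0, 20)
for _ in range(40):
    fits = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(fits)[:10]]          # elitist rank selection
    children = np.clip(parents + 0.05 * rng.standard_normal(10), 0.0, 1.0)
    pop = np.concatenate([parents, children])

best = pop[np.argmin([fitness(p) for p in pop])]
```

The GA sees the generator purely as a black box, which is exactly why it suits the non-obvious RSRP-to-topology mapping described above.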

  18. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    Science.gov (United States)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  19. Xeon Phi - A comparison between the newly introduced MIC architecture and a standard CPU through three types of problems.

    OpenAIRE

    Kristiansen, Joakim

    2016-01-01

As Moore's law continues, processors keep getting more cores packed onto the chip. This thesis is an empirical study of the recently introduced Intel Many Integrated Core (MIC) architecture found in the Intel Xeon Phi. With roughly 60 cores connected by a high-performance on-die interconnect, the Intel Xeon Phi makes an interesting candidate for high-performance computing. By digging into parallel algorithms solving three well-known problems, our goal is to optimize, test and comp...

  20. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projections. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object from only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors, or images, related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. Because the code correctly followed the proposed method, good results were obtained, encouraging us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)
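
The core of the method, the multiplicative EM (MLEM) update, can be sketched in a few lines. The system matrix, image size and single-source phantom below are illustrative stand-ins for the MICROSHIELD-simulated projections:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy emission-tomography setup: 4 projection views (7 detector bins each)
# of a 5x5 activity image; A is a hypothetical system matrix of detection
# probabilities, not values from the paper.
n_pix = 25
A = rng.random((4 * 7, n_pix))
x_true = np.zeros(n_pix)
x_true[12] = 10.0                      # single source at the central pixel
y = A @ x_true                         # noiseless projection counts

def mlem(A, y, n_iter=200):
    """Multiplicative EM update: x <- x / s * A^T (y / (A x)), s = A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x = x / sens * (A.T @ ratio)
    return x

x_hat = mlem(A, y)
print(int(np.argmax(x_hat)))
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is what makes EM attractive for the ill-posed few-projection problem described above.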

  1. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projections. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object from only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors, or images, related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. Because the code correctly followed the proposed method, good results were obtained, encouraging us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)

  2. Development of estimation algorithm of loose parts and analysis of impact test data

    International Nuclear Information System (INIS)

    Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho

    1999-11-01

Loose parts are produced either by becoming detached from structures of the reactor coolant system (RCS) or by entering the RCS from outside during test operation, refueling, and overhaul. These loose parts mix with the reactor coolant and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report, an algorithm for estimating the impact point and mass of a loose part is developed. The developed algorithm was tested with the impact test data of Yonggwang-3. The impact point estimated using the proposed algorithm had a 5 percent error relative to the real test data. The estimated mass was analyzed to be within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor because this frequency affects the estimation of the impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data. The comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. Through this comparison, the integrity of the sensors and the monitoring system could also be checked. (author)
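
One standard building block for impact-point estimation of this kind is the arrival-time difference between two sensors, obtainable from the cross-correlation of their signals. The burst shape, sampling rate and delay below are synthetic; the report's actual algorithm is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two accelerometer channels: the same impact burst arrives at sensor B
# 25 samples after sensor A (all signal parameters are made up).
fs = 10_000                                    # sampling rate, Hz
t = np.arange(2000) / fs
burst = np.exp(-((t - 0.02) / 0.002) ** 2) * np.sin(2 * np.pi * 800 * t)

true_delay = 25
sig_a = burst + 0.02 * rng.standard_normal(t.size)
sig_b = np.roll(burst, true_delay) + 0.02 * rng.standard_normal(t.size)

# Cross-correlate and convert the peak index to a lag in samples.
corr = np.correlate(sig_b, sig_a, mode="full")
lag = int(np.argmax(corr)) - (t.size - 1)
print(lag)  # recovers the 25-sample delay
```

With delays from several sensor pairs and the known wave speed in the structure, the impact point can then be triangulated.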

  3. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

Full Text Available Optimal use of transport infrastructure is an important aspect of the development of the national economy of the Russian Federation. Development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of such indicators and the method of their calculation for the transport subsystem of airport infrastructure. The work also reflects aspects of evaluating algorithmic computational mechanisms as tools for improving the public administration of transport subsystems.

  4. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development

    OpenAIRE

    Ziatdinov, Rushan; Musa, Sajid

    2013-01-01

In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are in fact simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system also lays a foundation for the study of computer science in secondary school.
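
A concrete example of such a readily memorized operation is the classic rule for multiplying a two-digit number by 11: write the two digits, insert their sum between them, and carry if the sum exceeds 9. The code mirrors the mental steps rather than simply computing the product:

```python
def times_11(n):
    """Mental trick for 10 <= n <= 99: for n with digits a, b,
    n * 11 has digits a, a+b, b (with a carry when a+b > 9)."""
    a, b = divmod(n, 10)          # split into tens and units digits
    carry, middle = divmod(a + b, 10)
    return (a + carry) * 100 + middle * 10 + b

print(times_11(35))  # 3, 3+5, 5  -> 385
print(times_11(78))  # 7, 15, 8 with carry -> 858
```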

  5. Multiobjective optimization of building design using genetic algorithm and artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Magnier, L.; Zhou, L.; Haghighat, F. [Concordia Univ., Centre for Building Studies, Montreal, PQ (Canada). Dept. of Building, Civil and Environmental Engineering

    2008-07-01

    This paper addressed the challenge of designing modern buildings that are energy efficient, affordable, environmentally sound and comfortable for occupants. Building optimization is a time consuming process when so many objectives must be met. In particular, the use of genetic algorithms (GAs) for building design has limitations due to the high number of simulations required. This paper presented an efficient approach to overcome the limitations of GA for building design. The approach expanded the GA methodology to multiobjective optimization. The GA integrating neural network (GAINN) approach first uses a simulation-based artificial neural network (ANN) to characterize building behaviour, and then combines it with a GA for optimization. The process was shown to provide fast and reliable optimization. GAINN was further improved by integrating multiobjective evolutionary algorithms (MOEAs). Two new MOEAs named NSGAINN and PLAGUE were designed for the proposed methodology. The purpose of creating a new MOEA was to take advantage of GAINN's fast evaluations. This paper presented bench test results and compared them with NSGA-II. A previous case study using the GAINN methodology was re-optimized with the newly developed MOEA. The design to be optimized was a ventilation system of a standard office room in the summer, with 2 occupants and 4 underfloor air distribution diffusers. The objectives included thermal comfort, indoor air quality, and energy conservation for cooling. The control variables were the temperature of the air supply, the speed of the air supply, the distance from the diffuser to the occupant, and the distance from the return grill to the contaminant source. The results showed that the newly presented GAINN methodology was better in both convergence and range of choices compared to a weighted-sum GA. 13 refs., 2 tabs., 9 figs.

  6. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. However, conventional machine learning techniques are considered incapable of handling the complex situations that arise in nuclear power plants (NPPs). Because of such issues, automation has not been actively adopted, even though human error probability increases drastically during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as 'deep learning' have been actively applied in many fields, and deep-learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and used to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. In 2016, 'Alpha-Go', developed by 'Google Deepmind' on the basis of deep learning to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As the first part, a quantitative, real-time NPP safety evaluation method is being developed to provide the training criteria for the automation algorithm. For this, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. In practice, full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the automation algorithm is developed, so adoption in the form of full automation will take a long time.

  7. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2016-01-01

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. However, conventional machine learning techniques are considered incapable of handling the complex situations that arise in nuclear power plants (NPPs). Because of such issues, automation has not been actively adopted, even though human error probability increases drastically during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as 'deep learning' have been actively applied in many fields, and deep-learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and used to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. In 2016, 'Alpha-Go', developed by 'Google Deepmind' on the basis of deep learning to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As the first part, a quantitative, real-time NPP safety evaluation method is being developed to provide the training criteria for the automation algorithm. For this, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. In practice, full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the automation algorithm is developed, so adoption in the form of full automation will take a long time.

  8. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  9. A hybrid genetic algorithm for the distributed permutation flowshop scheduling problem

    Directory of Open Access Journals (Sweden)

    Jian Gao

    2011-08-01

    Full Text Available The Distributed Permutation Flowshop Scheduling Problem (DPFSP) is a newly proposed scheduling problem that generalizes the classical permutation flowshop scheduling problem. The DPFSP is NP-hard in general, and studies on algorithms for solving it are still in their early stages. In this paper, we propose a GA-based algorithm, denoted GA_LS, for solving this problem with the objective of minimizing the maximum completion time. In the proposed GA_LS, crossover and mutation operators are designed to suit the representation of DPFSP solutions, in which a set of partial job sequences is employed. Furthermore, GA_LS utilizes an efficient local search method to explore neighboring solutions; the method uses three proposed rules that move jobs within a factory or between two factories. Intensive experiments on the benchmark instances, extended from the Taillard instances, are carried out. The results indicate that the proposed hybrid genetic algorithm obtains better solutions than all the existing algorithms for the DPFSP: it achieves a better relative percentage deviation, and the differences in the results are statistically significant. Best-known solutions for most instances are also updated by our algorithm. Moreover, we show the efficiency of GA_LS by comparing it with similar genetic algorithms using the existing local search methods.
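As a rough illustration of the building blocks named in this abstract (not the authors' GA_LS), the sketch below shows the permutation-flowshop makespan kernel together with a simple job re-insertion local search, the single-factory analogue of the moves the three rules generalize. The instance data are made up:

```python
import random

# A minimal sketch, not the authors' GA_LS: the permutation-flowshop makespan
# (C_max) kernel plus a job re-insertion local search. Instance data are made up.

def makespan(seq, proc):
    """proc[job][machine] = processing time; returns C_max for one factory."""
    m = len(proc[0])
    comp = [0] * m
    for j in seq:
        comp[0] += proc[j][0]
        for k in range(1, m):
            comp[k] = max(comp[k], comp[k - 1]) + proc[j][k]
    return comp[-1]

def local_search(seq, proc):
    """Repeatedly re-insert each job at its best position until no gain."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for j in list(seq):
            rest = [x for x in seq if x != j]
            cand = min((rest[:i] + [j] + rest[i:] for i in range(len(rest) + 1)),
                       key=lambda s: makespan(s, proc))
            if makespan(cand, proc) < makespan(seq, proc):
                seq, improved = cand, True
    return seq

random.seed(1)
proc = [[random.randint(1, 9) for _ in range(3)] for _ in range(6)]  # 6 jobs, 3 machines
start = list(range(6))
best = local_search(start, proc)
print(makespan(best, proc) <= makespan(start, proc))  # True
```

In the distributed problem, a solution additionally assigns each job to a factory, and the between-factory moves the paper describes re-balance those assignments; the makespan kernel above is evaluated per factory.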

  10. A Newly Developed Method for Computing Reliability Measures in a Water Supply Network

    Directory of Open Access Journals (Sweden)

    Jacek Malinowski

    2016-01-01

    Full Text Available A reliability model of a water supply network has been examined. Its main features are: (1) a topology that can be decomposed by so-called state factorization into a relatively small number of derivative networks, each having a series-parallel structure; (2) binary-state components (either operative or failed) with given flow capacities; (3) a multi-state character of the whole network and its sub-networks, where a network state is defined as the maximal flow between a source (or sources) and a sink (or sinks); (4) integer values for all capacities (component, network, and sub-network). As the network operates, its state changes due to component failures, repairs, and replacements. A newly developed method of computing the inter-state transition intensities is presented. It is based on so-called state factorization and series-parallel aggregation. The analysis of these intensities shows that the failure-repair process of the considered system is an asymptotically homogeneous Markov process. It is also demonstrated how certain reliability parameters useful for network maintenance planning can be determined on the basis of the asymptotic intensities. For better understanding of the presented method, an illustrative example is given. (original abstract)

  11. Generational differences among newly licensed registered nurses.

    Science.gov (United States)

    Keepnews, David M; Brewer, Carol S; Kovner, Christine T; Shin, Juh Hyun

    2010-01-01

    Responses of 2369 newly licensed registered nurses from 3 generational cohorts-Baby Boomers, Generation X, and Generation Y-were studied to identify differences in their characteristics, work-related experiences, and attitudes. These responses revealed significant differences among generations in: job satisfaction, organizational commitment, work motivation, work-to-family conflict, family-to-work conflict, distributive justice, promotional opportunities, supervisory support, mentor support, procedural justice, and perceptions of local job opportunities. Health organizations and their leaders need to anticipate intergenerational differences among newly licensed nurses and should provide for supportive working environments that recognize those differences. Orientation and residency programs for newly licensed nurses should be tailored to the varying needs of different generations. Future research should focus on evaluating the effectiveness of orientation and residency programs with regard to different generations so that these programs can be tailored to meet the varying needs of newly licensed nurses at the start of their careers. Copyright 2010 Mosby, Inc. All rights reserved.

  12. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    Science.gov (United States)

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? SUMMARY ANSWER The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. WHAT IS KNOWN ALREADY Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. STUDY DESIGN, SIZE, DURATION Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the

  13. The development of gamma energy identify algorithm for compact radiation sensors using stepwise refinement technique

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Jun [Div. of Radiation Regulation, Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Ye Won; Kim, Hyun Duk; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Yi, Yun [Dept. of of Electronics and Information Engineering, Korea University, Seoul (Korea, Republic of)

    2017-06-15

    A gamma energy identifying algorithm using spectral decomposition combined with a smoothing method is suggested to confirm the existence of artificial radioisotopes. The algorithm combines the original pattern-recognition method with a smoothing method to enhance gamma energy identification for radiation sensors that have low energy resolution. The algorithm is a three-step refinement process. First, the magnitude set is calculated by the original spectral decomposition. Second, the modeling error in the magnitude set is reduced by the smoothing method. Third, the expected gamma energy is decided based on the enhanced magnitude set resulting from the spectral decomposition with smoothing. The algorithm was optimized for the designed radiation sensor, composed of a CsI(Tl) scintillator and a silicon PIN diode. The two performance parameters used to evaluate the algorithm are the accuracy of the expected gamma energy and the number of repeated calculations. The original gamma energy was accurately identified for single-energy gamma radiation by adopting this modeling-error reduction method. The average error also decreased by half for multi-energy gamma radiation in comparison with the original spectral decomposition. In addition, the number of repeated calculations decreased by half even in low-fluence conditions under 10^4 (per 0.09 cm^2 of the scintillator surface). Through the development of this algorithm, we have confirmed the possibility of developing a product that can identify nearby artificial radionuclides using inexpensive radiation sensors that are easy for the public to use. It can therefore help reduce public anxiety about exposure by determining the presence of artificial radionuclides in the vicinity.
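The abstract does not give the smoothing method itself; as an illustration of the general idea, the sketch below applies a simple moving-average smoother to a synthetic noisy pulse-height spectrum before locating the peak channel. All numbers are made up:

```python
import math, random

# Illustration only (the paper's smoothing method is not specified): a
# moving-average smoother applied to a synthetic 100-channel spectrum with a
# Gaussian-shaped photopeak at channel 60 plus uniform noise.

def smooth(spectrum, window=5):
    """Centered moving average with shrinking windows at the edges."""
    half = window // 2
    out = []
    for i in range(len(spectrum)):
        lo, hi = max(0, i - half), min(len(spectrum), i + half + 1)
        out.append(sum(spectrum[lo:hi]) / (hi - lo))
    return out

random.seed(0)
spec = [20 * math.exp(-((ch - 60) / 4.0) ** 2) + random.uniform(0, 3)
        for ch in range(100)]
smoothed = smooth(spec, window=7)
peak = max(range(100), key=lambda ch: smoothed[ch])
print(abs(peak - 60) <= 3)  # True: the smoothed peak sits at or near channel 60
```

For a low-resolution CsI(Tl)/PIN-diode detector the point of such smoothing is the same as in the paper: suppress channel-to-channel noise so that the subsequent energy-identification step works on a cleaner magnitude set.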

  14. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  15. Mentorship for newly appointed physicians: a strategy for enhancing patient safety?

    Science.gov (United States)

    Harrison, Reema; McClean, Serwaa; Lawton, Rebecca; Wright, John; Kay, Clive

    2014-09-01

    Mentorship is an increasingly popular innovation from business and industry that is being applied in health-care contexts. This paper explores the concept of mentorship for newly appointed physicians in their first substantive senior post, and specifically its utilization to enhance patient safety. Semi-structured face-to-face and telephone interviews were conducted with Medical Directors (n = 5), Deputy Medical Directors (n = 4), and Clinical Directors (n = 6) from 9 acute NHS Trusts in the Yorkshire and Humber region in the north of England. A focused thematic analysis was used. A number of beneficial outcomes were associated with mentorship for newly appointed physicians, including greater personal and professional support, organizational commitment, and general well-being. Providing newly appointed senior physicians with support through mentorship was considered to enhance the safety of patient care. Mentorship may prevent or reduce active failures, be used to identify threats in the local working environment, and in the longer term, address latent threats to safety within the organization by encouraging a healthier safety culture. Offering mentorship to all newly appointed physicians in their first substantive post in health care may be a useful strategy to support the development of their clinical, professional, and personal skills in this transitional period that may also enhance the safety of patient care.

  16. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and of air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D

  17. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and of air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D

  18. [An Introduction to A Newly-developed "Acupuncture Needle Manipulation Training-evaluation System" Based on Optical Motion Capture Technique].

    Science.gov (United States)

    Zhang, Ao; Yan, Xing-Ke; Liu, An-Guo

    2016-12-25

    In the present paper, the authors introduce a newly-developed "Acupuncture Needle Manipulation Training-evaluation System" based on an optical motion capture technique. It is composed of two parts, sensor and software, and overcomes some shortcomings of mechanical motion capture techniques. The device is able to analyze data on the operations of the pressing hand and the needle-inserting hand during acupuncture performance, and its software is available in personal computer (PC), Android, and Apple iOS versions. It can record and analyze information on any operator's needling manipulations, and is quite helpful for teachers in teaching, training and examining students in clinical practice.

  19. The effect of strain distribution on microstructural developments during forging in a newly developed nickel base superalloy

    Energy Technology Data Exchange (ETDEWEB)

    Buckingham, R.C. [Institute of Structural Materials, Swansea University, Bay Campus, Fabian Way, Swansea SA1 8EN (United Kingdom); Argyrakis, C.; Hardy, M.C. [Rolls-Royce plc, PO Box 31, Derby DE24 8BJ (United Kingdom); Birosca, S., E-mail: 522042@swansea.ac.uk [Institute of Structural Materials, Swansea University, Bay Campus, Fabian Way, Swansea SA1 8EN (United Kingdom)

    2016-01-27

    In the current study, the effect of strain distribution in a simple forging geometry on the propensity for recrystallization, and its impact on mechanical properties has been investigated in a newly developed experimental nickel-based superalloy. The new alloy was produced via a Powder Metallurgy (PM) route and was subsequently Hot Isostatic Processed (HIP), isothermally forged, and heat treated to produce a coarse grain microstructure with average grain size of 23–32 μm. The alloy was examined by means of Electron Back-Scatter Diffraction (EBSD) to characterise the microstructural features such as grain orientation and morphology, grain boundary characteristics and the identification of potential Prior Particle Boundaries (PPBs) throughout each stage of the processing route. Results at the central region of the cross-section plane parallel to the loading direction showed significant microstructural differences across the forging depth. This microstructural variation was found to be highly dependent on the value of local strain imparted during forging such that areas of low effective strain showed partial recrystallisation and a necklace grain structure was observed following heat treatment. Meanwhile, a fully recrystallised microstructure with no PPBs was observed in the areas of high strain values, in the central region of the forging.

  20. Development of Human-level Decision Making Algorithm for NPPs through Deep Neural Networks : Conceptual Approach

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2017-01-01

    Development of operation support systems and automation systems is closely related to the machine learning field. However, since it is hard to achieve human-level delicacy and flexibility for complex tasks with conventional machine learning technologies, only operation support systems with simple purposes have been developed, and high-level automation-related studies have not been actively conducted. As one of the efforts for reducing human error in NPPs and as a technical advance toward automation, the ultimate goal of this research is to develop a human-level decision-making algorithm for NPPs during emergency situations. The concepts of SL, RL, policy networks, value networks, and MCTS, which have been applied to decision-making algorithms in other fields, are introduced and combined with nuclear-field specifications. Since the research is currently at the conceptual stage, more research is warranted.

  1. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine function. In the algorithm, random individuals, one per search agent, are created with uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by bringing the current position closer to the target value. The solution space is narrowed by the golden section, so that only the regions expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA obtains better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and provides faster convergence, which increases the importance of this new method.
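Based only on the description in this abstract (sine-driven updates toward the best solution, with coefficients taken from golden-section points), a hedged sketch of a Gold-SA-style optimizer might look as follows. The update equation here is an assumption for illustration, not a verified reproduction of the published pseudocode:

```python
import math, random

# Hedged sketch of a Gold-SA-style optimizer: sine-driven position updates
# pulled toward the best solution, with coefficients x1, x2 taken from
# golden-section points of [-pi, pi]. The update rule is an assumption.

def gold_sa(obj, dim, bounds, n_agents=20, iters=200, seed=42):
    random.seed(seed)
    lo, hi = bounds
    tau = (math.sqrt(5) - 1) / 2                  # golden ratio conjugate
    a, b = -math.pi, math.pi
    x1 = a * (1 - tau) + b * tau                  # golden-section points
    x2 = a * tau + b * (1 - tau)
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(pop, key=obj)[:]
    f_init = obj(best)
    for _ in range(iters):
        for p in pop:
            r1 = random.uniform(0, 2 * math.pi)
            r2 = random.uniform(0, math.pi)
            for d in range(dim):
                p[d] = (p[d] * abs(math.sin(r1))
                        - r2 * math.sin(r1) * abs(x1 * best[d] - x2 * p[d]))
                p[d] = max(lo, min(hi, p[d]))     # keep within bounds
            if obj(p) < obj(best):                # greedy update of the target
                best = p[:]
    return best, f_init

sphere = lambda v: sum(x * x for x in v)
best, f0 = gold_sa(sphere, dim=5, bounds=(-10, 10))
print(sphere(best) <= f0)  # True: the greedy target can only improve
```

The golden-section coefficients play the role the abstract describes: they shrink the region around the current best that the sine-driven step explores, rather than scanning the whole solution space.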

  2. Guidelines for developing certification programs for newly generated TRU waste

    International Nuclear Information System (INIS)

    Whitty, W.J.; Ostenak, C.A.; Pillay, K.K.S.; Geoffrion, R.R.

    1983-05-01

    These guidelines were prepared with direction from the US Department of Energy (DOE) Transuranic (TRU) Waste Management Program in support of the DOE effort to certify that newly generated TRU wastes meet the Waste Isolation Pilot Plant (WIPP) Waste Acceptance Criteria. The guidelines provide instructions for generic Certification Program preparation for TRU-waste generators preparing site-specific Certification Programs in response to WIPP requirements. The guidelines address all major aspects of a Certification Program that are necessary to satisfy the WIPP Waste Acceptance Criteria and their associated Compliance Requirements and Certification Quality Assurance Requirements. The details of the major element of a Certification Program, namely, the Certification Plan, are described. The Certification Plan relies on supporting data and control documentation to provide a traceable, auditable account of certification activities. Examples of specific parts of the Certification Plan illustrate the recommended degree of detail. Also, a brief description of generic waste processes related to certification activities is included

  3. Newly democratic Mongolia offering exploration contracts

    International Nuclear Information System (INIS)

    Penttila, W.C.

    1992-01-01

    This paper reports that Mongolia, formerly the Mongolian People's Republic, is working to open its exploration prospects to international operators as it emerges as the world's 15th largest independent nation. The country, about the same size as Alaska with a population of 2 million, held its first free election in July 1990. The newly elected government drafted a constitution that took effect Feb. 12, 1992. The document modifies the previous government's structures to eliminate bureaucracy and allows for political pluralism. At the same time, the government is formulating energy policies, state oil company structure, and resource development philosophy

  4. Collaboration space division in collaborative product development based on a genetic algorithm

    Science.gov (United States)

    Qian, Xueming; Ma, Yanqiao; Feng, Huan

    2018-02-01

    The advance of the global environment, rapidly changing markets, and information technology has created a new stage for design. In such an environment, one strategy for success is Collaborative Product Development (CPD). Organizing people effectively is the goal of CPD, and it addresses the problem with a certain degree of foresight. Development group activities are influenced not only by the methods and decisions available, but also by the correlations among personnel. Grouping personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and the minimum description length (MDL) principle may be used as tools for optimizing the collaboration space. The MDL principle is used to set up the objective function, and the GA is used as the methodology. The algorithm encodes spatial information as a binary chromosome. After repeated crossover, mutation, selection and reproduction, a robust chromosome is found, which can be decoded into an optimal collaboration space. This new method can determine the members of sub-spaces and individual groupings within the staff. Furthermore, the intersection of sub-spaces and the public persons belonging to all sub-spaces can be determined simultaneously.

  5. Comparison of a newly developed binary typing with ribotyping and multilocus sequence typing methods for Clostridium difficile.

    Science.gov (United States)

    Li, Zhirong; Liu, Xiaolei; Zhao, Jianhong; Xu, Kaiyue; Tian, Tiantian; Yang, Jing; Qiang, Cuixin; Shi, Dongyan; Wei, Honglian; Sun, Suju; Cui, Qingqing; Li, Ruxin; Niu, Yanan; Huang, Bixing

    2018-04-01

    Clostridium difficile is the causative pathogen for antibiotic-related nosocomial diarrhea. For epidemiological study and identification of virulent clones, a new binary typing method was developed for C. difficile in this study. The usefulness of this newly developed optimized 10-loci binary typing method was compared with two widely used methods ribotyping and multilocus sequence typing (MLST) in 189 C. difficile samples. The binary typing, ribotyping and MLST typed the samples into 53 binary types (BTs), 26 ribotypes (RTs), and 33 MLST sequence types (STs), respectively. The typing ability of the binary method was better than that of either ribotyping or MLST expressed in Simpson Index (SI) at 0.937, 0.892 and 0.859, respectively. The ease of testing, portability and cost-effectiveness of the new binary typing would make it a useful typing alternative for outbreak investigations within healthcare facilities and epidemiological research. Copyright © 2018 Elsevier B.V. All rights reserved.
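The Simpson Index (SI) quoted above as the measure of typing ability has a standard closed form: with N isolates split into types of sizes n_i, SI = 1 - Σ n_i(n_i - 1) / (N(N - 1)); higher values mean finer discrimination. A small sketch with made-up counts (not the paper's data):

```python
from collections import Counter

# Simpson's Index of Diversity for comparing typing methods. The isolate
# counts below are invented for illustration, not the paper's data.

def simpson_index(type_assignments):
    """SI = 1 - sum n_i(n_i-1) / (N(N-1)) over the type sizes n_i."""
    counts = Counter(type_assignments).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

ribotypes = ["R1"] * 5 + ["R2"] * 3 + ["R3"] * 2               # 3 types, 10 isolates
binary    = ["B1"] * 3 + ["B2"] * 3 + ["B3"] * 2 + ["B4"] * 2  # 4 types, 10 isolates
print(round(simpson_index(ribotypes), 3))                 # 0.689
print(simpson_index(binary) > simpson_index(ribotypes))   # True: finer typing
```

This mirrors the paper's comparison: the method that splits the same isolates into more, smaller types scores a higher SI, as the new binary typing did (0.937) against ribotyping (0.892) and MLST (0.859).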

  6. Who Is Doing Well? A Typology of Newly Homeless Adolescents

    Science.gov (United States)

    Milburn, Norweeta; Liang, Li-Jung; Lee, Sung-Jae; Rotheram-Borus, Mary Jane; Rosenthal, Doreen; Mallett, Shelley; Lightfoot, Marguerita; Lester, Patricia

    2009-01-01

    There is growing evidence to support developing new typologies for homeless adolescents. Current typologies focus on the risks associated with being homeless, with less consideration of the positive attributes of homeless adolescents. The authors examined both risk and protective factors in a sample of newly homeless adolescents. Using cluster…

  7. Newly graduated nurses' empowerment regarding professional competence and other work-related factors.

    Science.gov (United States)

    Kuokkanen, Liisa; Leino-Kilpi, Helena; Numminen, Olivia; Isoaho, Hannu; Flinkman, Mervi; Meretoja, Riitta

    2016-01-01

    Although both nurse empowerment and competence are fundamental concepts in describing newly graduated nurses' professional development and job satisfaction, only a few studies exist on the relationship between these concepts. Therefore, the purpose of this study was to determine how newly graduated nurses assess their empowerment and to clarify how professional competence relates to other work-related factors. A descriptive, cross-sectional and correlational design was applied. The sample comprised newly graduated nurses (n = 318) in Finland. Empowerment was measured using the 19-item Qualities of an Empowered Nurse scale, and the Nurse Competence Scale measured nurses' self-assessed generic competence. In addition to demographic data, the background data included employment sector (public/private), job satisfaction, intent to change/leave job, work schedule (shifts/business hours) and assessments of the quality of care in the workplace. The data were analysed statistically using Spearman's correlation coefficient as well as one-way and multivariate analysis of variance. Cronbach's alpha coefficient was used to estimate internal consistency. Newly graduated nurses perceived their levels of empowerment and competence as fairly high. The association between nurse empowerment and professional competence was statistically significant. Other variables correlating positively with empowerment included employment sector, age, job satisfaction, intent to change job, work schedule, and satisfaction with the quality of care in the work unit. The study indicates that competence had the strongest effect on newly graduated nurses' empowerment. New graduates need support and career opportunities. In the future, nurses' further education and nurse managers' resources for supporting and empowering nurses should respond to newly graduated nurses' requisites for attractive and meaningful work.

  8. Use of a newly developed active thermal neutron detector for in-phantom measurements in a medical LINAC

    Energy Technology Data Exchange (ETDEWEB)

    Bodogni, R.; Sanchez-Doblado, F.; Pola, A.; Gentile, A.; Esposito, A.; Gomez-ros, J. M.; Pressello, M. C.; Lagares, J. I.; Terron, J. A.; Gomez, F.

    2013-07-01

    In this work, a newly developed active thermal neutron detector, based on a solid-state analog device, was used to determine the thermal neutron fluence at selected positions of a simplified human phantom undergoing radiotherapy with a 15 MV LINAC. The results are compared with TLD measurements, with the predictions of a Monte Carlo simulation, and with measurements indirectly performed with a digital device located far from the phantom, inside the treatment room. In this work only the TLD comparison is presented. Since active neutron instruments are usually affected by systematic deviations when used in a pulsed field with a large photon background, the new detector presented in this work may represent an innovative and useful tool for neutron evaluations in accelerator-based radiotherapy. (Author)

  9. Adapted to change: The rapid development of symbiosis in newly settled, fast-maturing chemosymbiotic mussels in the deep sea.

    Science.gov (United States)

    Laming, Sven R; Duperron, Sébastien; Gaudron, Sylvie M; Hilário, Ana; Cunha, Marina R

    2015-12-01

    Symbioses between microbiota and marine metazoa occur globally at chemosynthetic habitats facing imminent threat from anthropogenic disturbance, yet little is known concerning the role of symbiosis during early development in chemosymbiotic metazoans: a critical period in any benthic species' lifecycle. The emerging symbiosis of Idas (sensu lato) simpsoni mussels undergoing development is assessed over a post-larval-to-adult size spectrum using histology and fluorescence in situ hybridisation (FISH). Post-larval development shows similarities to that of both heterotrophic and chemosymbiotic mussels. Data from newly settled specimens confirm aposymbiotic, planktotrophic larval development. Sulphur-oxidising (SOX) symbionts subsequently colonise multiple exposed, non-ciliated epithelia shortly after metamorphosis, but only become abundant on gills as these expand with greater host size. This widespread bathymodiolin, recorded from sulphidic wood, bone and cold-seep habitats, displays a suite of adaptive traits that could buffer against anthropogenic disturbance. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Chronic wrist pain: diagnosis and management. Development and use of a new algorithm

    NARCIS (Netherlands)

    van Vugt, R. M.; Bijlsma, J. W.; van Vugt, A. C.

    1999-01-01

    Chronic wrist pain can be difficult to manage, and the differential diagnosis is extensive. To provide guidelines for assessment of the painful wrist, an algorithm was developed to encourage a structured approach to the diagnosis and management of these patients. A review of the literature on causes

  11. Physicochemical properties of newly developed bioactive glass cement and its effects on various cells.

    Science.gov (United States)

    Washio, Ayako; Nakagawa, Aika; Nishihara, Tatsuji; Maeda, Hidefumi; Kitamura, Chiaki

    2015-02-01

    Biomaterials used in dental treatments are expected to have favorable properties such as biocompatibility, an ability to induce tissue formation in dental pulp and periapical tissue, and sealing that blocks external stimuli. Bioactive glasses have been applied in bone engineering, but rarely in the field of dentistry. In the present study, a bioactive glass cement for dental treatment was developed, and its physicochemical properties and effects on cell responses were analyzed. To clarify the physicochemical attributes of the cement, field emission scanning electron microscopy, X-ray diffraction, and pH measurement were carried out. Cell attachment, morphology, and viability on the cement were also examined to clarify its effects on odontoblast-like cells (KN-3 cells), osteoblastic cells (MC3T3-E1 cells), human periodontal ligament stem/progenitor cells and neuro-differentiative cells (PC-12 cells). A hydroxyapatite-like precipitate formed on the surface of the hardened cement, and the pH changed from pH 10 to pH 9 before stabilizing in simulated body fluid. The cement had no cytotoxic effects on these cells, and in particular induced process elongation of PC-12 cells. Our results suggest that the newly developed bioactive glass cement is suitable for application in dental procedures as a bioactive cement. © 2014 Wiley Periodicals, Inc.

  12. Newly developed low-temperature scanning tunneling microscope and its application to the study of superconducting materials

    International Nuclear Information System (INIS)

    Gao, F.; Dai, C.; Chen, Z.; Huang, G.; Bai, C.; Tao, H.; Yin, B.; Yang, Q.; Zhao, Z.

    1994-01-01

    A newly developed scanning tunneling microscope (STM) capable of operating at room temperature, 77 K, and 4.2 K is presented. This compact STM has a highly symmetric and rigid tunneling unit designed as an integral frame, except for the coarse and fine adjustment parts. The tunneling unit is housed in a small vacuum chamber that is usually pumped down to 2×10⁻⁴ Pa to avoid water contamination. The fine mechanical adjustment makes the tip approach the sample in 5 nm steps. The coarse adjustment not only changes the distance between the tip and the sample, but also aligns the tip normal to the surface of the sample. With this low-temperature STM, atomic-resolution images of a Bi-2212 single crystal and large-scale topographies of a YBa₂Cu₃O₇ thin film were observed at 77 K

  13. Extended great deluge algorithm for the imperfect preventive maintenance optimization of multi-state systems

    International Nuclear Information System (INIS)

    Nahas, Nabil; Khatab, Abdelhakim; Ait-Kadi, Daoud; Nourelfath, Mustapha

    2008-01-01

    This paper deals with the preventive maintenance optimization problem for multi-state systems (MSS). This problem was initially addressed and solved by Levitin and Lisnianski [Optimization of imperfect preventive maintenance for multi-state systems. Reliab Eng Syst Saf 2000;67:193-203]. It consists of finding an optimal sequence of maintenance actions which minimizes maintenance cost while providing the desired system reliability level. This paper proposes an approach which improves the results obtained by the genetic algorithm (GENITOR) of Levitin and Lisnianski. The considered MSS have a range of performance levels, and their reliability is defined as the ability to meet a given demand. This reliability is evaluated using the universal generating function technique. An optimization method based on the extended great deluge algorithm is proposed. This method has the advantage over other methods of being simple and requiring less implementation effort. The developed algorithm is compared with that of Levitin and Lisnianski using a reference example and two newly generated examples. This comparison shows that the extended great deluge gives the best solutions (i.e. those with minimal costs) in 8 of the 10 instances
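The core acceptance rule of a great deluge search can be sketched on a toy minimization problem; the quadratic cost, step rule and parameters below are illustrative stand-ins, and the paper's extended variant adds refinements not shown here:

```python
import random

def great_deluge(cost, neighbour, x0, level0, decay, iters, seed=0):
    """Great deluge search (minimization form): accept any candidate whose
    cost stays below a steadily tightening 'water level'; improving moves
    are always kept."""
    rng = random.Random(seed)
    x, best, level = x0, x0, level0
    for _ in range(iters):
        cand = neighbour(x, rng)
        if cost(cand) <= level or cost(cand) <= cost(x):
            x = cand
            if cost(x) < cost(best):
                best = x
        level -= decay   # the method's single tuning parameter
    return best

# Toy maintenance-cost stand-in: a quadratic with its minimum at 3.0.
cost = lambda x: (x - 3.0) ** 2
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best = great_deluge(cost, step, x0=10.0, level0=cost(10.0),
                    decay=0.01, iters=5000)
print(best)
```

The single decay parameter is what makes the method attractive to tune compared with, e.g., a full simulated-annealing cooling schedule.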

  14. An Effective Recommender Algorithm for Cold-Start Problem in Academic Social Networks

    Directory of Open Access Journals (Sweden)

    Vala Ali Rohani

    2014-01-01

    Abundance of information in recent years has become a serious challenge for web users. Recommender systems (RSs) have often been utilized to alleviate this issue. RSs prune large information spaces to recommend the most relevant items to users by considering their preferences. Nonetheless, in situations where users or items have few opinions, recommendations cannot be made properly. This notable shortcoming in practical RSs is called the cold-start problem. In the present study, we propose a novel approach to address this problem by incorporating social networking features. Coined the enhanced content-based algorithm using social networking (ECSN), the proposed algorithm considers the submitted ratings of faculty mates and friends besides the user’s own preferences. The effectiveness of the ECSN algorithm was evaluated by implementing it in MyExpert, a newly designed academic social network (ASN) for academics in Malaysia. Real feedback from live interactions of MyExpert users with the recommended items was recorded for 12 consecutive weeks, in which four different algorithms, namely random, collaborative, content-based, and ECSN, were applied every three weeks. The empirical results show the significant performance of ECSN in mitigating the cold-start problem as well as improving the prediction accuracy of recommendations when compared with the other studied recommender algorithms.
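The blending idea behind ECSN can be sketched as follows; the function names, data layout and weighting are our own simplifications (the paper does not publish this exact formula), but they illustrate how a social signal can fill in when a user's own data are missing:

```python
def ecsn_score(user, item, ratings, friends, alpha=0.7):
    """Hedged sketch of the ECSN idea: blend a user's own rating signal
    with the mean rating given to the item by their friends/faculty mates.
    `alpha` (our choice, not the paper's) weights the user's own signal."""
    own = ratings.get((user, item))
    social = [ratings[(f, item)] for f in friends.get(user, ())
              if (f, item) in ratings]
    social_mean = sum(social) / len(social) if social else None
    if own is None:                      # cold start: fall back to friends
        return social_mean if social_mean is not None else 0.0
    if social_mean is None:              # no social data: use own signal
        return own
    return alpha * own + (1 - alpha) * social_mean

ratings = {("bob", "paper1"): 4.0, ("carol", "paper1"): 5.0}
friends = {"alice": ["bob", "carol"]}
print(ecsn_score("alice", "paper1", ratings, friends))  # cold start -> 4.5
```

Here "alice" has no rating of her own, so the score falls back entirely to her friends' mean, which is exactly the cold-start situation the algorithm targets.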

  15. Feasibility and preliminary effects of an intervention targeting schema development for caregivers of newly admitted hospice patients.

    Science.gov (United States)

    Lindstrom, Kathryn B; Mazurek Melnyk, Bernadette

    2013-06-01

    The transition to hospice care is a stressful experience for caregivers, who report high anxiety, unpreparedness, and lack of confidence. These sequelae are likely explained by the lack of an accurate cognitive schema: not knowing what to expect or how to help their loved one. Few interventions exist for this population, and most do not measure preparedness, confidence, and anxiety using a schema-building conceptual framework for a new experience. The purpose of this study was to test the feasibility and preliminary effects of an intervention program, Education and Skill building Intervention for Caregivers of Hospice patients (ESI-CH), using an innovative conceptual design that targets cognitive schema development and basic skill building for caregivers of loved ones newly admitted to hospice services. A pre-experimental one-group pre- and post-test study design was used. Eighteen caregivers caring for loved ones in their homes were recruited, and twelve completed the pilot study. Depression, anxiety, activity restriction, preparedness, and beliefs/confidence were measured. Caregivers reported increased preparedness, more helpful beliefs, and more confidence in their ability to care for their loved one. Preliminary trends suggested decreased anxiety levels for the intervention group. Caregivers who completed the intervention program rated the program very good or excellent, thought the information was helpful and timely, and would recommend it to friends. Results show promise that the ESI-CH program may serve as an evidence-based program to support caregivers in their role as caregiver to a newly admitted hospice patient.

  16. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    Science.gov (United States)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  17. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting the optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm was developed employing the two ANN models. • The algorithm showed improved thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm to demonstrate improved thermal comfort and building energy efficiency in accommodation buildings during the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures, and the other predicted the time required to restore the current indoor temperature to the normal set-point temperature. Using numerical simulation methods, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. The test results showed that the two ANN models achieved acceptable prediction error rates when applied in the control algorithm. In addition, the algorithm based on the two ANN models provided a more comfortable and energy-efficient indoor thermal environment than the two conventional control methods, which employed, respectively, a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period. The resulting operating ranges were 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with two predictive and adaptive ANN models can be used to design a more comfortable and energy-efficient indoor thermal environment for accommodation buildings in a comprehensive manner.
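The decision logic of such a setback-control algorithm can be sketched as follows; the two predictor functions stand in for the paper's trained ANN models and use made-up coefficients purely for illustration:

```python
# Stand-in for ANN model 1: minutes needed to restore the set-point (26 °C)
# from a given setback temperature (coefficients are invented).
def predict_restore_minutes(setback_c):
    return 12.0 * (setback_c - 26.0)

# Stand-in for ANN model 2: predicted cooling energy for the unoccupied
# period at a given setback temperature (higher setback -> less cooling).
def predict_cooling_kwh(setback_c):
    return max(0.0, 40.0 - 6.0 * (setback_c - 26.0))

def choose_setback(minutes_until_occupancy,
                   candidates=(25.0, 26.0, 27.0, 28.0)):
    """Pick the setback temperature that minimizes predicted cooling energy
    while still letting the room recover to 26 °C before occupants return."""
    feasible = [t for t in candidates
                if predict_restore_minutes(t) <= minutes_until_occupancy]
    return min(feasible, key=predict_cooling_kwh) if feasible else min(candidates)

print(choose_setback(30.0))   # long lead time: deepest setback is feasible
print(choose_setback(20.0))   # shorter lead time: a milder setback wins
```

The real algorithm replaces both stand-ins with ANN models retrained adaptively from simulation or building data.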

  18. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 4: Preliminary nonscanner models and count conversion algorithms

    Science.gov (United States)

    Halyo, Nesim; Choi, Sang H.

    1987-01-01

    Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants necessary for the conversion algorithms, though the frequency with which these updates were needed was uncertain. This analysis therefore develops mathematical models for the conversion of irradiance at the sensor field of view (FOV) limiter into data counts, derives from this model two algorithms for the conversion of data counts to irradiance at the sensor FOV aperture and develops measurement models which account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen since it provided sufficient accuracy using simpler computations.
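A gain/offset count conversion of the kind described amounts to inverting a linear sensor model; the sketch below uses illustrative calibration constants, not the actual ERBE coefficients:

```python
def irradiance_to_counts(irradiance, gain, offset):
    """Forward sensor model: counts = gain * irradiance + offset."""
    return gain * irradiance + offset

def counts_to_irradiance(counts, gain, offset):
    """Gain/offset conversion algorithm: invert the linear count model."""
    return (counts - offset) / gain

gain, offset = 4.2, 150.0      # hypothetical calibration constants
c = irradiance_to_counts(300.0, gain, offset)    # W/m^2 -> counts
recovered = counts_to_irradiance(c, gain, offset)
print(recovered)               # recovers ~300.0 W/m^2
```

The sensor-model task described above is then to keep `gain` and `offset` up to date as the instrument drifts; the Kalman-filter alternative mentioned in the abstract refines the same constants recursively at higher computational cost.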

  19. Primary chromatic aberration elimination via optimization work with genetic algorithm

    Science.gov (United States)

    Wu, Bo-Wen; Liu, Tung-Kuan; Fang, Yi-Chin; Chou, Jyh-Horng; Tsai, Hsien-Lin; Chang, En-Hao

    2008-09-01

    Chromatic aberration plays a part in modern optical systems, especially in digitalized and smart optical systems. Much effort has been devoted to eliminating specific chromatic aberrations in order to match the demand for advanced digitalized optical products. Basically, the elimination of axial chromatic and lateral color aberration of an optical lens and system depends on the selection of optical glass. According to reports from glass companies all over the world, the number of newly developed optical glasses on the market exceeds three hundred. However, due to the complexity of a practical optical system, optical designers have so far had difficulty finding the right solution to eliminate small axial and lateral chromatic aberrations except by the Damped Least Squares (DLS) method, which is limited insofar as it has not yet managed to find a better optical system configuration. In the present research, genetic algorithms are used to replace traditional DLS in order to eliminate axial and lateral chromatic aberration, by combining the theories of geometric optics in Tessar-type lenses with a technique involving binary/real encoding, multiple dynamic crossover and random gene mutation to find a much better configuration of optical glasses. By implementing the algorithms outlined in this paper, satisfactory results can be achieved in eliminating axial and lateral color aberration.

  20. Development of a parallel genetic algorithm using MPI and its application in a nuclear reactor core. Design optimization

    International Nuclear Information System (INIS)

    Waintraub, Marcel; Pereira, Claudio M.N.A.; Baptista, Rafael P.

    2005-01-01

    This work presents the development of a distributed parallel genetic algorithm applied to nuclear reactor core design optimization. In the implementation of the parallelism, the Message Passing Interface (MPI) library, a standard for parallel computation on distributed-memory platforms, has been used. Another important characteristic of MPI is its portability across various architectures. The main objectives of this paper are: validation of the results obtained by the application of this algorithm to a nuclear reactor core optimization problem, through comparisons with previous results presented by Pereira et al.; and a performance test of the Brazilian Nuclear Engineering Institute (IEN) cluster on reactor physics optimization problems. The experiments demonstrated that the developed parallel genetic algorithm using the MPI library yielded significant gains in the obtained results and a marked reduction in processing time. Such results support the use of parallel genetic algorithms for the solution of nuclear reactor core optimization problems. (author)

  1. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high-performance computing has been widely adopted, dividing the entire study area into multiple subdomains and allocating each subdomain to a different computing node in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that affects the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost of each computing node in order to minimize the total execution time and reduce the overall communication cost of the entire simulation. This research introduces three algorithms that optimize the allocation by considering spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; and 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared on different factors. Further, we adopt the K&K algorithm for a demonstration experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves simulation performance through better subdomain allocation. 
This method can also be adopted for other relevant atmospheric and numerical

  2. Fluorescent and radiolabelling of pepsin-digested human glomerular basement membrane with a newly developed hydroxy-coumarin derivative (CASE)

    International Nuclear Information System (INIS)

    Rand-Weaver, M.; Abuknesha, R.A.; Price, R.G.

    1985-01-01

    The labelling of pepsin-digested human glomerular basement membrane (pHGBM) with a newly developed fluorescent iodine acceptor, 7-hydroxy-coumarin-3-acetic acid N-hydroxysuccinimidyl ester (CASE), is described. The binding of a monoclonal antibody to pHGBM was assessed by radiobinding assays; when directly iodinated pHGBM was used there was no apparent binding, whereas when CASE was conjugated to pHGBM prior to iodination, 11% binding was achieved. CASE, acting as an iodine acceptor, may be useful for proteins that contain few or inaccessible tyrosine residues or that are destroyed by the introduction of ¹²⁵I. Since CASE is fluorescent, small amounts of material can be detected during isolation prior to iodination. (orig.)

  3. Measuring river from the cloud - River width algorithm development on Google Earth Engine

    Science.gov (United States)

    Yang, X.; Pavelsky, T.; Allen, G. H.; Donchyts, G.

    2017-12-01

    Rivers are some of the most dynamic features of the terrestrial land surface. They help distribute freshwater, nutrients, and sediment, and they are also responsible for some of the greatest natural hazards. Despite their importance, our understanding of river behavior is limited at the global scale, in part because we do not have a river observational dataset that spans both time and space. Remote sensing data represent a rich, largely untapped resource for observing river dynamics. In particular, publicly accessible archives of satellite optical imagery, which date back to the 1970s, can be used to study the planview morphodynamics of rivers at the global scale. Here we present an image processing algorithm, developed on the Google Earth Engine cloud-based platform, that automatically extracts river centerlines and widths from Landsat 5, 7, and 8 scenes at 30 m resolution. Our algorithm uses the latest monthly global surface water history dataset and the existing Global River Width from Landsat (GRWL) dataset to efficiently extract river masks from each Landsat scene. A combination of distance-transform and skeletonization techniques is then used to extract river centerlines. Finally, our algorithm calculates the wetted river width at each centerline pixel, perpendicular to its local centerline direction. We validated this algorithm using in situ data estimated from 16 USGS gauge stations (N=1781) and find that 92% of the width differences are within 60 m (i.e. the minimum length of 2 Landsat pixels). Leveraging Earth Engine's infrastructure of collocated data and processing power, our goal is to use this algorithm to reconstruct the morphodynamic history of rivers globally by processing over 100,000 Landsat 5 scenes, covering 1984 to 2013.
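The distance-transform width estimate at a centerline pixel can be illustrated on a synthetic water mask; this toy sketch uses a brute-force distance computation and a straight channel, whereas the actual algorithm runs a fast distance transform and skeletonization on Earth Engine:

```python
# Synthetic Landsat-like water mask: 1 = water, 0 = land, 30 m pixels.
W, H, PIX = 20, 11, 30.0
mask = [[1 if 3 <= y <= 7 else 0 for x in range(W)] for y in range(H)]

def dist_to_land(x, y):
    """Brute-force Euclidean distance to the nearest land pixel (a real
    implementation would use a fast distance transform instead)."""
    return min(((x - u) ** 2 + (y - v) ** 2) ** 0.5
               for v in range(H) for u in range(W) if mask[v][u] == 0)

# For this straight channel the centerline is the row y = 5.  The wetted
# width at a centerline pixel is about twice its distance to the nearest
# land pixel, minus one pixel to correct for centre-to-centre distances.
widths = [(2 * dist_to_land(x, 5) - 1) * PIX for x in range(5, 15)]
print(widths[0])   # 150.0 m for this 5-pixel-wide channel
```

On real imagery the centerline comes from skeletonizing the river mask, and the width is measured perpendicular to the local centerline direction rather than assumed horizontal as in this toy case.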

  4. SU-F-T-20: Novel Catheter Lumen Recognition Algorithm for Rapid Digitization

    Energy Technology Data Exchange (ETDEWEB)

    Dise, J; McDonald, D; Ashenafi, M; Peng, J; Mart, C; Koch, N; Vanek, K [Medical University of South Carolina, Charleston, SC (United States)

    2016-06-15

    Purpose: Manual catheter recognition remains a time-consuming aspect of high-dose-rate brachytherapy (HDR) treatment planning. In this work, a novel catheter lumen recognition algorithm was created for accurate and rapid digitization. Methods: MATLAB v8.5 was used to create the catheter recognition algorithm. Initially, the algorithm searches the patient CT dataset using an intensity-based k-means filter designed to locate catheters. Once the catheters have been located, seed points are manually selected to initialize digitization of each catheter. From each seed point, the algorithm searches locally in order to automatically digitize the remaining catheter. This digitization is accomplished by finding pixels with image curvature and divergence parameters similar to those of the seed pixel. Newly digitized pixels are treated as new seed positions, and Hessian image analysis is used to direct the algorithm toward neighboring catheter pixels and to make the algorithm insensitive to adjacent catheters that are unresolvable on CT, air pockets, and high-Z artifacts. The algorithm was tested using 11 HDR treatment plans, including the Syed template, tandem and ovoid applicator, and multi-catheter lung brachytherapy. Digitization error was calculated by comparing manually determined catheter positions to those determined by the algorithm. Results: The digitization error was 0.23 mm ± 0.14 mm axially and 0.62 mm ± 0.13 mm longitudinally at the tip. The time of digitization, following initial seed placement, was less than 1 second per catheter. The maximum total time required to digitize all tested applicators was 4 minutes (Syed template with 15 needles). Conclusion: This algorithm successfully digitizes HDR catheters for a variety of applicators with or without CT markers. The minimal axial error demonstrates the accuracy of the algorithm, and its insensitivity to image artifacts and challenging catheter positioning. Future work to automatically place initial seed

  5. SU-F-T-20: Novel Catheter Lumen Recognition Algorithm for Rapid Digitization

    International Nuclear Information System (INIS)

    Dise, J; McDonald, D; Ashenafi, M; Peng, J; Mart, C; Koch, N; Vanek, K

    2016-01-01

    Purpose: Manual catheter recognition remains a time-consuming aspect of high-dose-rate brachytherapy (HDR) treatment planning. In this work, a novel catheter lumen recognition algorithm was created for accurate and rapid digitization. Methods: MATLAB v8.5 was used to create the catheter recognition algorithm. Initially, the algorithm searches the patient CT dataset using an intensity-based k-means filter designed to locate catheters. Once the catheters have been located, seed points are manually selected to initialize digitization of each catheter. From each seed point, the algorithm searches locally in order to automatically digitize the remaining catheter. This digitization is accomplished by finding pixels with image curvature and divergence parameters similar to those of the seed pixel. Newly digitized pixels are treated as new seed positions, and Hessian image analysis is used to direct the algorithm toward neighboring catheter pixels and to make the algorithm insensitive to adjacent catheters that are unresolvable on CT, air pockets, and high-Z artifacts. The algorithm was tested using 11 HDR treatment plans, including the Syed template, tandem and ovoid applicator, and multi-catheter lung brachytherapy. Digitization error was calculated by comparing manually determined catheter positions to those determined by the algorithm. Results: The digitization error was 0.23 mm ± 0.14 mm axially and 0.62 mm ± 0.13 mm longitudinally at the tip. The time of digitization, following initial seed placement, was less than 1 second per catheter. The maximum total time required to digitize all tested applicators was 4 minutes (Syed template with 15 needles). Conclusion: This algorithm successfully digitizes HDR catheters for a variety of applicators with or without CT markers. The minimal axial error demonstrates the accuracy of the algorithm, and its insensitivity to image artifacts and challenging catheter positioning. Future work to automatically place initial seed
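The seeded growth step at the heart of such a digitization algorithm can be sketched on a tiny synthetic image; a plain intensity-similarity test stands in here for the curvature/divergence and Hessian analysis the authors describe, and the image values are invented:

```python
from collections import deque

# Hypothetical 2D "CT slice": 0 = background, 9 = bright catheter track.
img = [
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 0, 9, 0],
    [0, 0, 0, 9, 0],
]

def grow(seed, tol=1):
    """Seeded region growing: starting from a manually chosen seed pixel,
    absorb 4-connected neighbours whose intensity is within `tol` of the
    seed's intensity; each absorbed pixel becomes a new seed."""
    H, W = len(img), len(img[0])
    sy, sx = seed
    ref = img[sy][sx]
    seen, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < H and 0 <= nx < W and (ny, nx) not in seen
                    and abs(img[ny][nx] - ref) <= tol):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return sorted(seen)

print(grow((0, 2)))   # digitized catheter pixels, tip to tail
```

Starting from the seed at the catheter tip, the growth follows the connected bright track and ignores the background, which is the behaviour that makes one manual seed per catheter sufficient.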

  6. Systems Engineering Approach to Develop Guidance, Navigation and Control Algorithms for Unmanned Ground Vehicle

    Science.gov (United States)

    2016-09-01

    [Excerpt from the thesis abbreviation list:] Global Positioning System; HNA, hybrid navigation algorithm; HRI, human-robot interface; IED, Improvised Explosive Device; IMU, inertial measurement unit; … Potential Field Method; R&D, research and development; RDT&E, research, development, test and evaluation; RF, radiofrequency; RGB, red, green and blue; ROE, … were radiofrequency (RF) controlled and pneumatically actuated upon receiving the wireless commands from the radio operator. The pairing of such an

  7. An improved molecular dynamics algorithm to study thermodiffusion in binary hydrocarbon mixtures

    Science.gov (United States)

    Antoun, Sylvie; Saghir, M. Ziad; Srinivasan, Seshasai

    2018-03-01

    In multicomponent liquid mixtures, a diffusion flow of chemical species can be induced by temperature gradients, which leads to a separation of the constituent components. This cross effect between temperature and concentration is known as thermodiffusion or the Ludwig-Soret effect. The performance of boundary-driven non-equilibrium molecular dynamics together with the enhanced heat exchange (eHEX) algorithm was studied by assessing the thermodiffusion process in n-pentane/n-decane (nC5-nC10) binary mixtures. The eHEX algorithm is an extended version of the HEX algorithm with an improved energy-conservation property. In addition, the Transferable Potentials for Phase Equilibria-United Atom force field was employed in all molecular dynamics (MD) simulations to model the molecular interactions in the fluid precisely. The Soret coefficients of the n-pentane/n-decane (nC5-nC10) mixture for three different compositions (at 300.15 K and 0.1 MPa) were calculated and compared with the experimental data and other MD results available in the literature. Results of our newly employed MD algorithm showed good agreement with experimental data and better accuracy than other MD procedures.
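At steady state, the Soret coefficient follows from the ratio of the composition and temperature gradients, S_T = -(1/(x₁(1-x₁))) dx₁/dT; a sketch with made-up bin-averaged profiles of the kind such a simulation produces (the numbers are illustrative, not the paper's data):

```python
# Hypothetical steady-state profiles from a boundary-driven NEMD run:
# temperature and n-pentane mole fraction sampled in bins along the box.
T = [290.0, 295.0, 300.0, 305.0, 310.0]      # K
x1 = [0.520, 0.510, 0.500, 0.490, 0.480]     # nC5 mole fraction

def soret_coefficient(T, x1):
    """Estimate S_T = -(1/(x1*(1-x1))) * dx1/dT from steady-state
    profiles, using a least-squares slope for dx1/dT and evaluating
    the prefactor at the mean composition."""
    n = len(T)
    mT, mx = sum(T) / n, sum(x1) / n
    slope = (sum((t - mT) * (x - mx) for t, x in zip(T, x1))
             / sum((t - mT) ** 2 for t in T))
    return -slope / (mx * (1.0 - mx))

print(soret_coefficient(T, x1))   # Soret coefficient in 1/K
```

A positive value, as here, means the lighter component (nC5) migrates toward the cold side, which is the sign convention usually reported for such alkane mixtures.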

  8. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    International Nuclear Information System (INIS)

    Machnes, S.; Sander, U.; Glaser, S. J.; Schulte-Herbrueggen, T.; Fouquieres, P. de; Gruslys, A.; Schirmer, S.

    2011-01-01

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. Turning the time course of pulses, i.e., piecewise-constant control amplitudes, iteratively into an optimized shape is a task typically amenable to numerical optimal control. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
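
    The concurrent (GRAPE-style) update scheme mentioned above can be sketched in a few lines: a piecewise-constant control pulse is iteratively reshaped to drive a qubit from |0> to |1>. This is an illustration only, not DYNAMO code: the drift Hamiltonian is omitted (resonant case) and gradients are taken by finite differences rather than the analytic GRAPE gradients.

```python
import math

# GRAPE-style pulse optimization sketch (not DYNAMO itself): maximize the
# |0> -> |1> transfer fidelity of a piecewise-constant control u_k coupling
# through sigma_x, updating all slices concurrently. Drift is omitted
# (resonant case) and gradients use finite differences for brevity.

N, dt = 10, 0.2                    # number of slices, slice duration

def propagator(u):
    # exp(-i * u*dt * sigma_x) = cos(u*dt) I - i sin(u*dt) sigma_x  (2x2)
    c, s = math.cos(u * dt), math.sin(u * dt)
    return [[c, -1j * s], [-1j * s, c]]

def fidelity(controls):
    psi = [1.0 + 0j, 0.0 + 0j]     # start in |0>
    for u in controls:
        U = propagator(u)
        psi = [U[0][0] * psi[0] + U[0][1] * psi[1],
               U[1][0] * psi[0] + U[1][1] * psi[1]]
    return abs(psi[1]) ** 2        # population transferred to |1>

controls = [0.3] * N
eps, lr = 1e-6, 0.5
for _ in range(300):               # concurrent (GRAPE-like) updates
    base = fidelity(controls)
    grad = []
    for k in range(N):
        bumped = controls[:]
        bumped[k] += eps
        grad.append((fidelity(bumped) - base) / eps)
    controls = [u + lr * g for u, g in zip(controls, grad)]

print(f"final fidelity = {fidelity(controls):.4f}")
```

    A Krotov-type method would instead sweep through the slices one at a time, updating each before recomputing the propagation.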

  9. Rapid Mental Computation System as a Tool for Developing Algorithmic Thinking of Elementary School Students

    Directory of Open Access Journals (Sweden)

    Rushan Ziatdinov

    2012-07-01

    In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are in fact simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system lays the basis for the study of computer science in secondary school.
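
    A classic example of the genre (a standard mental-computation rule, not necessarily one drawn from this paper) is squaring a number ending in 5: multiply the leading digits n by n + 1 and append 25, since (10n + 5)^2 = 100*n*(n + 1) + 25.

```python
# Rapid mental computation rule for squaring numbers ending in 5:
# (10n + 5)^2 = 100 * n * (n + 1) + 25, i.e. compute n*(n+1) and append "25".

def square_ending_in_5(x):
    assert x % 10 == 5, "rule applies only to numbers ending in 5"
    n = x // 10
    return 100 * n * (n + 1) + 25

for x in (35, 85, 115):
    print(x, "->", square_ending_in_5(x))   # 1225, 7225, 13225
```

    Pupils apply the same algebraic identity mentally: for 85, compute 8 x 9 = 72 and append 25 to get 7225.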

  10. The development of a new algorithm to calculate a survival function in non-parametric ways

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    In this study, a generalized formula of the Kaplan-Meier method is developed. The idea behind this algorithm is that the Kaplan-Meier estimator gives the same result as the redistribute-to-the-right algorithm, so the Kaplan-Meier result is reproduced by redistributing mass to the right. This can be explained in the following steps: first, the same mass is assigned to all observed points; second, on reaching a censored point, its mass is redistributed to the points on its right according to the following rule: normalize the masses located to the right of the censored point, and redistribute the mass of the censored point to the right in proportion to those normalized masses. This is the main idea of the algorithm. The method is more efficient than the PL (product-limit) estimator in the sense that it reduces the mass discarded beyond the censored observations and, just like the redistribute-to-the-right algorithm, it is sufficient for the underlying probability theory.
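
    The equivalence described above can be sketched directly (an illustrative implementation, not the authors' code): every observation starts with mass 1/n, the mass of each censored point is passed to the observations on its right in proportion to their current masses, and the resulting survival function matches the Kaplan-Meier product-limit estimator.

```python
# Redistribute-to-the-right sketch: start with mass 1/n per observation and,
# at each censored point, pass its mass rightwards in proportion to the
# remaining masses. The survival function built from the surviving masses
# then coincides with the Kaplan-Meier (product-limit) estimator.

def redistribute_to_the_right(times, events):
    # times sorted ascending; events[i] = 1 for an observed event, 0 for censored
    n = len(times)
    mass = [1.0 / n] * n
    for i in range(n):
        if events[i] == 0:
            right = sum(mass[i + 1:])
            if right > 0:
                for j in range(i + 1, n):
                    mass[j] += mass[i] * (mass[j] / right)
                mass[i] = 0.0
    return mass

def survival_from_mass(mass, times, t):
    return sum(m for m, ti in zip(mass, times) if ti > t)

def kaplan_meier(times, events, t):
    # direct product-limit estimate (distinct, sorted times assumed)
    s, at_risk = 1.0, len(times)
    for ti, ei in zip(times, events):
        if ti <= t and ei == 1:
            s *= (at_risk - 1) / at_risk
        at_risk -= 1
    return s

times  = [1, 2, 3, 4, 5]
events = [1, 0, 1, 1, 1]        # the observation at t=2 is censored
mass = redistribute_to_the_right(times, events)
for t in (1, 3, 4):
    print(t, survival_from_mass(mass, times, t), kaplan_meier(times, events, t))
```

    For this small example both estimates agree at every event time, e.g. S(1) = 4/5 and S(3) = 8/15.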

  11. Accuracy assessment of pharmacogenetically predictive warfarin dosing algorithms in patients of an academic medical center anticoagulation clinic.

    Science.gov (United States)

    Shaw, Paul B; Donovan, Jennifer L; Tran, Maichi T; Lemon, Stephenie C; Burgwinkle, Pamela; Gore, Joel

    2010-08-01

    The objectives of this retrospective cohort study are to evaluate the accuracy of pharmacogenetic warfarin dosing algorithms in predicting therapeutic dose and to determine if this degree of accuracy warrants the routine use of genotyping to prospectively dose patients newly started on warfarin. Seventy-one patients of an outpatient anticoagulation clinic at an academic medical center who were age 18 years or older on a stable, therapeutic warfarin dose with international normalized ratio (INR) goal between 2.0 and 3.0, and cytochrome P450 isoenzyme 2C9 (CYP2C9) and vitamin K epoxide reductase complex subunit 1 (VKORC1) genotypes available between January 1, 2007 and September 30, 2008 were included. Six pharmacogenetic warfarin dosing algorithms were identified from the medical literature. Additionally, a 5 mg fixed dose approach was evaluated. Three algorithms, Zhu et al. (Clin Chem 53:1199-1205, 2007), Gage et al. (J Clin Ther 84:326-331, 2008), and International Warfarin Pharmacogenetic Consortium (IWPC) (N Engl J Med 360:753-764, 2009) were similar in the primary accuracy endpoints with mean absolute error (MAE) ranging from 1.7 to 1.8 mg/day and coefficient of determination (R²) from 0.61 to 0.66. However, the Zhu et al. algorithm severely over-predicted dose (defined as ≥2× or ≥2 mg/day more than actual dose) in twice as many (14 vs. 7%) patients as Gage et al. 2008 and IWPC 2009. In conclusion, the algorithms published by Gage et al. 2008 and the IWPC 2009 were the two most accurate pharmacogenetically based equations available in the medical literature in predicting therapeutic warfarin dose in our study population. However, the degree of accuracy demonstrated does not support the routine use of genotyping to prospectively dose all patients newly started on warfarin.
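
    The two accuracy endpoints used in the study, mean absolute error (MAE) and the coefficient of determination, are easy to compute once predicted and actual therapeutic doses are paired. The dose values in this sketch are hypothetical, not data from the study.

```python
# Accuracy endpoints used in the study, shown on a toy example: mean absolute
# error (MAE) between predicted and actual therapeutic doses, and the
# coefficient of determination R^2. All dose values are hypothetical.

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

actual    = [4.0, 5.5, 7.0, 3.0, 6.5]   # mg/day, hypothetical stable doses
predicted = [4.5, 5.0, 8.0, 2.5, 6.0]   # mg/day, hypothetical algorithm output
print(f"MAE = {mae(actual, predicted):.2f} mg/day, "
      f"R^2 = {r_squared(actual, predicted):.3f}")
```
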

  12. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many of them have not yet been presented comprehensively. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application; the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  13. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc.), genetic and evolutionary strategies, artificial immune systems etc. Well-known examples of applications include: aircraft wing design, wind turbine design, bionic car, bullet train, optimal decisions related to traffic, appropriate strategies to survive under a well-adapted immune system etc. Based on the collective social behaviour of organisms, researchers have developed optimization strategies taking into account not only individuals, but also groups and environment. However, learning from nature, new classes of approaches can be identified, tested and compared against already available algorithms...
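
    Particle swarm optimization, the first swarm-intelligence method listed above, can be sketched in a few lines. The parameter values below are conventional defaults, not taken from the chapter.

```python
import random

# Minimal particle swarm optimization (PSO) sketch minimizing the 2-D sphere
# function. Each particle is pulled toward its own best position (cognitive
# term) and the swarm's best position (social term).

random.seed(42)

def sphere(x):
    return sum(xi * xi for xi in x)

DIM, SWARM, ITERS = 2, 20, 200
W, C1, C2 = 0.7, 1.5, 1.5        # inertia, cognitive, social weights

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                  # per-particle best positions
gbest = min(pbest, key=sphere)[:]            # swarm-wide best position

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]

print("best value:", sphere(gbest))
```

    Ant colony, cuckoo, bat and firefly variants replace the velocity update with other nature-derived movement rules while keeping the same evaluate-and-keep-the-best loop.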

  14. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  15. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  16. Critical thinking dispositions among newly graduated nurses

    Science.gov (United States)

    Wangensteen, Sigrid; Johansson, Inger S; Björkström, Monica E; Nordström, Gun

    2010-01-01

    wangensteen s., johansson i.s., björkström m.e. & nordström g. (2010) Critical thinking dispositions among newly graduated nurses. Journal of Advanced Nursing 66(10), 2170–2181. Aim The aim of the study was to describe critical thinking dispositions among newly graduated nurses in Norway, and to study whether background data had any impact on critical thinking dispositions. Background Competence in critical thinking is one of the expectations of nursing education. Critical thinkers are described as well-informed, inquisitive, open-minded and orderly in complex matters. Critical thinking competence has thus been designated as an outcome for judging the quality of nursing education programmes and for the development of clinical judgement. The ability to think critically is also described as reducing the research–practice gap and fostering evidence-based nursing. Methods A cross-sectional descriptive study was performed. The data were collected between October 2006 and April 2007 using the California Critical Thinking Disposition Inventory. The response rate was 33% (n = 618). Pearson’s chi-square tests were used to analyse the data. Results Nearly 80% of the respondents reported a positive disposition towards critical thinking. The highest mean score was on the Inquisitiveness subscale and the lowest on the Truth-seeking subscale. A statistically significant higher proportion of nurses with high critical thinking scores were found among those older than 30 years, those with university education prior to nursing education, and those working in community health care. Conclusion Nurse leaders and nurse teachers should encourage and nurture critical thinking among newly graduated nurses and nursing students. The low Truth-seeking scores found may be a result of traditional teaching strategies in nursing education and might indicate a need for more student-active learning models. PMID:20384637

  17. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi.

    Science.gov (United States)

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-07-01

    Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. Published by the BMJ Publishing Group Limited.

  18. Planning for the Management and Disposition of Newly Generated TRU Waste from REDC

    International Nuclear Information System (INIS)

    Coffey, D. E.; Forrester, T. W.; Krause, T.

    2002-01-01

    This paper describes the waste characteristics of newly generated transuranic waste from the Radiochemical Engineering and Development Center at the Oak Ridge National Laboratory and the basic certification structure that will be proposed by the University of Tennessee-Battelle and Bechtel Jacobs Company LLC to the Waste Isolation Pilot Plant for this waste stream. The characterization approach uses information derived from the active production operations as acceptable knowledge for the Radiochemical Engineering and Development Center transuranic waste. The characterization approach includes smear data taken from processing and waste staging hot cells, as well as analytical data on product and liquid waste streams going to liquid waste disposal. Bechtel Jacobs Company and University of Tennessee-Battelle are currently developing the elements of a Waste Isolation Pilot Plant-compliant program with a plan to be certified by the Waste Isolation Pilot Plant for shipment of newly generated transuranic waste in the next few years. The current activities include developing interface plans, program documents, and waste stream specific procedures

  19. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Monitoring the behaviour and activities of people in video surveillance has gained increasing importance in computer vision. This paper proposes a new approach to model the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by a frame-differencing algorithm. A thinning algorithm is then used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, and shoulder, elbow and knee points, are extracted. This work represents the body model in three different ways: a stick-figure model, a patch model and a rectangle body model. The activities of humans are analysed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm are evaluated.
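
    The first step described above, background subtraction by frame differencing, can be sketched directly: pixels whose absolute intensity change between consecutive frames exceeds a threshold are marked as foreground. The frames here are tiny synthetic grayscale arrays, not real video.

```python
# Frame-differencing background subtraction sketch: mark a pixel as foreground
# (1) when its intensity changes by more than a threshold between two
# consecutive frames, otherwise background (0).

def frame_difference(prev, curr, threshold=25):
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev_frame = [[10, 10, 10, 10],
              [10, 10, 10, 10],
              [10, 10, 10, 10]]
curr_frame = [[10, 10, 10, 10],
              [10, 200, 200, 10],     # a bright object entered the scene
              [10, 10, 10, 10]]

mask = frame_difference(prev_frame, curr_frame)
for row in mask:
    print(row)
```

    The thinning step then reduces the foreground blob in such a mask to a one-pixel-wide skeleton from which the feature points are read off.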

  20. The Development of Advanced Processing and Analysis Algorithms for Improved Neutron Multiplicity Measurements

    International Nuclear Information System (INIS)

    Santi, P.; Favalli, A.; Hauck, D.; Henzl, V.; Henzlova, D.; Ianakiev, K.; Iliev, M.; Swinhoe, M.; Croft, S.; Worrall, L.

    2015-01-01

    One of the most distinctive and informative signatures of special nuclear materials is the emission of correlated neutrons from either spontaneous or induced fission. Because the emission of correlated neutrons is a unique and unmistakable signature of nuclear materials, the ability to effectively detect, process, and analyze these emissions will continue to play a vital role in non-proliferation, safeguards, and security missions. While currently deployed neutron measurement techniques based on 3He proportional counter technology, such as the neutron coincidence and multiplicity counters used by the International Atomic Energy Agency, have proven effective over the past several decades for a wide range of measurement needs, a number of technical and practical limitations exist in continuing to apply this technique to future measurement needs. In many cases, those limitations lie within the algorithms used to process and analyze the detected signals from these counters, which were initially developed approximately 20 years ago based on the technology and computing power available at that time. Over the past three years, an effort has been undertaken to address the general shortcomings in these algorithms by developing new algorithms based on fundamental physics principles that should lead to more sensitive neutron non-destructive assay instrumentation. Through this effort, a number of advancements have been made in correcting incoming data for electronic dead time, in connecting the two main types of analysis techniques used to quantify the data (shift register analysis and Feynman variance-to-mean analysis), and in the underlying physical model, known as the point model, that is used to interpret the data in terms of the characteristic properties of the item being measured. The current status of the testing and evaluation of these advancements in correlated neutron analysis techniques will be discussed.
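
    The Feynman variance-to-mean quantity mentioned above has a simple form: for neutron counts collected in fixed time gates, the excess variance Y = var/mean - 1 is zero for an uncorrelated (Poisson) source and positive when neutrons arrive in correlated bursts, as in fission. The synthetic count data below are illustrative only.

```python
import math, random

# Feynman variance-to-mean sketch: Y = var/mean - 1 computed over gate counts.
# A Poisson source gives Y ~ 0; pair-correlated bursts give Y > 0.

random.seed(1)

def poisson(lam):
    # Knuth's algorithm for Poisson-distributed integers
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def feynman_y(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean - 1.0

gates = 20000
poisson_counts = [poisson(5.0) for _ in range(gates)]       # uncorrelated source
burst_counts = [2 * poisson(2.5) for _ in range(gates)]     # neutrons in pairs

print("Y (Poisson) =", round(feynman_y(poisson_counts), 3))
print("Y (bursts)  =", round(feynman_y(burst_counts), 3))
```

    In a real multiplicity counter the gate width is scanned and detector dead time corrected before Y is interpreted through the point model.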

  1. Development of pattern recognition algorithms for particles detection from atmospheric images

    International Nuclear Information System (INIS)

    Khatchadourian, S.

    2010-01-01

    The HESS experiment consists of a system of telescopes designed to observe cosmic rays. Since the project has achieved a high level of performance, a second phase has been initiated. This implies the addition of a new telescope which is more sensitive than its predecessors and capable of collecting a huge amount of images. In this context, not all data collected by the telescope can be retained because of storage limitations. Therefore, a new real-time trigger system must be designed in order to select interesting events on the fly. The purpose of this thesis was to propose a trigger solution to efficiently discriminate the events (images) captured by the telescope. The first part of this thesis was to develop pattern recognition algorithms to be implemented within the trigger; a processing chain based on neural networks and Zernike moments has been validated. The second part of the thesis focused on the implementation of the proposed algorithms on an FPGA target, taking into account the application constraints in terms of resources and execution time. (author)
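
    Zernike moments, the shape descriptors used in the processing chain above, project an image onto an orthogonal basis over the unit disk: Z_nm = (n+1)/pi * sum of f * conj(V_nm), with V_nm(rho, theta) = R_nm(rho) * exp(i*m*theta). A discretized sketch follows; the test image is a synthetic uniform disk, not a Cherenkov shower image.

```python
import cmath, math

# Zernike-moment sketch on a pixel grid mapped to the unit disk. For a
# uniform disk image, Z_00 is ~1 and Z_11 vanishes by symmetry.

def radial_poly(n, m, rho):
    m = abs(m)
    return sum((-1) ** k * math.factorial(n - k)
               / (math.factorial(k)
                  * math.factorial((n + m) // 2 - k)
                  * math.factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))

def zernike_moment(image, n, m):
    size = len(image)
    cell = (2.0 / size) ** 2                 # pixel area in unit-disk coordinates
    total = 0j
    for iy, row in enumerate(image):
        for ix, f in enumerate(row):
            x = -1.0 + (ix + 0.5) * 2.0 / size
            y = -1.0 + (iy + 0.5) * 2.0 / size
            rho, theta = math.hypot(x, y), math.atan2(y, x)
            if rho <= 1.0:
                v = radial_poly(n, m, rho) * cmath.exp(1j * m * theta)
                total += f * v.conjugate() * cell
    return (n + 1) / math.pi * total

size = 64
disk = [[1.0] * size for _ in range(size)]   # uniform disk image
z00 = zernike_moment(disk, 0, 0)
z11 = zernike_moment(disk, 1, 1)
print(abs(z00), abs(z11))                    # ~1 and ~0 up to discretization
```

    The magnitudes |Z_nm| are rotation invariant, which is what makes them convenient features for a shower-image classifier.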

  2. Engineering fluorescent proteins towards ultimate performances: lessons from the newly developed cyan variants

    International Nuclear Information System (INIS)

    Mérola, Fabienne; Erard, Marie; Fredj, Asma; Pasquier, Hélène

    2016-01-01

    New fluorescent proteins (FPs) are constantly discovered from natural sources, and submitted to intensive engineering based on random mutagenesis and directed evolution. However, most of these newly developed FPs fail to achieve all the performances required for their bioimaging applications. The design of highly optimised FP-based reporters, simultaneously displaying appropriate colour, multimeric state, chromophore maturation, brightness, photostability and environmental sensitivity will require a better understanding of the structural and dynamic determinants of FP photophysics. The recent development of cyan fluorescent proteins (CFPs) like mCerulean3, mTurquoise2 and Aquamarine brings a different view on these questions, as in this particular case, a step by step evaluation of critical mutations has been performed within a family of spectrally identical and evolutionary close variants. These efforts have led to CFPs with quantum yields close to unity, near single exponential emission decays, high photostability and complete insensitivity to pH, making them ideal choices as energy transfer donors in FRET and FLIM imaging applications. During this process, it was found that a proper amino-acid choice at only two positions (148 and 65) is sufficient to transform the performances of CFPs: with the help of structural and theoretical investigations, we rationalise here how these two positions critically control the CFP photophysics, in the context of FPs derived from the Aequorea victoria species. Today, these results provide a useful toolbox for upgrading the different CFP donors carried by FRET biosensors. They also trace the route towards the de novo design of FP-based optogenetic devices that will be perfectly tailored to dedicated imaging and sensing applications. (topical review)

  3. A New Hybrid Whale Optimizer Algorithm with Mean Strategy of Grey Wolf Optimizer for Global Optimization

    Directory of Open Access Journals (Sweden)

    Narinder Singh

    2018-03-01

    The quest for an efficient nature-inspired optimization technique has continued over the last few decades. In this paper, a hybrid nature-inspired optimization technique is proposed. The hybrid algorithm is constructed from the Mean Grey Wolf Optimizer (MGWO) and the Whale Optimizer Algorithm (WOA). We utilized the spiral equation of the Whale Optimizer Algorithm for two procedures in the Hybrid Approach GWO (HAGWO) algorithm: (i) in the Grey Wolf Optimizer algorithm, to balance the exploitation and exploration processes in the new hybrid approach; and (ii) across the whole population, to avoid premature convergence and trapping in local minima. The feasibility and effectiveness of the hybrid algorithm have been tested by solving some standard benchmarks (XOR, Balloon, Iris, Breast Cancer, Welded Beam Design and Pressure Vessel Design problems) and comparing the results with those obtained through other metaheuristics. The solutions show that the new hybrid variant has stronger stability, a faster convergence rate and higher computational accuracy than other nature-inspired metaheuristics on most of the problems, and can successfully solve real-world constrained nonlinear optimization problems.
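
    The spiral equation borrowed from the Whale Optimization Algorithm moves a search agent along a logarithmic spiral around the best solution found so far: X_new = |X* - X| * exp(b*l) * cos(2*pi*l) + X*, with l drawn from [-1, 1]. The sketch below uses it alone on a 1-D sphere function; the full HAGWO would combine it with the grey-wolf update equations.

```python
import math, random

# WOA spiral-update sketch: candidates spiral around the elite solution X*,
# and the elite is replaced only when a candidate improves on it.

random.seed(7)
b = 1.0                                   # spiral shape constant

def sphere(x):
    return x * x

best = 4.0                                # initial best solution (hypothetical)
x = -3.0                                  # current search agent
for _ in range(300):
    l = random.uniform(-1.0, 1.0)
    x = abs(best - x) * math.exp(b * l) * math.cos(2 * math.pi * l) + best
    if sphere(x) < sphere(best):          # elitism: best never worsens
        best = x

print("best found:", best)
```

    Because the distance term |X* - X| shrinks as the agent closes in on the elite, the same equation provides wide exploration early on and fine exploitation later.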

  4. Thermodynamic analysis of refrigerant mixtures for possible replacements for CFCs by an algorithm compiling property data

    International Nuclear Information System (INIS)

    Arcaklioglu, Erol; Cavusoglu, Abdullah; Erisen, Ali

    2006-01-01

    In this study, we formed an algorithm to find refrigerant mixtures of equal volumetric cooling capacity (VCC) compared to CFC-based refrigerants in vapor compression refrigeration systems. To achieve this aim, the point properties of the refrigerants are obtained from REFPROP where appropriate. We used replacement mixture ratios, of varying mass percentages, suggested by various authors, along with our newly formed mixture ratios. In other words, we examined the effect of changing the mass percentages of the replacement refrigerants suggested in the literature on the VCC of the cooling system. Secondly, we used this algorithm to calculate the coefficient of performance (COP) of the same refrigeration system. This provided the ability to compare the COP of the suggested refrigerant mixtures and of our newly formed mixture ratios with that of the conventional CFC-based ones. According to our results, the R290/R600a (56/44) mixture for R12, the R32/R125/R134a (32.5/5/62.5) mixture for R22, and the R32/R125/R134a (43/5/52) mixture for R502 are appropriate and can be used as replacements.
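
    The two figures of merit compared above follow directly from the cycle state points: COP = (h1 - h4) / (h2 - h1) and VCC = (h1 - h4) / v1, where h1..h4 are the enthalpies at the evaporator outlet, compressor outlet, condenser outlet and evaporator inlet, and v1 is the suction specific volume. The state-point values below are hypothetical placeholders for numbers that REFPROP would supply.

```python
# Vapor-compression cycle figures of merit from state-point properties.
# All enthalpy and specific-volume values are hypothetical, for illustration.

def cop(h1, h2, h4):
    # refrigeration effect over compressor work
    return (h1 - h4) / (h2 - h1)

def vcc(h1, h4, v1):
    # cooling capacity per unit of swept suction volume
    return (h1 - h4) / v1          # kJ/m^3 if h in kJ/kg and v1 in m^3/kg

h1, h2, h4 = 400.0, 440.0, 250.0   # kJ/kg, hypothetical
v1 = 0.065                          # m^3/kg, hypothetical

print(f"COP = {cop(h1, h2, h4):.2f}")
print(f"VCC = {vcc(h1, h4, v1):.0f} kJ/m^3")
```

    Matching the VCC of the replacement mixture to that of the CFC baseline keeps the compressor displacement unchanged, which is why the algorithm screens candidate mass fractions on VCC first and compares COP second.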

  5. An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts.

    Science.gov (United States)

    Jiang, Shouyong; Yang, Shengxiang

    2016-02-01

    The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail, a sharp peak or disconnected regions, which significantly degrades the performance of MOEA/D. This paper proposes an improved MOEA/D for handling this kind of complex problem. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handling unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on some existing benchmark and newly designed MOPs with complex POF shapes in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.

  6. Development and image quality assessment of a contrast-enhancement algorithm for display of digital chest radiographs

    International Nuclear Information System (INIS)

    Rehm, K.

    1992-01-01

    This dissertation presents a contrast-enhancement algorithm, Artifact-Suppressed Adaptive Histogram Equalization (ASAHE). This algorithm was developed as part of a larger effort to replace the film radiographs currently used in radiology departments with digital images. Among the expected benefits of digital radiology are improved image management and greater diagnostic accuracy. Film radiographs record X-ray transmission data at high spatial resolution and with a wide dynamic range of signal. Current digital radiography systems record an image at reduced spatial resolution and with coarse sampling of the available dynamic range. These reductions have a negative impact on diagnostic accuracy. The contrast-enhancement algorithm presented in this dissertation is designed to boost the diagnostic accuracy of radiologists using digital images. The ASAHE algorithm is an extension of an earlier technique called Adaptive Histogram Equalization (AHE). The AHE algorithm is unsuitable for chest radiographs because it over-enhances noise and introduces boundary artifacts. The modifications incorporated in ASAHE suppress the artifacts and allow processing of chest radiographs. This dissertation describes the psychophysical methods used to evaluate the effects of processing algorithms on human observer performance. An experiment conducted with anthropomorphic phantoms and simulated nodules showed the ASAHE algorithm to be superior for human detection of nodules when compared to the algorithm of a computed radiography system in current use. An experiment conducted using clinical images demonstrating pneumothoraces (partial lung collapse) indicated no difference in human observer accuracy when ASAHE images were compared to computed radiography images, but greater ease of diagnosis when ASAHE images were used. These results provide evidence that Artifact-Suppressed Adaptive Histogram Equalization can be effective in increasing diagnostic accuracy and efficiency.
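
    The remapping that AHE and ASAHE build on is ordinary histogram equalization: intensities are redistributed through the cumulative histogram so the output uses the full gray scale. Only the basic global remap is sketched below, on a tiny 8-bit image; AHE applies it in local windows, and ASAHE adds the artifact suppression described above.

```python
# Global histogram equalization sketch: remap each gray level p to
# round((cdf(p) - cdf_min) / (N - cdf_min) * (levels - 1)).

def equalize(image, levels=256):
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)   # first non-zero CDF value

    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [[remap(p) for p in row] for row in image]

image = [[52, 55, 61], [59, 79, 61], [85, 76, 62]]   # low-contrast patch
out = equalize(image)
print(out)
```

    The low-contrast input, spanning only levels 52-85, is stretched across the full 0-255 range; applying the same remap per local window is what over-enhances noise in flat lung fields and motivates ASAHE's suppression step.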

  7. Development and Comparative Study of Effects of Training Algorithms on Performance of Artificial Neural Network Based Analog and Digital Automatic Modulation Recognition

    Directory of Open Access Journals (Sweden)

    Jide Julius Popoola

    2015-11-01

    This paper proposes two new classifiers that automatically recognise twelve combined analog and digital modulated signals without any a priori knowledge of the modulation schemes or the modulation parameters. The classifiers are developed using a pattern recognition approach. Feature keys extracted from the instantaneous amplitude, the instantaneous phase and the spectrum symmetry of the simulated signals are used as inputs to the artificial neural network employed in developing the classifiers. The two developed classifiers are trained using the scaled conjugate gradient (SCG) and conjugate gradient (CONJGRAD) training algorithms. Sample results of the two classifiers show good recognition performance, with an average overall recognition rate above 99.50% at signal-to-noise ratio (SNR) values from 0 dB upwards for both training algorithms, and average overall recognition rates slightly above 99.00% and 96.40% at -5 dB SNR for the SCG and CONJGRAD training algorithms, respectively. The comparative performance evaluation of the two developed classifiers shows that the two training algorithms have different effects on both the response rate and the efficiency of the two developed artificial neural network classifiers. In addition, an evaluation of the overall success recognition rates between the two classifiers developed in this study using the pattern recognition approach and a classifier reported in the surveyed literature using the decision-theoretic approach shows that the classifiers developed in this study perform favourably with regard to accuracy and performance probability.
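
    One widely used instantaneous-amplitude feature key of the kind described above is the standard deviation of the normalized-centered envelope, which separates amplitude-modulated signals from constant-envelope ones such as PSK or FM. In this sketch the complex baseband samples are generated directly so the envelope is simply abs(s); the parameter values are illustrative, not those of the paper.

```python
import math

# Envelope feature sketch: sigma_aa, the standard deviation of the
# normalized-centered instantaneous amplitude, is large for AM and ~0 for
# constant-envelope modulations.

def sigma_aa(samples):
    env = [abs(s) for s in samples]
    mean_env = sum(env) / len(env)
    norm = [e / mean_env - 1.0 for e in env]          # normalized, centered
    return math.sqrt(sum(v * v for v in norm) / len(norm))

N, fc, fm, m = 1000, 0.1, 0.005, 0.5
am  = [(1 + m * math.cos(2 * math.pi * fm * n))
       * complex(math.cos(2 * math.pi * fc * n), math.sin(2 * math.pi * fc * n))
       for n in range(N)]
psk = [complex(math.cos(2 * math.pi * fc * n + math.pi * ((n // 50) % 2)),
               math.sin(2 * math.pi * fc * n + math.pi * ((n // 50) % 2)))
       for n in range(N)]

print("sigma_aa AM  =", round(sigma_aa(am), 3))    # clearly > 0
print("sigma_aa PSK =", round(sigma_aa(psk), 3))   # ~ 0
```

    Feature keys like this one, together with phase- and spectrum-based keys, form the input vector that the neural network classifier is trained on.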

  8. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution algorithm (DE) and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
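
    The core mutation/crossover/selection loop of differential evolution is the same regardless of the application; the paper's DE adapts it to discrete nurse rosters with constraint handling, while the sketch below shows only the canonical DE/rand/1/bin loop on a continuous test function.

```python
import random

# Minimal differential evolution (DE/rand/1/bin) sketch: mutate with a scaled
# difference of two random vectors, crossover with the parent, keep whichever
# scores better (greedy selection).

random.seed(3)

def sphere(x):
    return sum(xi * xi for xi in x)

DIM, NP, F, CR, GENS = 2, 20, 0.7, 0.9, 150
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]

for _ in range(GENS):
    for i in range(NP):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = random.randrange(DIM)         # guarantee one mutated gene
        trial = [a[d] + F * (b[d] - c[d])
                 if (random.random() < CR or d == j_rand) else pop[i][d]
                 for d in range(DIM)]
        if sphere(trial) <= sphere(pop[i]):    # greedy selection
            pop[i] = trial

best = min(pop, key=sphere)
print("best value:", sphere(best))
```

    For a roster, the objective function would instead score shift-assignment matrices against coverage constraints and nurse preferences.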

  9. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  10. Status of photovoltaics in the Newly Associated States

    International Nuclear Information System (INIS)

    Pietruszko, S.M.; Mikolajuk, A.; Fara, L.; Fara, S.; Vitanov, P.; Stratieva, N.; Rehak, J.; Barinka, R.; Mellikov, E.; Palfy, M.; Shipkovs, P.; Krotkus, A.; Saly, V.; Nemac, F.; Swens, J.; Nowak, S.; Zachariou, A.; Fechner, H.; Passiniemi, P.

    2004-01-01

    The Status of Photovoltaics in the Central and Eastern Europe presents the state of the art of photovoltaics (PV) in the Newly Associated States (NAS): Bulgaria, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia. The attempt was made to cover all photovoltaics activities in NAS, from research to industry and markets as well as from technology development to dissemination and education. The document covers the following topics and issues: organization of PV research and demonstration activities, stakeholders involved in research and technology development (RTD), scientific potential of NAS PV community, PV activities carried out in NAS countries, PV policies and support mechanisms, achievements and barriers, challenges and needs to the development of PV in the NAS. (authors)

  11. The Activity of Carbohydrate-Degrading Enzymes in the Development of Brood and Newly Emerged Workers and Drones of the Carniolan Honeybee, Apis mellifera carnica

    OpenAIRE

    Żółtowska, Krystyna; Lipiński, Zbigniew; Łopieńska-Biernat, Elżbieta; Farjan, Marek; Dmitryjuk, Małgorzata

    2012-01-01

    The activity of glycogen phosphorylase and of the carbohydrate-hydrolyzing enzymes α-amylase, glucoamylase, trehalase, and sucrase was studied during the development of the Carniolan honey bee, Apis mellifera carnica Pollman (Hymenoptera: Apidae), from newly hatched larva to freshly emerged imago of worker and drone. Phosphorolytic degradation of glycogen was significantly stronger than hydrolytic degradation in all developmental stages. Developmental profiles of hydrolase activity were similar in both ...

  12. Development of an algorithm for X-ray exposures using the Panasonic UD-802A thermoluminescent dosemeter

    International Nuclear Information System (INIS)

    McKittrick, Leo; Currivan, Lorraine; Pollard, David; Nicholls, Colyn; Romero, A.M.; Palethorpe, Jeffrey

    2008-01-01

    Full text: As part of its continuous quality improvement, the Dosimetry Service of the Radiological Protection Institute of Ireland (RPII), in conjunction with Panasonic Industrial Europe (UK), has further investigated the use of the standard Panasonic algorithm for X-ray exposures with the Panasonic UD-802A TL dosemeter. Originally developed to satisfy the obsolete standard ANSI 13.11-1983, the standard Panasonic dose algorithm has undergone several revisions to meet later standards such as HPS N13.11-2001. This paper presents a dose algorithm that corrects the dose response of a four-element TL dosemeter at low energies, such as X-ray radiation, by exploiting the behaviour of its two different independent phosphors. A series of irradiations over a range of energies, from N-20 up to Co-60, was carried out, with our particular interest being the responses to X-ray irradiations. Irradiations were performed at: RRPPS, University Hospital Birmingham NHS Foundation Trust, U.K.; HPA, U.K.; and CIEMAT, Madrid, Spain. Different irradiation conditions were employed, including X-rays from the narrow and wide spectra described by ISO 4037-1 (1996), on an ISO water slab phantom and a PMMA slab phantom respectively. Using the UD-802A TLD and UD-854AT hanger combination, the response data from the series of irradiations were used to validate and, if necessary, modify the photon/beta branches of the algorithm to: 1. best estimate Hp(10) and Hp(0.07); 2. provide information on irradiation energies; 3. allow verification by performance tests. This work further advances the algorithm developed at CIEMAT, whereby a best-fit polynomial trend is applied to the dose response variations between the independent phosphors. (author)

  13. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    Science.gov (United States)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High performance encryption algorithms have been developed and implemented in software and hardware, and many methods of attacking cipher texts have been developed as well. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of cipher texts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and also of using a three-stage pipeline (with four main blocks: Input data, AES Core, Key generator, Output data) to provide fast encryption and storage/transmission of a large amount of data.
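
    As a sketch of the idea of using a genetic algorithm as a key sequence generator, the toy GA below evolves byte keys toward balanced bit counts. The fitness function is a hypothetical illustration with no cryptographic strength, and it is not the scheme of the paper:

```python
import random

def evolve_key(key_len=16, pop_size=30, generations=60, seed=1):
    """Toy GA evolving byte keys toward balanced bit counts (a crude
    entropy proxy). Illustrative only: NOT cryptographically sound."""
    rng = random.Random(seed)

    def fitness(key):  # best when half of all key bits are ones
        ones = sum(bin(b).count("1") for b in key)
        return -abs(ones - 4 * key_len)

    pop = [[rng.randrange(256) for _ in range(key_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, key_len)
            child = p1[:cut] + p2[cut:]  # one-point crossover
            if rng.random() < 0.2:  # mutation: flip one random bit
                i = rng.randrange(key_len)
                child[i] ^= 1 << rng.randrange(8)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

key = evolve_key()  # a 16-byte key candidate
```

    In an AES pipeline of the kind described, each evolved key would seed one round of the multiple key sequence; a production system would of course use a vetted key derivation function instead.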

  14. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions of the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  15. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    International Nuclear Information System (INIS)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun

    2013-01-01

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative positions of the camera and the robot are unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  16. Three-dimensional quantification of cardiac surface motion: a newly developed three-dimensional digital motion-capture and reconstruction system for beating heart surgery.

    Science.gov (United States)

    Watanabe, Toshiki; Omata, Sadao; Odamura, Motoki; Okada, Masahumi; Nakamura, Yoshihiko; Yokoyama, Hitoshi

    2006-11-01

    This study aimed to evaluate our newly developed 3-dimensional digital motion-capture and reconstruction system in an animal experiment setting and to characterize quantitatively the three regional cardiac surface motions, in the left anterior descending artery, right coronary artery, and left circumflex artery, before and after stabilization using a stabilizer. Six pigs underwent a full sternotomy. Three tiny metallic markers (diameter 2 mm) coated with a reflective material were attached on three regional cardiac surfaces (left anterior descending, right coronary, and left circumflex coronary artery regions). These markers were captured by two high-speed digital video cameras (955 frames per second) as 2-dimensional coordinates and reconstructed to 3-dimensional data points (about 480 xyz-position data per second) by a newly developed computer program. The remaining motion after stabilization ranged from 0.4 to 1.01 mm at the left anterior descending, 0.91 to 1.52 mm at the right coronary artery, and 0.53 to 1.14 mm at the left circumflex regions. Significant differences before and after stabilization were evaluated in maximum moving velocity (left anterior descending 456.7 +/- 178.7 vs 306.5 +/- 207.4 mm/s; right coronary artery 574.9 +/- 161.7 vs 446.9 +/- 170.7 mm/s; left circumflex 578.7 +/- 226.7 vs 398.9 +/- 192.6 mm/s; P heart surface movement. This helps us better understand the complexity of the heart, its motion, and the need for developing a better stabilizer for beating heart surgery.
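
    The two-camera reconstruction step described above can be sketched as ray triangulation: each camera observation defines a 3-D ray, and the marker position is estimated as the midpoint of the closest points of the two rays. The rays below are hypothetical, and a real system would first map 2-D pixel coordinates to rays via camera calibration:

```python
def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest points on two rays r_i(t) = p_i + t*d_i:
    a minimal two-view reconstruction of a 3-D marker position."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p + t1 * v for p, v in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + t2 * v for p, v in zip(p2, d2)]  # closest point on ray 2
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Hypothetical rays from two cameras intersecting at (1, 1, 1)
marker = triangulate([0, 0, 0], [1, 1, 1], [2, 0, 0], [-1, 1, 1])
```

    Running this reconstruction at the frame rates quoted in the abstract (hundreds of frames per second) yields the xyz trajectory used to compute moving velocities.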

  17. Algorithm and simulation development in support of response strategies for contamination events in air and water systems.

    Energy Technology Data Exchange (ETDEWEB)

    Waanders, Bart Van Bloemen

    2006-01-01

    Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be efficiently used to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be sufficiently improved to achieve real-time performance? This report presents the results of a three-year algorithm and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) an identification process for contamination events using sparse observations, (3) characterization of uncertainty through accurate demand forecasts and through investigating uncertain simulation model parameters, (4) risk assessment capabilities, and (5) reduced order modeling methods. The development effort was focused on water distribution systems, large internal facilities, and outdoor areas.
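
    Question (1) above, sensor placement, is often cast as a maximum-coverage problem, for which a greedy heuristic carries a classical (1 - 1/e) approximation guarantee. A minimal sketch over hypothetical network junctions and detectable contamination scenarios:

```python
def greedy_sensor_placement(coverage, k):
    """Greedy max-coverage: pick up to k candidate nodes that together
    detect the most contamination scenarios.
    coverage maps node id -> set of scenario ids that node can detect."""
    chosen, covered = [], set()
    for _ in range(k):
        # node adding the most not-yet-covered scenarios
        best = max(coverage, key=lambda n: len(coverage[n] - covered))
        if not coverage[best] - covered:
            break  # no remaining sensor adds new coverage
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical junction -> detectable-scenario sets
cov = {"J1": {1, 2, 3}, "J2": {3, 4}, "J3": {4, 5, 6}, "J4": {1, 6}}
picks, covered = greedy_sensor_placement(cov, 2)
```

    In a water-distribution setting, the scenario sets would come from forward simulations of injections at candidate intrusion points.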

  18. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Martin-Haugh, Stewart

    2014-01-01

    A description of the algorithms and the performance of the ATLAS Inner Detector trigger for LHC Run 1 is presented, as well as prospects for a redesign of the tracking algorithms in Run 2. The Inner Detector trigger algorithms are vital for many trigger signatures at ATLAS. The performance of the algorithms for electrons is presented. The ATLAS trigger software will be restructured from two software levels into a single stage, which poses a big challenge for the trigger algorithms in terms of execution time and maintaining the physics performance. Expected future improvements in the timing and efficiencies of the Inner Detector triggers are discussed, utilising the planned merging of the current two stages of the ATLAS trigger.

  19. Development and performance analysis of a lossless data reduction algorithm for voip

    International Nuclear Information System (INIS)

    Misbahuddin, S.; Boulejfen, N.

    2014-01-01

    VoIP (Voice over IP) is becoming an alternative means of voice communication over the Internet. To better utilize voice call bandwidth, some standard compression algorithms are applied in VoIP systems. However, these algorithms affect the voice quality at high compression ratios. This paper presents a lossless data reduction technique to improve the VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in digitized VFs (Voice Frames) generated by VoIP systems. The performance of the proposed data reduction algorithm is presented in terms of compression ratio. The proposed algorithm helps retain the voice quality along with the improvement in VoIP data transfer rates. (author)
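
    One simple way to exploit redundancy in digitized voice frames losslessly (not necessarily the authors' scheme) is delta encoding followed by run-length encoding, since consecutive PCM samples change slowly:

```python
def compress_frame(frame):
    """Delta-encode PCM samples, then run-length-encode the deltas.
    Lossless: decompress_frame() reproduces the input exactly."""
    deltas = [frame[0]] + [frame[i] - frame[i - 1] for i in range(1, len(frame))]
    pairs = []  # [delta, run length]
    for d in deltas:
        if pairs and pairs[-1][0] == d:
            pairs[-1][1] += 1
        else:
            pairs.append([d, 1])
    return pairs

def decompress_frame(pairs):
    """Expand the runs and integrate the deltas back into samples."""
    deltas = [d for d, n in pairs for _ in range(n)]
    frame, acc = [], 0
    for d in deltas:
        acc += d
        frame.append(acc)
    return frame

# A hypothetical 8-sample voice frame with slowly varying samples
frame = [100, 100, 100, 101, 102, 102, 102, 102]
packed = compress_frame(frame)
```

    The compression ratio achieved depends entirely on how repetitive the deltas are; silent or steady passages compress best, which matches the abstract's appeal to redundancy in voice frames.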

  20. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    Science.gov (United States)

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), whereas root mean square error of prediction (RMSEP) was used as a fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, whereas GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set out of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
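
    A toy version of GA-driven descriptor selection: fitness is the RMSE of an ordinary least-squares fit, standing in for the PLS/RMSEP criterion of the abstract, and the synthetic data are hypothetical, with the response depending on descriptors 0, 2 and 5 only:

```python
import random

def lstsq_rmse(X, y, cols):
    """RMSE of an ordinary least-squares fit on the chosen descriptor
    columns, via the normal equations (fine for small subsets)."""
    k, m = len(cols), len(X)
    A = [[sum(X[r][ci] * X[r][cj] for r in range(m)) for cj in cols] for ci in cols]
    b = [sum(X[r][ci] * y[r] for r in range(m)) for ci in cols]
    for p in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    coef = [0.0] * k
    for p in range(k - 1, -1, -1):
        coef[p] = (b[p] - sum(A[p][c] * coef[c] for c in range(p + 1, k))) / A[p][p]
    sq = sum((y[r] - sum(coef[c] * X[r][cols[c]] for c in range(k))) ** 2
             for r in range(m))
    return (sq / m) ** 0.5

def ga_select(X, y, n_keep=3, pop_size=20, generations=40, seed=0):
    """Toy GA choosing descriptor subsets that minimize the fit RMSE."""
    rng = random.Random(seed)
    n = len(X[0])
    fit = lambda cols: lstsq_rmse(X, y, cols)
    pop = [sorted(rng.sample(range(n), n_keep)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fit)
        keep = pop[: pop_size // 2]  # elitist selection
        children = []
        while len(children) < pop_size - len(keep):
            p1, p2 = rng.sample(keep, 2)
            pool = sorted(set(p1) | set(p2))  # crossover: sample from union
            child = sorted(rng.sample(pool, n_keep))
            if rng.random() < 0.3:  # mutation: swap in an unused descriptor
                i = rng.randrange(n_keep)
                child[i] = rng.choice([j for j in range(n) if j not in child])
            children.append(sorted(child))
        pop = keep + children
    return min(pop, key=fit)

# Hypothetical data: 50 peptides, 8 descriptors, noiseless response
rnd = random.Random(7)
X = [[rnd.gauss(0, 1) for _ in range(8)] for _ in range(50)]
y = [2 * row[0] - row[2] + 0.5 * row[5] for row in X]
best = ga_select(X, y)
```

    Replacing the least-squares fitness with a cross-validated PLS RMSEP, as in the paper, changes only the `fit` function; the subset-search machinery is identical.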

  1. Performances of new reconstruction algorithms for CT-TDLAS (computer tomography-tunable diode laser absorption spectroscopy)

    International Nuclear Information System (INIS)

    Jeon, Min-Gyu; Deguchi, Yoshihiro; Kamimoto, Takahiro; Doh, Deog-Hee; Cho, Gyeong-Rae

    2017-01-01

    Highlights: • The measured data were successfully used for generating absorption spectra. • Four different reconstruction algorithms, ART, MART, SART and SMART, were evaluated. • Convergence was fastest for the SMART algorithm. • SMART was the most reliable algorithm for reconstructing the multiple signals. - Abstract: The recent advent of tunable lasers has made it possible to measure temperature and concentration fields of gases simultaneously. CT-TDLAS (computed tomography-tunable diode laser absorption spectroscopy) is one of the leading techniques for such measurements. In CT-TDLAS, the accuracy of the measurement results depends strongly upon the reconstruction algorithm. In this study, four different reconstruction algorithms were tested numerically using experimental data sets measured by thermocouples in combustion fields. Three reconstruction algorithms, the MART (multiplicative algebraic reconstruction technique), SART (simultaneous algebraic reconstruction technique) and SMART (simultaneous multiplicative algebraic reconstruction technique) algorithms, are newly proposed for CT-TDLAS in this study. The calculation results obtained with these three algorithms were compared with those of the previous algorithm, ART (algebraic reconstruction technique). Phantom data sets were generated from thermocouple data obtained in an actual experiment. Data from the Harvard HITRAN table, which lists the thermodynamic properties and the absorption spectrum of H_2O, were used for the numerical test. The reconstructed temperature and concentration fields were compared with the original HITRAN data, through which the reconstruction methods were validated. The performances of the four reconstruction algorithms were demonstrated. This method is expected to enhance the practicality of CT-TDLAS.
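
    The additive ART family named above can be sketched as Kaczmarz row-action updates on a discretized field: each ray measurement defines a hyperplane, and the estimate is projected onto the hyperplanes in turn. The two-cell phantom and ray weights below are hypothetical:

```python
def art_reconstruct(A, b, iterations=200, relax=1.0):
    """Additive ART (Kaczmarz) sketch. x is the discretized field,
    A[i] holds the path weights of ray i, b[i] its measured absorbance."""
    x = [0.0] * len(A[0])
    for _ in range(iterations):
        for row, meas in zip(A, b):
            norm = sum(w * w for w in row)
            if norm == 0.0:
                continue  # ray misses every cell
            # project x onto the hyperplane row . x = meas
            err = (meas - sum(w * v for w, v in zip(row, x))) / norm
            x = [v + relax * err * w for v, w in zip(x, row)]
    return x

# Two-cell phantom probed by three hypothetical rays
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
true_x = [0.3, 0.7]
b = [sum(w * v for w, v in zip(row, true_x)) for row in A]
x = art_reconstruct(A, b)
```

    The multiplicative variants (MART, SMART) evaluated in the paper replace the additive correction with a multiplicative one, which keeps the reconstructed field nonnegative, a natural constraint for concentrations.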

  2. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

    Science.gov (United States)

    Gulshan, Varun; Peng, Lily; Coram, Marc; Stumpe, Martin C; Wu, Derek; Narayanaswamy, Arunachalam; Venugopalan, Subhashini; Widner, Kasumi; Madams, Tom; Cuadros, Jorge; Kim, Ramasamy; Raman, Rajiv; Nelson, Philip C; Mega, Jessica L; Webster, Dale R

    2016-12-13

    Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128 175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. Deep learning-trained algorithm. The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0
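
    Selecting an operating point from a development set, as described above, amounts to choosing a score threshold that meets a target specificity (or sensitivity). A minimal sketch with hypothetical scores, not the study's data:

```python
def pick_operating_point(scores, labels, target_specificity=0.98):
    """Smallest score threshold whose specificity on a development set
    meets the target; returns the threshold and sensitivity there."""
    negatives = sorted(s for s, l in zip(scores, labels) if l == 0)
    idx = min(int(target_specificity * len(negatives)), len(negatives) - 1)
    threshold = negatives[idx]  # scores above it are called positive
    tp = sum(1 for s, l in zip(scores, labels) if l == 1 and s > threshold)
    return threshold, tp / sum(labels)

# Hypothetical development-set scores (label 1 = referable disease)
scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1]
thr, sens = pick_operating_point(scores, labels, target_specificity=0.8)
```

    Freezing the threshold on the development set and then reporting sensitivity and specificity on held-out validation sets, as the study does, avoids optimistic bias in the reported operating characteristics.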

  3. Testing a polarimetric cloud imager aboard research vessel Polarstern: comparison of color-based and polarimetric cloud detection algorithms.

    Science.gov (United States)

    Barta, András; Horváth, Gábor; Horváth, Ákos; Egri, Ádám; Blahó, Miklós; Barta, Pál; Bumke, Karl; Macke, Andreas

    2015-02-10

    Cloud cover estimation is an important part of routine meteorological observations. Cloudiness measurements are used in climate model evaluation, nowcasting solar radiation, parameterizing the fluctuations of sea surface insolation, and building energy transfer models of the atmosphere. Currently, the most widespread ground-based method to measure cloudiness is based on analyzing the unpolarized intensity and color distribution of the sky obtained by digital cameras. As a new approach, we propose that cloud detection can be aided by the additional use of skylight polarization measured by 180° field-of-view imaging polarimetry. In the fall of 2010, we tested such a novel polarimetric cloud detector aboard the research vessel Polarstern during expedition ANT-XXVII/1. One of our goals was to test the durability of the measurement hardware under the extreme conditions of a trans-Atlantic cruise. Here, we describe the instrument and compare the results of several different cloud detection algorithms, some conventional and some newly developed. We also discuss the weaknesses of our design and its possible improvements. The comparison with cloud detection algorithms developed for traditional nonpolarimetric full-sky imagers allowed us to evaluate the added value of polarimetric quantities. We found that (1) neural-network-based algorithms perform the best among the investigated schemes and (2) global information (the mean and variance of intensity), nonoptical information (e.g., sun-view geometry), and polarimetric information (e.g., the degree of polarization) improve the accuracy of cloud detection, albeit slightly.

  4. Development of a 3D muon disappearance algorithm for muon scattering tomography

    Science.gov (United States)

    Blackwell, T. B.; Kudryavtsev, V. A.

    2015-05-01

    Upon passing through a material, muons lose energy, scatter off nuclei and atomic electrons, and can stop in the material. Muons will more readily lose energy in higher density materials. Therefore multiple muon disappearances within a localized volume may signal the presence of high-density materials. We have developed a new technique that improves the sensitivity of standard muon scattering tomography. This technique exploits these muon disappearances to perform non-destructive assay of an inspected volume. Muons that disappear have their track evaluated using a 3D line extrapolation algorithm, which is in turn used to construct a 3D tomographic image of the inspected volume. Results of Monte Carlo simulations that measure muon disappearance in different types of target materials are presented. The ability to differentiate between different density materials using the 3D line extrapolation algorithm is established. Finally the capability of this new muon disappearance technique to enhance muon scattering tomography techniques in detecting shielded HEU in cargo containers has been demonstrated.
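
    A 3-D line extrapolation of the kind described can be sketched by fitting x(z) and y(z) to the recorded hits by least squares and extending the fitted line into the inspected volume. The hit coordinates below are hypothetical:

```python
def fit_track(hits):
    """Least-squares straight-line fit x(z) = ax*z + bx, y(z) = ay*z + by
    through detector hits (x, y, z); returns an extrapolation function."""
    zs = [h[2] for h in hits]
    n, sz, szz = len(hits), sum(zs), sum(z * z for z in zs)
    det = n * szz - sz * sz  # nonzero when hits span at least two z planes

    def slope_intercept(vals):
        sv, szv = sum(vals), sum(z * v for z, v in zip(zs, vals))
        return (n * szv - sz * sv) / det, (szz * sv - sz * szv) / det

    ax, bx = slope_intercept([h[0] for h in hits])
    ay, by = slope_intercept([h[1] for h in hits])
    return lambda z: (ax * z + bx, ay * z + by, z)

# Hypothetical hits along a straight muon track (x = 0.5 z + 1, y = -0.2 z)
hits = [(1.0, 0.0, 0.0), (2.0, -0.4, 2.0), (3.0, -0.8, 4.0)]
extrapolate = fit_track(hits)
point = extrapolate(10.0)
```

    Voxels where many such extrapolated tracks terminate without a matching exit track accumulate "disappearance" counts, which is the signal the technique images.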

  5. A Quantification of the 3D Modeling Capabilities of the KinectFusion Algorithm

    Science.gov (United States)

    2014-03-27

    experiment, several tests of the experiment setup were run with the Kinect for Xbox 360 sensor, the only sensor on-hand at the start of the testing phase. As...pairing the KinectFusion algorithm with a higher fidelity sensor, such as a Light Detection and Ranging (LiDaR) or the newly released Xbox One Kinect...or three-fold improvement still be possible with LiDaR or Xbox One data? 5.1.3 KinectFusion and Vicon Info. Another source of noise (or error) in the

  6. A model for mentoring newly-appointed nurse educators in nursing education institutions in South Africa.

    Science.gov (United States)

    Seekoe, Eunice

    2014-04-24

    South Africa transformed higher education through the enactment of the Higher Education Act (No. 101 of 1997). The researcher identified the need to develop a model for the mentoring of newly-appointed nurse educators in nursing education institutions in South Africa. The aim was to develop and describe a model for mentoring newly-appointed nurse educators in nursing education institutions in South Africa. A qualitative and theory-generating design was used (following empirical findings regarding needs analysis) in order to develop the model. The conceptualisation of the framework focused on the context, content, process and the theoretical domains that influenced the model. Ideas from different theories were borrowed and integrated with the literature, and deductive and inductive strategies were applied. The structure of the model is multidimensional and complex in nature (macro, meso and micro), based on the philosophy of reflective practice, competency-based practice and critical learning theories. The assumptions relate to stakeholders, context, mentoring, outcomes, process and dynamics. The stakeholders are the mentor and mentee within an interactive participatory relationship. The mentoring takes place within a process with a sequence of activities such as relationship building, development, engagement, reflective process and assessment. Capacity building and empowerment are outcomes of mentoring driven by motivation. The implication for nurse managers is that the model can be used to develop mentoring programmes for newly-appointed nurse educators.

  7. A Contribution to Nyquist-Rate ADC Modeling - Detailed Algorithm Description

    Directory of Open Access Journals (Sweden)

    J. Zidek

    2012-04-01

    Full Text Available In this article, an innovative ADC modeling algorithm is described. It is well suited for Nyquist-rate ADC error back-annotation. This algorithm is the next step in building a support tool for IC design engineers. The inspiration for us was the work [2]. There, the ADC behavior is divided into separate, independent HCF (High Code Frequency) and LCF (Low Code Frequency) parts. This paper is based on the same concept, but the model coefficients are estimated in a different way, only from INL data. The HCF order recognition part was newly added as well. Thanks to this, the number of HCF coefficients is lower in comparison with the original Grimaldi's work (especially for converters with a low ratio between the HCF and "random" parts of INL). Modeling results are demonstrated on a real data set measured by ASICentrum on a charge-redistribution SAR ADC chip. Results are shown not only as coefficient values but also via the Model Coverage metrics. Model limitations are also discussed.

  8. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm that copes with increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method of weak signal detection in a noisy environment. The final expression of the information statistic is adjusted. We present the results of algorithm research and t...

  9. Comparison study of noise reduction algorithms in dual energy chest digital tomosynthesis

    Science.gov (United States)

    Lee, D.; Kim, Y.-S.; Choi, S.; Lee, H.; Choi, S.; Kim, H.-J.

    2018-04-01

    Dual energy chest digital tomosynthesis (CDT) is a recently developed medical technique that takes advantage of both tomosynthesis and dual energy X-ray images. However, quantum noise, which occurs in dual energy X-ray images, strongly interferes with diagnosis in various clinical situations. Therefore, noise reduction is necessary in dual energy CDT. In this study, noise-compensating algorithms, including a simple smoothing of high-energy images (SSH) and anti-correlated noise reduction (ACNR), were evaluated in a CDT system. We used a newly developed prototype CDT system and an anthropomorphic chest phantom for experimental studies. The resulting images demonstrated that dual energy CDT can selectively image anatomical structures, such as bone and soft tissue. Among the resulting images, those acquired with ACNR showed the best image quality. Both the coefficient of variation and the contrast-to-noise ratio (CNR) were the highest for ACNR among the three different dual energy techniques, and the CNR of bone was significantly improved compared to the reconstructed images acquired at a single energy. This study demonstrated the clinical value of dual energy CDT and quantitatively showed that ACNR is the most suitable among the three developed dual energy techniques, including standard log subtraction, SSH, and ACNR.
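
    ACNR exploits the fact that quantum noise in the two dual-energy material images is anti-correlated: adding a scaled high-pass component of the complementary image cancels part of the noise in the target image. A 1-D pixel-row sketch; the smoothing filter, weights and synthetic data are illustrative, not those of the paper:

```python
def smooth(img, k=3):
    """Moving average as a stand-in low-pass filter."""
    half = k // 2
    out = []
    for i in range(len(img)):
        win = img[max(0, i - half): i + half + 1]
        out.append(sum(win) / len(win))
    return out

def acnr(target, complement, alpha):
    """Anti-correlated noise reduction sketch: add a scaled high-pass
    (noise-like) component of the complementary image to the target."""
    noise = [c - s for c, s in zip(complement, smooth(complement))]
    return [t + alpha * n for t, n in zip(target, noise)]

# Synthetic flat images carrying perfectly anti-correlated alternating noise
e = 0.2
tissue = [10 + e * (-1) ** i for i in range(20)]
bone = [5 - e * (-1) ** i for i in range(20)]
corrected = acnr(tissue, bone, alpha=0.5)
```

    Because the added component is high-pass filtered, the anatomy (low-frequency content) of the complementary image does not leak into the corrected image; only its noise is recruited for cancellation.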

  10. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  11. Nonlinear inversion of resistivity sounding data for 1-D earth models using the Neighbourhood Algorithm

    Science.gov (United States)

    Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.

    2018-01-01

    To reduce ambiguity related to nonlinearities in the resistivity model-data relationships, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minimums, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, marginal probability density function and an ensemble of acceptable models. This provides new insights into how well the model parameters are constrained and makes assessing trade-offs between them possible, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors who employed different inversion methods, so as to provide a good basis for performance comparison. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods, and remarkably correlate with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to the linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of the layered resistivity structure.
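
    A simplified direct search in the spirit of the NA can be sketched as follows: keep the best models found so far and resample new ones inside shrinking boxes around them. The real NA resamples inside Voronoi cells of the model ensemble rather than boxes, and the quadratic misfit below is a hypothetical stand-in for a 1-D resistivity forward-model misfit:

```python
import random

def na_like_search(misfit, bounds, ns=20, nr=4, iterations=40,
                   shrink=0.85, seed=0):
    """Direct search loosely inspired by the Neighbourhood Algorithm:
    resample ns models per iteration inside shrinking neighbourhoods
    of the nr current best models (true NA uses Voronoi cells)."""
    rng = random.Random(seed)
    widths = [hi - lo for lo, hi in bounds]
    models = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(ns)]
    for it in range(iterations):
        models.sort(key=misfit)
        best = models[:nr]
        scale = shrink ** it  # neighbourhood size decays each iteration
        models = list(best)
        while len(models) < ns:
            centre = rng.choice(best)
            m = []
            for j, (lo, hi) in enumerate(bounds):
                v = centre[j] + rng.uniform(-0.5, 0.5) * widths[j] * scale
                m.append(min(max(v, lo), hi))  # clamp to the prior bounds
            models.append(m)
    return min(models, key=misfit)

# Toy misfit with a known optimum standing in for a resistivity misfit:
# layer resistivities 50 and 200 ohm-m, first-layer thickness 10 m
true = [50.0, 200.0, 10.0]
best = na_like_search(lambda m: sum((a - b) ** 2 for a, b in zip(m, true)),
                      [(10.0, 300.0), (10.0, 300.0), (1.0, 30.0)])
```

    In a real NA inversion the retained ensemble, not just the single best model, is what feeds the appraisal stage (posterior covariance and marginals) mentioned in the abstract.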

  12. RELATIONSHIP BETWEEN DIFFERENT ALLOGAMIC ASSOCIATED TRAIT CHARACTERISTICS OF THE FIVE NEWLY DEVELOPED CYTOPLASMIC MALE STERILE (CMS) LINES IN RICE

    Directory of Open Access Journals (Sweden)

    Nematzadeh GHORBAN ALI

    2006-10-01

    Full Text Available Five suitable maintainer varieties were identified through testcrosses with IR58025A, and the transfer of wild abortive cytoplasm was carried out by seven successive backcrosses. Five new CMS lines were developed by this approach in well adapted, high yielding improved varietal backgrounds such as ‘Nemat’, ‘Neda’, ‘Dasht’, ‘Amol3’ and ‘Champa’. Agronomical characterization and allogamy-associated traits of the five newly developed CMS lines were studied for their interrelationship. Anther length had a significant positive correlation (0.759) with the duration of glume opening and a high correlation (0.698) with the angle between lemma and palea. The results indicated that ‘Nemat’ A, ‘Neda’ A and ‘Dasht’ A are more suitable as parents for hybrid seed production due to their favourable and superior floral characteristics in comparison to IR58025A.

  13. METHODOLOGICAL GROUNDS ABOUT ALGORITHM OF DEVELOPMENT ORGANIZATIONAL ANALYSIS OF RAILWAYS OPERATION

    Directory of Open Access Journals (Sweden)

    H. D. Eitutis

    2010-12-01

    Full Text Available It was established that the initial stage of reorganization is enterprise diagnostics, on the basis of which a decision on the development of an algorithm for structural transformations is made. Organizational and management analysis is an important component of diagnostics and is aimed at defining the methods and principles of the enterprise management system. The results of the organizational analysis carried out allow defining the problems and "bottlenecks" in the system of strategic management of Ukrainian railways as a whole and in different directions of their economic activities.

  14. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools, enabling true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but it faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.

  15. Development of In-Core Protection System

    International Nuclear Information System (INIS)

    Cho, J. H.; Kim, C. H.; Kim, J. H.; Jeong, S. H.; Sohn, S. D.; Baek, S. M.; Yoon, J. H.

    2016-01-01

    In-core Protection System (ICOPS) is an on-line digital computer system which continuously calculates Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD) from plant parameters and makes trip decisions based on the computations. Its function is the same as that of the Core Protection Calculator System (CPCS) and the Reactor Core Protection System (RCOPS), which are applied to the Optimized Power Reactor 1000 (OPR1000) and the Advanced Power Reactor 1400 (APR1400). The ICOPS has been developed to overcome algorithm-related obstacles in overseas projects. To achieve this goal, several algorithms were newly developed and the hardware and software design was updated. The functional design requirements document was developed by KEPCO-NF and the component design was conducted by Doosan. System design and software implementation were performed by KEPCO-E&C, and software Verification and Validation (V&V) was performed by KEPCO-E&C and Sure Softtech. The function of the I/O simulator was improved even though the hardware platform is the same as that of RCOPS for Shin-Hanul 1 and 2. SCADE was applied to the implementation of the ICOPS software, and a V&V system for ICOPS that satisfies international standards was developed. Although several further detailed design works remain, the function of ICOPS has been confirmed. The ICOPS will be applied to the APR+ project, and further work will be performed in the following project.

  17. Marshall Rosenbluth and the Metropolis algorithm

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    2005-01-01

    The 1953 publication, 'Equation of State Calculations by Fast Computing Machines' by N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller [J. Chem. Phys. 21, 1087 (1953)] marked the beginning of the use of the Monte Carlo method for solving problems in the physical sciences. The method described in this publication subsequently became known as the Metropolis algorithm, undoubtedly the most famous and most widely used Monte Carlo algorithm ever published. As none of the authors made subsequent use of the algorithm, they remained unknown to the large simulation-physics community that grew from this publication, and their roles in its development became the subject of mystery and legend. At a conference marking the 50th anniversary of the 1953 publication, Marshall Rosenbluth gave his recollections of the algorithm's development. The present paper describes the algorithm, reconstructs the historical context in which it was developed, and summarizes Marshall's recollections.
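
    The algorithm itself is easy to state: propose a symmetric random move, then accept it with probability min(1, p'/p). Below is a minimal sketch in Python, sampling from a standard normal distribution; the target density, proposal width, and chain length are illustrative choices, not taken from the 1953 paper.

```python
import math
import random

def metropolis_sample(log_prob, x0, n_steps, step=1.0, rng=None):
    """Metropolis sampling: propose a symmetric move, accept with min(1, p'/p)."""
    rng = rng or random.Random()
    x, lp = x0, log_prob(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        lp_new = log_prob(x_new)
        # This acceptance rule preserves detailed balance with respect to p(x)
        if lp_new >= lp or rng.random() < math.exp(lp_new - lp):
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2/2 up to an additive constant
chain = metropolis_sample(lambda x: -0.5 * x * x, 0.0, 20000,
                          step=2.5, rng=random.Random(0))
burn = chain[2000:]
mean = sum(burn) / len(burn)
var = sum((x - mean) ** 2 for x in burn) / len(burn)
```

    After discarding burn-in, the sample mean and variance approach 0 and 1, the moments of the target density.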

  18. Clinical Prediction Model for Time in Therapeutic Range While on Warfarin in Newly Diagnosed Atrial Fibrillation.

    Science.gov (United States)

    Williams, Brent A; Evans, Michael A; Honushefsky, Ashley M; Berger, Peter B

    2017-10-12

    Though warfarin has historically been the primary oral anticoagulant for stroke prevention in newly diagnosed atrial fibrillation (AF), several new direct oral anticoagulants may be preferred when anticoagulation control with warfarin is expected to be poor. This study developed a prediction model for time in therapeutic range (TTR) among newly diagnosed AF patients on newly initiated warfarin as a tool to assist decision making between warfarin and direct oral anticoagulants. This electronic medical record-based, retrospective study included newly diagnosed, nonvalvular AF patients with no recent warfarin exposure receiving primary care services through a large healthcare system in rural Pennsylvania. TTR was estimated as the percentage of time international normalized ratio measurements were between 2.0 and 3.0 during the first year following warfarin initiation. Candidate predictors of TTR were chosen from data elements collected during usual clinical care. A TTR prediction model was developed and temporally validated, and its predictive performance was compared with the SAMe-TT2R2 score (sex, age, medical history, treatment, tobacco, race) using R² and c-statistics. A total of 7877 newly diagnosed AF patients met study inclusion criteria. Median (interquartile range) TTR within the first year of starting warfarin was 51% (32, 67). Of 85 candidate predictors evaluated, 15 were included in the final validated model, with an R² of 15.4%. The proposed model showed better predictive performance than the SAMe-TT2R2 score (R² = 3.0%). The proposed prediction model may assist decision making on the proper mode of oral anticoagulant among newly diagnosed AF patients. However, predicting TTR on warfarin remains challenging. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
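
    The abstract does not state how TTR was computed between INR measurements; the standard approach is Rosendaal linear interpolation, sketched below. The visit schedule and INR values in the usage example are made up for illustration.

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Fraction of time INR lies in [low, high], assuming linear change
    between consecutive measurements (Rosendaal method)."""
    in_range = 0.0
    total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        if i0 == i1:
            in_range += span if low <= i0 <= high else 0.0
            continue
        lo_i, hi_i = sorted((i0, i1))
        # Overlap of the traversed INR interval with the therapeutic range,
        # converted to time via the assumed linear rate of INR change
        overlap = max(0.0, min(hi_i, high) - max(lo_i, low))
        in_range += span * overlap / (hi_i - lo_i)
    return in_range / total

# INR drifting from 1.5 to 3.5 over 40 days, measured every 10 days
ttr = rosendaal_ttr([0, 10, 20, 30, 40], [1.5, 2.0, 2.5, 3.0, 3.5])  # → 0.5
```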

  19. The activity of carbohydrate-degrading enzymes in the development of brood and newly emerged workers and drones of the Carniolan honeybee, Apis mellifera carnica.

    Science.gov (United States)

    Żółtowska, Krystyna; Lipiński, Zbigniew; Łopieńska-Biernat, Elżbieta; Farjan, Marek; Dmitryjuk, Małgorzata

    2012-01-01

    The activity of glycogen phosphorylase and of the carbohydrate-hydrolyzing enzymes α-amylase, glucoamylase, trehalase, and sucrase was studied during the development of the Carniolan honey bee, Apis mellifera carnica Pollman (Hymenoptera: Apidae), from newly hatched larva to freshly emerged imago of worker and drone. Phosphorolytic degradation of glycogen was significantly stronger than hydrolytic degradation in all developmental stages. Developmental profiles of hydrolase activity were similar in both sexes of brood; high activity was found in unsealed larvae and the lowest in prepupae, followed by an increase in enzymatic activity. Especially intensive increases in activity occurred in the last stage of pupae and in the newly emerged imago. Apart from α-amylase, the activities of the other enzymes were higher in drone than in worker broods. Among drones, the activity of glucoamylase was particularly high, ranging from around three times higher in the youngest larvae to 13 times higher in the oldest pupae. This confirms earlier suggestions of higher rates of metabolism in drone broods than in worker broods.

  20. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and the continued improvement of living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms, and security. Using RFID technology for positioning is a new direction being pursued by various research institutions and scholars. RFID positioning offers system stability, small error, and low cost, and its location algorithm is the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID positioning methods are introduced; second, a higher-accuracy network-based location method is discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better RFID positioning technology in the future.
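
    The LANDMARC method mentioned above locates a tracked tag by comparing its signal-strength readings against reference tags at known positions and averaging the k most similar ones. A small sketch under simplified assumptions: RSSI is modelled as negative reader-to-tag distance, weights are 1/E², and all positions and parameters are illustrative.

```python
import math

def landmarc_locate(tag_rssi, ref_tags, k=3):
    """Estimate a tag's (x, y) from the k reference tags whose RSSI
    vectors (one reading per reader) are closest to the tag's."""
    dists = []
    for pos, rssi in ref_tags:
        # Euclidean distance in signal-strength space
        e = math.sqrt(sum((a - b) ** 2 for a, b in zip(tag_rssi, rssi)))
        dists.append((e, pos))
    dists.sort(key=lambda t: t[0])
    nearest = dists[:k]
    # Weight each neighbour by 1/E^2: closer in signal space counts more
    weights = [1.0 / (e * e + 1e-9) for e, _ in nearest]
    wsum = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / wsum
    y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / wsum
    return (x, y)

def rssi_at(p, readers):
    """Toy signal model: RSSI falls off with distance to each reader."""
    return [-math.hypot(p[0] - r[0], p[1] - r[1]) for r in readers]

readers = [(0, 0), (0, 2), (2, 0), (2, 2)]
refs = [((x, y), rssi_at((x, y), readers)) for x in (0, 1, 2) for y in (0, 1, 2)]
est = landmarc_locate(rssi_at((0.9, 1.1), readers), refs, k=3)
```

    With a 3×3 reference grid the estimate lands near the tag's true position (0.9, 1.1), pulled strongly toward the nearest reference tag at (1, 1).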

  1. Confidence in leadership among the newly qualified.

    Science.gov (United States)

    Bayliss-Pratt, Lisa; Morley, Mary; Bagley, Liz; Alderson, Steven

    2013-10-23

    The Francis report highlighted the importance of strong leadership from health professionals but it is unclear how prepared those who are newly qualified feel to take on a leadership role. We aimed to assess the confidence of newly qualified health professionals working in the West Midlands in the different competencies of the NHS Leadership Framework. Most respondents felt confident in their abilities to demonstrate personal qualities and work with others, but less so at managing or improving services or setting direction.

  2. Evaluation of nine HIV rapid test kits to develop a national HIV testing algorithm in Nigeria

    Directory of Open Access Journals (Sweden)

    Orji Bassey

    2015-05-01

    Full Text Available Background: Non-cold-chain-dependent HIV rapid testing has been adopted in many resource-constrained nations as a strategy for reaching out to populations. HIV rapid test kits (RTKs) have the advantage of ease of use, low operational cost and short turnaround times. Before 2005, different RTKs had been used in Nigeria without formal evaluation. Between 2005 and 2007, a study was conducted to formally evaluate a number of RTKs and construct HIV testing algorithms. Objectives: The objectives of this study were to assess and select HIV RTKs and develop national testing algorithms. Method: Nine RTKs were evaluated using 528 well-characterised plasma samples. These comprised 198 HIV-positive specimens (37.5%) and 330 HIV-negative specimens (62.5%), collected nationally. Sensitivity and specificity were calculated with 95% confidence intervals for all nine RTKs singly and for serial and parallel combinations of six RTKs; and relative costs were estimated. Results: Six of the nine RTKs met the selection criteria, including minimum sensitivity and specificity (both ≥ 99.0%) requirements. There were no significant differences in sensitivities or specificities of RTKs in the serial and parallel algorithms, but the cost of RTKs in parallel algorithms was twice that in serial algorithms. Consequently, three serial algorithms, comprising four test kits (Bundi™, Determine™, Stat-Pak® and Uni-Gold™) with 100.0% sensitivity and 99.1% – 100.0% specificity, were recommended and adopted as national interim testing algorithms in 2007. Conclusion: This evaluation provides the first evidence for reliable combinations of RTKs for HIV testing in Nigeria. However, these RTKs need further evaluation in the field (Phase II) to re-validate their performance.
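
    The serial-versus-parallel trade-off reported above can be sketched with the textbook combination formulas, which assume the two tests err independently (a simplification; real RTKs can share failure modes). The kit performance figures below are hypothetical.

```python
def serial(se1, sp1, se2, sp2):
    """Serial algorithm: the second kit only confirms positives from the first."""
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

def parallel(se1, sp1, se2, sp2):
    """Parallel algorithm: both kits run on every sample; either positive counts."""
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

# Two hypothetical kits, each 99.5% sensitive and 99.0% specific
se_s, sp_s = serial(0.995, 0.99, 0.995, 0.99)      # sensitivity down, specificity up
se_p, sp_p = parallel(0.995, 0.99, 0.995, 0.99)    # sensitivity up, specificity down
```

    Serial testing trades a little sensitivity for specificity and uses the second kit only on screen-positive samples, while parallel testing runs both kits on every sample, which is why the study found parallel algorithms cost roughly twice as much in kits.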

  3. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    Science.gov (United States)

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. When compared with existing algorithms that return a single assembly directly, this strategy achieves comparable or higher accuracy than memory-efficient algorithms that can process a large amount of RNA-Seq data, and comparable or slightly lower accuracy than memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy thus allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
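
    The abstract does not detail the quality criteria the merging algorithm uses. As a toy illustration of the divide-and-conquer idea only, the sketch below merges per-library assemblies by keeping longer transcripts and dropping any that are exact substrings of one already kept; a real merger would use alignment and quality scores.

```python
def merge_assemblies(assemblies):
    """Combine per-library assemblies, dropping transcripts that are
    exact substrings of a longer transcript already kept."""
    transcripts = sorted({t for asm in assemblies for t in asm},
                         key=len, reverse=True)
    kept = []
    for t in transcripts:
        # Keep only transcripts not contained in an already-kept one
        if not any(t in longer for longer in kept):
            kept.append(t)
    return kept

# Two small per-library assemblies with a redundant fragment ("CCGT")
merged = merge_assemblies([["ATGCCGTA", "GGT"],
                           ["CCGT", "ATGCCGTA", "TTAGG"]])
```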

  4. Development of a stereolithography (STL) input and computer numerical control (CNC) output algorithm for an entry-level 3-D printer

    Directory of Open Access Journals (Sweden)

    Brown, Andrew

    2014-08-01

    Full Text Available This paper presents a prototype stereolithography (STL) file format slicing and tool-path generation algorithm, which serves as a data front-end for a Rapid Prototyping (RP) entry-level three-dimensional (3-D) printer. Used mainly in Additive Manufacturing (AM), 3-D printers are devices that apply plastic, ceramic, and metal, layer by layer, in all three dimensions on a flat surface (X, Y, and Z axes). 3-D printers, unfortunately, cannot print an object without a special algorithm that is required to create the Computer Numerical Control (CNC) instructions for printing. An STL algorithm therefore forms a critical component for Layered Manufacturing (LM), also referred to as RP. The purpose of this study was to develop an algorithm that is capable of processing and slicing an STL file or multiple files, resulting in a tool-path, and finally compiling a CNC file for an entry-level 3-D printer. The prototype algorithm was implemented for an entry-level 3-D printer that utilises the Fused Deposition Modelling (FDM) process, or Solid Freeform Fabrication (SFF) process, an AM technology. Following an experimental method, the full data flow path for the prototype algorithm was developed, starting with STL data files, and then processing the STL data file into a G-code file format by slicing the model and creating a tool-path. This layering method is used by most 3-D printers to turn a 2-D object into a 3-D object. The STL algorithm developed in this study presents innovative opportunities for LM, since it allows engineers and architects to transform their ideas easily into a solid model in a fast, simple, and cheap way. This is accomplished by allowing STL models to be sliced rapidly, effectively, and without error, and finally to be processed and prepared into a G-code print file.
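
    The core of any STL slicer is intersecting each triangular facet with a horizontal plane to obtain 2-D line segments, which are then chained into closed contours and emitted as G-code moves. A minimal sketch of the facet-plane intersection (the vertex coordinates are illustrative, and this is not the paper's implementation):

```python
def slice_triangle(tri, z):
    """Intersect one STL facet (three 3-D vertices) with the plane Z=z,
    returning the resulting 2-D line segment or None."""
    pts = []
    for (x0, y0, z0), (x1, y1, z1) in zip(tri, tri[1:] + tri[:1]):
        if (z0 - z) * (z1 - z) < 0:        # edge crosses the plane
            t = (z - z0) / (z1 - z0)       # linear interpolation along the edge
            pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        elif z0 == z:                      # vertex lies exactly on the plane
            pts.append((x0, y0))
    return tuple(pts[:2]) if len(pts) >= 2 else None

# One facet spanning z = 0 to z = 2, sliced at z = 1
tri = [(0.0, 0.0, 0.0), (2.0, 0.0, 2.0), (0.0, 2.0, 2.0)]
seg = slice_triangle(tri, 1.0)  # → ((1.0, 0.0), (0.0, 1.0))
```

    A full slicer repeats this for every facet at every layer height, chains the segments into loops, and writes each loop out as a sequence of G1 moves.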

  5. Ambient occlusion - A powerful algorithm to segment shell and skeletal intrapores in computed tomography data

    Science.gov (United States)

    Titschack, J.; Baum, D.; Matsuyama, K.; Boos, K.; Färber, C.; Kahl, W.-A.; Ehrig, K.; Meinel, D.; Soriano, C.; Stock, S. R.

    2018-06-01

    During the last decades, X-ray (micro-)computed tomography has gained increasing attention for the description of porous skeletal and shell structures of various organism groups. However, their quantitative analysis is often hampered by the difficulty to discriminate cavities and pores within the object from the surrounding region. Herein, we test the ambient occlusion (AO) algorithm and newly implemented optimisations for the segmentation of cavities (implemented in the software Amira). The segmentation accuracy is evaluated as a function of (i) changes in the ray length input variable, and (ii) the usage of AO (scalar) field and other AO-derived (scalar) fields. The results clearly indicate that the AO field itself outperforms all other AO-derived fields in terms of segmentation accuracy and robustness against variations in the ray length input variable. The newly implemented optimisations improved the AO field-based segmentation only slightly, while the segmentations based on the AO-derived fields improved considerably. Additionally, we evaluated the potential of the AO field and AO-derived fields for the separation and classification of cavities as well as skeletal structures by comparing them with commonly used distance-map-based segmentations. For this, we tested the zooid separation within a bryozoan colony, the stereom classification of an ophiuroid tooth, the separation of bioerosion traces within a marble block and the calice (central cavity)-pore separation within a dendrophyllid coral. The obtained results clearly indicate that the ideal input field depends on the three-dimensional morphology of the object of interest. The segmentations based on the AO-derived fields often provided cavity separations and skeleton classifications that were superior to or impossible to obtain with commonly used distance-map-based segmentations. 
The combined usage of various AO-derived fields by supervised or unsupervised segmentation algorithms might provide a promising

  6. Developing Information Power Grid Based Algorithms and Software

    Science.gov (United States)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large-scale applications in a heterogeneous distributed environment. It is likely that the large-scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems, and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed, in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.
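
    The grid-and-units translation described above can be illustrated with a one-dimensional sketch: interpolate a field from one grid onto another, then convert units. The grids, values, and the Fahrenheit-to-Celsius conversion are made-up illustrations, not part of the study.

```python
def regrid(src_x, src_vals, dst_x):
    """Linearly interpolate a 1-D field from one grid onto another."""
    out = []
    for x in dst_x:
        # Clamp to the source domain, then bracket x between source nodes
        if x <= src_x[0]:
            out.append(src_vals[0]); continue
        if x >= src_x[-1]:
            out.append(src_vals[-1]); continue
        i = max(j for j in range(len(src_x)) if src_x[j] <= x)
        t = (x - src_x[i]) / (src_x[i + 1] - src_x[i])
        out.append(src_vals[i] + t * (src_vals[i + 1] - src_vals[i]))
    return out

# Temperature in Fahrenheit on a coarse grid -> Celsius on a finer grid
f_on_fine = regrid([0.0, 1.0, 2.0], [32.0, 50.0, 212.0], [0.5, 1.5])
c_on_fine = [(f - 32.0) * 5.0 / 9.0 for f in f_on_fine]
```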

  7. Development and validation of a risk prediction algorithm for the recurrence of suicidal ideation among general population with low mood.

    Science.gov (United States)

    Liu, Y; Sareen, J; Bolton, J M; Wang, J L

    2016-03-15

    Suicidal ideation is one of the strongest predictors of recent and future suicide attempt. This study aimed to develop and validate a risk prediction algorithm for the recurrence of suicidal ideation among a population with low mood. 3035 participants from the U.S. National Epidemiologic Survey on Alcohol and Related Conditions with suicidal ideation at their lowest mood at baseline were included. The Alcohol Use Disorder and Associated Disabilities Interview Schedule, based on the DSM-IV criteria, was used. Logistic regression modeling was conducted to derive the algorithm. Discrimination and calibration were assessed in the development and validation cohorts. In the development data, the proportion with recurrent suicidal ideation over 3 years was 19.5% (95% CI: 17.7, 21.5). The developed algorithm consisted of 6 predictors: age, feelings of emptiness, sudden mood changes, self-harm history, depressed mood in the past 4 weeks, and interference with social activities in the past 4 weeks because of physical health or emotional problems; emptiness was the most important risk factor. The model had good discriminative power (C statistic = 0.8273, 95% CI: 0.8027, 0.8520). The C statistic was 0.8091 (95% CI: 0.7786, 0.8395) in the external validation dataset and 0.8193 (95% CI: 0.8001, 0.8385) in the combined dataset. This study does not apply to people with suicidal ideation who are not depressed. The developed risk algorithm for predicting the recurrence of suicidal ideation has good discrimination and excellent calibration. Clinicians can use this algorithm to stratify the risk of recurrence in patients and thus improve personalized treatment approaches, offer advice, and plan further intensive monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.
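
    The C statistic used above to assess discrimination is the probability that a randomly chosen case is ranked above a randomly chosen non-case by the model's predicted risk. For small data it can be computed directly over all case/non-case pairs; the scores and labels below are illustrative.

```python
def c_statistic(scores, labels):
    """C statistic (AUC): fraction of positive/negative pairs in which
    the positive case receives the higher predicted score (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two cases (label 1) and three non-cases (label 0) with predicted risks
auc = c_statistic([0.9, 0.8, 0.3, 0.2, 0.6], [1, 0, 1, 0, 0])
```

    A value of 0.5 corresponds to chance ranking and 1.0 to perfect discrimination, which is why the reported 0.81 to 0.83 range counts as good discriminative power.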

  8. Quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Shenvi, Neil; Whaley, K. Birgitta; Kempe, Julia

    2003-01-01

    Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speedup over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random-walk architecture that provides such a speedup. It will be shown that this algorithm performs an oracle search on a database of N items with O(√N) calls to the oracle, yielding a speedup similar to other quantum search algorithms. It appears that the quantum random-walk formulation has considerable flexibility, presenting interesting opportunities for the development of other, possibly novel quantum algorithms.
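
    The O(√N) scaling mentioned above matches Grover's search, the standard quantum search algorithm the paper compares against. Unlike the coined-walk construction itself, Grover's iteration is simple to simulate classically with a state vector; the sketch below is that standard algorithm, included only to illustrate the speedup family, not the random-walk algorithm of this paper.

```python
import math

def grover_probability(n_items, marked, iterations):
    """Classically simulate Grover's search: an oracle phase flip on the
    marked item followed by inversion about the mean (diffusion)."""
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]               # oracle call
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]      # diffusion step
    return amp[marked] ** 2                      # success probability

n = 16
best_k = round(math.pi / 4 * math.sqrt(n))       # ~O(sqrt(N)) oracle calls
p = grover_probability(n, marked=3, iterations=best_k)
```

    For N = 16, about π√N/4 ≈ 3 oracle calls push the probability of measuring the marked item above 0.9, versus an expected N/2 = 8 classical queries.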

  9. A “Tuned” Mask Learnt Approach Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Youchuan Wan

    2016-01-01

    Full Text Available Texture image classification is an important topic in many applications in machine vision and image analysis. Texture features extracted from the original texture image by using a “Tuned” mask constitute one of the simplest and most effective methods. However, hill-climbing-based training methods cannot reliably acquire a satisfactory mask in one pass; on the other hand, some commonly used evolutionary algorithms, such as the genetic algorithm (GA) and particle swarm optimization (PSO), easily fall into local optima. A novel approach for texture image classification, exemplified by the recognition of residential areas, is detailed in the paper. In the proposed approach, learning the “Tuned” mask is viewed as a constrained optimization problem, and the optimal “Tuned” mask is acquired by maximizing the texture energy via a newly proposed gravitational search algorithm (GSA). The optimal “Tuned” mask is achieved through the convergence of GSA. The proposed approach has been tested on several public texture and remote-sensing images. The results are then compared with those of GA, PSO, honey-bee mating optimization (HBMO), and the artificial immune algorithm (AIA). Moreover, features extracted by Gabor wavelets are also utilized to make a further comparison. Experimental results show that the proposed method is robust and adaptive and exhibits better performance than the other methods involved in the paper in terms of fitness value and classification accuracy.

  10. Problem solving with genetic algorithms and Splicer

    Science.gov (United States)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
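
    The basic GA loop the paper introduces (fitness-based selection, crossover, and mutation over a population of bit strings) can be sketched in a few lines. The one-max objective, operator choices, and parameters below are illustrative and are not Splicer's actual interface.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, generations=60,
                      p_mut=0.02, rng=None):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = rng or random.Random()
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # Pick two individuals at random; the fitter one survives
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# One-max problem: fitness is the number of 1 bits (optimum = all ones)
best = genetic_algorithm(sum, 20, rng=random.Random(1))
```

    Even this tiny GA drives the population to (or very near) the all-ones optimum within a few dozen generations, which is the Darwinian "survival of the fittest" dynamic the abstract describes.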

  11. Algorithm of developing competitive strategies and the trends of realizing them for agricultural enterprises

    Directory of Open Access Journals (Sweden)

    Viktoriia Boiko

    2016-02-01

    Full Text Available The paper specifies the basic stages of developing and realizing a strategy for enhancing the competitiveness of enterprises and presents an appropriate algorithm. The study analyzes the economic indexes and results of the activity of agrarian enterprises in the Kherson region, provides competitive strategies for the efficient development of agrarian enterprises with different levels of competitiveness, and specifies ways of realizing them which will contribute to the optimal use of the available strategic potential.

  12. The development of a 3D mesoscopic model of metallic foam based on an improved watershed algorithm

    Science.gov (United States)

    Zhang, Jinhua; Zhang, Yadong; Wang, Guikun; Fang, Qin

    2018-06-01

    The watershed algorithm has been used widely in x-ray computed tomography (XCT) image segmentation. It provides a transformation defined on a grayscale image and finds the lines that separate adjacent regions. However, distortion occurs in developing a mesoscopic model of metallic foam based on XCT image data: the cells are oversegmented in some cases when the traditional watershed algorithm is used. The improved watershed algorithm presented in this paper can avoid oversegmentation and is composed of three steps. Firstly, it finds all of the connected cells and identifies the junctions of the corresponding cell walls. Secondly, image segmentation is conducted to separate the adjacent cells, generating the lost cell walls between them; optimization is then performed on the segmentation image. Thirdly, the improved algorithm is validated by comparison with the image of the metallic foam, which shows that it can avoid image segmentation distortion. A mesoscopic model of metallic foam is thus formed based on the improved algorithm, and the mesoscopic characteristics of the metallic foam, such as cell size, volume and shape, are identified and analyzed.
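
    The classical marker-based watershed that the paper improves upon can be sketched as a priority flood: labelled seed pixels grow outward in order of increasing grey value, so basin boundaries settle on the bright ridge lines. A minimal 4-connected sketch (the tiny image and markers are illustrative; the paper's improvement additionally detects cell-wall junctions first to avoid oversegmentation):

```python
import heapq

def watershed(image, markers):
    """Marker-based watershed: flood from labelled seeds in order of
    increasing grey value; each pixel joins the first basin to reach it."""
    h, w = len(image), len(image[0])
    labels = [row[:] for row in markers]          # 0 = unlabelled
    heap = [(image[r][c], r, c) for r in range(h) for c in range(w)
            if markers[r][c]]
    heapq.heapify(heap)
    while heap:
        _, r, c = heapq.heappop(heap)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not labels[nr][nc]:
                labels[nr][nc] = labels[r][c]     # claimed by this basin
                heapq.heappush(heap, (image[nr][nc], nr, nc))
    return labels

# Two basins separated by a bright ridge in the middle column
image = [[0, 1, 5, 1, 0],
         [0, 1, 5, 1, 0],
         [0, 1, 5, 1, 0]]
markers = [[1, 0, 0, 0, 2],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
segmented = watershed(image, markers)
```

    The two seeds flood their respective low-valued sides and meet at the high-valued ridge, which is exactly where oversegmentation arises when spurious extra seeds are present.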

  13. Innovative applications of genetic algorithms to problems in accelerator physics

    Directory of Open Access Journals (Sweden)

    Alicia Hofler

    2013-01-01

    Full Text Available The genetic algorithm (GA is a powerful technique that implements the principles nature uses in biological evolution to optimize a multidimensional nonlinear problem. The GA works especially well for problems with a large number of local extrema, where traditional methods (such as conjugate gradient, steepest descent, and others fail or, at best, underperform. The field of accelerator physics, among others, abounds with problems which lend themselves to optimization via GAs. In this paper, we report on the successful application of GAs in several problems related to the existing Continuous Electron Beam Accelerator Facility nuclear physics machine, the proposed Medium-energy Electron-Ion Collider at Jefferson Lab, and a radio frequency gun-based injector. These encouraging results are a step forward in optimizing accelerator design and provide an impetus for application of GAs to other problems in the field. To that end, we discuss the details of the GAs used, include a newly devised enhancement which leads to improved convergence to the optimum, and make recommendations for future GA developments and accelerator applications.

  14. A model for mentoring newly-appointed nurse educators in nursing education institutions in South Africa

    Directory of Open Access Journals (Sweden)

    Eunice Seekoe

    2014-02-01

    Full Text Available Background: South Africa transformed higher education through the enactment of the Higher Education Act (No. 101 of 1997). The researcher identified the need to develop a model for the mentoring of newly-appointed nurse educators in nursing education institutions in South Africa. Objectives: To develop and describe a model for mentoring newly-appointed nurse educators in nursing education institutions in South Africa. Method: A qualitative and theory-generating design was used (following empirical findings regarding needs analysis) in order to develop the model. The conceptualisation of the framework focused on the context, content, process and the theoretical domains that influenced the model. Ideas were borrowed from different theories and integrated with the literature, and deductive and inductive strategies were applied. Results: The structure of the model is multidimensional and complex in nature (macro, meso and micro), based on the philosophy of reflective practice, competency-based practice and critical learning theories. The assumptions relate to stakeholders, context, mentoring, outcome, process and dynamics. The stakeholders are the mentor and mentee within an interactive participatory relationship. The mentoring takes place within a process with a sequence of activities such as relationship building, development, engagement, reflection and assessment. Capacity building and empowerment are outcomes of mentoring, driven by motivation. Conclusion: The implication for nurse managers is that the model can be used to develop mentoring programmes for newly-appointed nurse educators.

  15. Los Alamos Plutonium Facility newly generated TRU waste certification

    International Nuclear Information System (INIS)

    Gruetzmacher, K.; Montoya, A.; Sinkule, B.; Maez, M.

    1997-01-01

    This paper presents an overview of the activities being planned and implemented to certify newly generated contact-handled transuranic (TRU) waste produced by Los Alamos National Laboratory's (LANL's) Plutonium Facility. Certifying waste at the point of generation is the most important cost- and labor-saving step in the WIPP certification process. The pedigree of a waste item is best known by the originator of the waste, and this frees a site from expensive characterization activities such as those associated with legacy waste. Through a cooperative agreement with LANL's Waste Management Facility and under the umbrella of LANL's WIPP-related certification and quality assurance documents, the Plutonium Facility will be certifying its own newly generated waste. Some of the challenges faced by the Plutonium Facility in preparing to certify TRU waste include the modification and addition of procedures to meet WIPP requirements, standardizing packaging for TRU waste, collecting processing documentation from operations which produce TRU waste, and developing ways to modify waste streams which are not certifiable in their present form.

  16. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    Science.gov (United States)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.
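The abstract gives no equations for the release logic. Purely as a hypothetical sketch of attitude-triggered release monitoring (the thresholds, names and contingency rule below are invented for illustration, not CPAS flight software), the idea of "command a release when the desired conditions are met, with a contingency before undesirable orientations occur" could be expressed as:

```python
def release_decision(pitch_deg, pitch_rate_dps, target=(-10.0, 10.0),
                     abort_pitch=45.0):
    """Hypothetical release logic: release when pitch is inside the desired
    window with a modest pitch rate, or as a contingency before the pitch
    exceeds a limit from which clean separation is assumed unlikely."""
    lo, hi = target
    if lo <= pitch_deg <= hi and abs(pitch_rate_dps) < 20.0:
        return "RELEASE"                 # nominal attitude window reached
    if abs(pitch_deg) >= abort_pitch:
        return "RELEASE_CONTINGENCY"     # release before orientation worsens
    return "HOLD"

# Simulated extraction: pitch swinging down toward the release window.
states = [(30.0, -15.0), (12.0, -12.0), (5.0, -8.0)]
decisions = [release_decision(p, r) for p, r in states]
```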

  17. Phenology of Lymantria monacha (Lepidoptera:Lymantriidae) laboratory reared on spruce foliage or a newly developed artificial diet

    Science.gov (United States)

    Melody A. Keena; Alice Vandel; Oldrich. Pultar

    2010-01-01

    Lymantria monacha (L.) (Lepidoptera: Lymantriidae) is a Eurasian pest of conifers that has potential for accidental introduction into North America. The phenology over the entire life cycle for L. monacha individuals from the Czech Republic was compared on Picea glauca (Moench) Voss (white spruce) and a newly...

  18. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered-subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step-late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into parallel data using various interpolation methods, such as nearest-neighbor, bilinear and bicubic interpolation, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp-Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when accuracy and computational load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions.
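The EM variants compared above all build on the basic ML-EM multiplicative update. A geometry-agnostic sketch (a toy dense system matrix standing in for the ray-tracing projector, not the fan-beam implementation of the paper):

```python
import numpy as np

def mlem(system_matrix, projections, n_iter=50):
    """Plain ML-EM: x_{k+1} = x_k / (A^T 1) * A^T (p / (A x_k))."""
    A = system_matrix
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iter):
        fwd = A @ x                       # forward projection
        ratio = np.divide(projections, fwd,
                          out=np.zeros_like(fwd), where=fwd > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy 2-pixel, 3-ray system: recover x_true from noiseless projections.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_rec = mlem(A, A @ x_true, n_iter=200)
```

OS-EM applies the same update over subsets of the rays per pass, and MAP-EM OSL adds a prior-gradient term to the denominator.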

  19. Problems faced by newly diagnosed diabetes mellitus patients at ...

    African Journals Online (AJOL)

    Diabetes mellitus can be a frightening experience for newly diagnosed patients. The aim of this study was to determine and describe the problems faced by newly diagnosed diabetes mellitus patients at primary healthcare facilities at Mopani district, Limpopo Province. A qualitative, descriptive and contextual research ...

  20. Development and Evaluation of Model Algorithms to Account for Chemical Transformation in the Nearroad Environment

    Science.gov (United States)

    We describe the development and evaluation of two new model algorithms for NOx chemistry in the R-LINE near-road dispersion model for traffic sources. With increased urbanization, there is increased mobility leading to higher amount of traffic related activity on a global scale. ...

  1. Newly graduated nurses' use of knowledge sources in clinical decision-making

    DEFF Research Database (Denmark)

    Voldbjerg, Siri Lygum; Grønkjaer, Mette; Wiechula, Rick

    2017-01-01

    AIMS AND OBJECTIVES: To explore which knowledge sources newly graduated nurses use in clinical decision-making and why and how they are used. BACKGROUND: In spite of an increased educational focus on skills and competencies within evidence-based practice, newly graduated nurses' ability to use...... approaches to strengthen the knowledge base used in clinical decision-making. DESIGN AND METHODS: Ethnographic study using participant observation and individual semi-structured interviews of nine Danish newly graduated nurses in medical and surgical hospital settings. RESULTS: Newly graduated nurses use...... in clinical decision-making. If newly graduated nurses are to be supported in an articulate and reflective use of a variety of sources, they have to be allocated to experienced nurses who model a reflective, articulate and balanced use of knowledge sources. This article is protected by copyright. All rights reserved....

  2. Development of imaging and reconstructions algorithms on parallel processing architectures for applications in non-destructive testing

    International Nuclear Information System (INIS)

    Pedron, Antoine

    2013-01-01

    This thesis work is placed between the scientific domain of ultrasound non-destructive testing and algorithm-architecture adequation. Ultrasound non-destructive testing includes a group of analysis techniques used in science and industry to evaluate the properties of a material, component or system without causing damage. In order to characterise possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstructions. General-purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs allow large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. These two algorithms differ in their parallelization scheme. The first one can be properly parallelized on GPP, whereas on GPU an intensive use of atomic instructions is required. Within the second algorithm, parallelism is easier to express, but loop ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary in order to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms in the CIVA software platform is proposed and different issues related to code maintenance and durability are discussed. (author) [fr

  3. Newly graduated nurses' occupational commitment and its associations with professional competence and work-related factors.

    Science.gov (United States)

    Numminen, Olivia; Leino-Kilpi, Helena; Isoaho, Hannu; Meretoja, Riitta

    2016-01-01

    To explore newly graduated nurses' occupational commitment and its associations with their self-assessed professional competence and other work-related factors. As a factor affecting nurse turnover, newly graduated nurses' occupational commitment and its associations with work-related factors need exploring to retain an adequate workforce. Nurses' commitment has mainly been studied as organisational commitment, but newly graduated nurses' occupational commitment and its association with work-related factors need further study. This study used a descriptive, cross-sectional, correlational design. A convenience sample of 318 newly graduated nurses in Finland participated, responding to an electronic questionnaire. Statistical software, NCSS version 9, was used in data analysis. Frequencies, percentages, ranges, means and standard deviations summarised the data. Multivariate analyses of variance estimated associations between occupational commitment and work-related variables. IBM SPSS Amos version 22 estimated the model fit of the Occupational Commitment Scale and the Nurse Competence Scale. Newly graduated nurses' occupational commitment was good, with affective commitment reaching the highest mean score. There was a significant difference between the nurse groups in favour of nurses at higher competence levels in all subscales except limited-alternatives occupational commitment. Multivariate analyses revealed significant associations between subscales of commitment and competence, turnover intentions, job satisfaction, earlier professional education and work sector, with competence counting only through the affective dimension. The association between occupational commitment and low turnover intentions and satisfaction with the nursing occupation was strong. Higher general competence indicated higher overall occupational commitment. Managers' recognition of the influence of all dimensions of occupational commitment on newly graduated nurses' professional development is important. Follow

  4. Ethical climate and nurse competence - newly graduated nurses' perceptions.

    Science.gov (United States)

    Numminen, Olivia; Leino-Kilpi, Helena; Isoaho, Hannu; Meretoja, Riitta

    2015-12-01

    is also a need for knowledge of newly graduated nurses' views of factors which act as enhancers or barriers to positive ethical climates to develop. Interventions, continuing education courses, and discussions designed to promote positive ethical climates should be developed for managers, nurses, and multi-professional teams. © The Author(s) 2014.

  5. Low cost MATLAB-based pulse oximeter for deployment in research and development applications.

    Science.gov (United States)

    Shokouhian, M; Morling, R C S; Kale, I

    2013-01-01

    Problems such as motion artifacts and the effects of ambient light have forced developers to design different signal processing techniques and algorithms to increase the reliability and accuracy of the conventional pulse oximeter device. To evaluate the robustness of these techniques, they are applied either to recorded data or are implemented on chip to be applied to real-time data. Recorded data is the most common method of evaluation; however, it is not as reliable as real-time measurement. On the other hand, hardware implementation can be both expensive and time-consuming. This paper presents a low-cost MATLAB-based pulse oximeter that can be used for rapid evaluation of newly developed signal processing techniques and algorithms. Flexibility to apply different signal processing techniques, provision of both processed and unprocessed data, and low implementation cost are the important features of this design, which make it ideal for research and development purposes, as well as commercial, hospital and healthcare applications.

  6. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of a data association algorithm for simultaneous localization and mapping in determining the route of unmanned aerial vehicles (UAVs). Currently, such vehicles are already widely used, but mainly controlled by a remote operator. An urgent task is to develop a control system that allows for autonomous flight. The SLAM (simultaneous localization and mapping) algorithm, which makes it possible to predict the location, speed, flight parameters and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. Data association for SLAM is meant to establish a matching set between observed landmarks and landmarks in the state vector. The ant algorithm is one of the widely used optimization algorithms, with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem for SLAM. However, the traditional ant algorithm easily falls into local optima in the process of finding routes. Random perturbations are added in the process of updating the global pheromone to avoid local optima. Setting pheromone limits on the route can increase the search space with a reasonable amount of computation for finding the optimal route. The paper proposes an algorithm for local data association for SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks with the possibility of association by the criterion of individual compatibility (IC). The second stage defines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and
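As a generic illustration of the pheromone-guided search with evaporation that ant algorithms rely on (a toy shortest-path problem, not the SLAM data-association task of the paper), a minimal sketch might be:

```python
import random

def aco_shortest_path(dist, source, target, n_ants=20, n_iter=50,
                      evap=0.5, seed=1):
    """Toy ant-colony search for a short path in a weighted digraph given as
    {node: {neighbor: distance}}. Ants take pheromone-biased random walks;
    pheromone evaporates, then the best path found so far is reinforced."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in dist for v in dist[u]}   # initial pheromone
    best_path, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            path, node, visited = [source], source, {source}
            while node != target:
                choices = [v for v in dist[node] if v not in visited]
                if not choices:
                    path = None
                    break
                weights = [tau[(node, v)] / dist[node][v] for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                visited.add(node)
                path.append(node)
            if path is None:
                continue
            length = sum(dist[u][v] for u, v in zip(path, path[1:]))
            if length < best_len:
                best_path, best_len = path, length
        for e in tau:                                    # evaporation
            tau[e] *= 1 - evap
        for e in zip(best_path, best_path[1:]):          # reinforcement
            tau[e] += 1.0 / best_len

    return best_path, best_len

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5},
         "C": {"D": 1}, "D": {}}
path, length = aco_shortest_path(graph, "A", "D")
```

The paper's improvements (random pheromone perturbations and pheromone limits) would slot into the evaporation/reinforcement step.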

  7. Development and validation of a prediction algorithm for the onset of common mental disorders in a working population.

    Science.gov (United States)

    Fernandez, Ana; Salvador-Carulla, Luis; Choi, Isabella; Calvo, Rafael; Harvey, Samuel B; Glozier, Nicholas

    2018-01-01

    Common mental disorders are the most common reason for long-term sickness absence in most developed countries. Prediction algorithms for the onset of common mental disorders may help target indicated work-based prevention interventions. We aimed to develop and validate a risk algorithm to predict the onset of common mental disorders at 12 months in a working population. We conducted a secondary analysis of the Household, Income and Labour Dynamics in Australia Survey, a longitudinal, nationally representative household panel in Australia. Data from the 6189 working participants who did not meet the criteria for a common mental disorder at baseline were non-randomly split into training and validation databases, based on state of residence. Common mental disorders were assessed with the mental component score of the 36-Item Short Form Health Survey questionnaire (score ⩽45). Risk algorithms were constructed following recommendations made by the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Different risk factors were identified among women and men for the final risk algorithms. In the training data, the model for women had a C-index of 0.73 and an effect size (Hedges' g) of 0.91. In men, the C-index was 0.76 and the effect size was 1.06. In the validation data, the C-index was 0.66 for women and 0.73 for men, with positive predictive values of 0.28 and 0.26, respectively. Conclusion: It is possible to develop an algorithm with good discrimination for the onset of common mental disorders among working men, identifying overall and modifiable risks. Such models have the potential to change the way that prevention of common mental disorders at the workplace is conducted, but different models may be required for women.

  8. Improved gravitational search algorithm for parameter identification of water turbine regulation system

    International Nuclear Information System (INIS)

    Chen, Zhihuan; Yuan, Xiaohui; Tian, Hao; Ji, Bin

    2014-01-01

    Highlights: • We propose an improved gravitational search algorithm (IGSA). • IGSA is applied to parameter identification of the water turbine regulation system (WTRS). • WTRS is modeled by considering the impact of turbine speed on torque and water flow. • A weighted objective function strategy is applied to parameter identification of WTRS. - Abstract: Parameter identification of the water turbine regulation system (WTRS) is crucial in precisely modeling the hydropower generating unit (HGU) and provides support for the adaptive control and stability analysis of the power system. In this paper, an improved gravitational search algorithm (IGSA) is proposed and applied to solve the identification problem for the WTRS system under load and no-load running conditions. This new algorithm, which is based on the standard gravitational search algorithm (GSA), accelerates convergence by combining the search strategy of particle swarm optimization with the elastic-ball method. Chaotic mutation, devised to escape local optima with a certain probability, is also added into the algorithm to avoid premature convergence. Furthermore, a new kind of model associated with engineering practice is built and analyzed in the simulation tests. An illustrative example of parameter identification of WTRS is used to verify the feasibility and effectiveness of the proposed IGSA, as compared with standard GSA and particle swarm optimization in terms of parameter identification accuracy and convergence speed. The simulation results show that IGSA performs best on all identification indicators.
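For readers unfamiliar with the baseline, a bare-bones sketch of standard GSA (without the paper's PSO/elastic-ball and chaotic-mutation enhancements; all constants below are illustrative, not from the paper): fitness-derived masses attract agents toward heavier ones, while the gravitational constant decays over time.

```python
import math
import random

def gsa(objective, bounds, n_agents=30, n_iter=100, g0=100.0, seed=0):
    """Bare-bones gravitational search algorithm (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    best, best_f = None, float("inf")
    for t in range(n_iter):
        fit = [objective(p) for p in pos]
        worst, bst = max(fit), min(fit)
        if bst < best_f:
            best_f, best = bst, list(pos[fit.index(bst)])
        span = (worst - bst) or 1.0
        m = [(worst - f) / span for f in fit]         # raw masses from fitness
        msum = sum(m) or 1.0
        M = [mi / msum for mi in m]
        G = g0 * math.exp(-20 * t / n_iter)           # decaying gravitational constant
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                r = math.dist(pos[i], pos[j]) + 1e-9
                for d in range(dim):
                    acc[d] += rng.random() * G * M[j] * (pos[j][d] - pos[i][d]) / r
            for d in range(dim):
                lo, hi = bounds[d]
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
    return best, best_f

# Smoke test on the sphere function; minimum at the origin.
best, best_f = gsa(lambda v: sum(x * x for x in v), [(-5, 5)] * 2)
```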

  9. Practices influenced by policy? An exploration of newly hired science teachers at sites in South Africa and the United States

    Science.gov (United States)

    Navy, S. L.; Luft, J. A.; Toerien, R.; Hewson, P. W.

    2018-05-01

    In many parts of the world, newly hired science teachers' practices are developing in a complex policy environment. However, little is known about how newly hired science teachers' practices are enacted throughout a cycle of instruction and how these practices can be influenced by macro-, meso-, and micro-policies. Knowing how policies impact practice can result in better policies or better support for certain policies in order to enhance the instruction of newly hired teachers. This comparative study investigated how 12 newly hired science teachers at sites in South Africa (SA) and the United States (US) progressed through an instructional cycle of planning, teaching, and reflection. The qualitative data were analysed through beginning teacher competency frameworks, the cycle of instruction, and institutional theory. Data analysis revealed prevailing areas of practice and connections to levels of policy within the instructional cycle phases. There were some differences between the SA and US teachers and among first-, second-, and third-year teachers. More importantly, this study indicates that newly hired teachers are susceptible to micro-policies and are progressively developing their practice. It also shows the importance of meso-level connectors. It suggests that teacher educators and policy makers must consider how to prepare and support newly hired science teachers to achieve the shared global visions of science teaching.

  10. Development of an algorithm for heartbeats detection and classification in Holter records based on temporal and morphological features

    International Nuclear Information System (INIS)

    García, A; Romano, H; Laciar, E; Correa, R

    2011-01-01

    In this work a detection and classification algorithm for heartbeat analysis in Holter records was developed. First, a QRS complex detector was implemented and the temporal and morphological characteristics of each beat were extracted. A vector was built with these features; this vector is the input of the classification module, based on discriminant analysis. The beats were classified into three groups: premature ventricular contraction beat (PVC), atrial premature contraction beat (APC) and normal beat (NB). These beat categories represent the most important groups in commercial Holter systems. The developed algorithms were evaluated on 76 ECG records from two validated open-access databases, the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database. A total of 166,343 beats were detected and analyzed, where the QRS detection algorithm provides a sensitivity of 99.69% and a positive predictive value of 99.84%. The classification stage gives sensitivities of 97.17% for NB, 97.67% for PVC and 92.78% for APC.
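The abstract does not detail the detector itself. As an illustration only (far simpler than a Holter-grade algorithm, and not the authors' method), a naive QRS detector can threshold the squared derivative of the signal, which emphasizes the steep QRS slopes, and enforce a refractory period between beats:

```python
import numpy as np

def detect_qrs(ecg, fs, threshold_frac=0.5, refractory_s=0.2):
    """Naive QRS detector: squared derivative emphasizes steep QRS slopes;
    local maxima above a fraction of the global maximum are kept, with a
    refractory period so each beat is counted once."""
    energy = np.gradient(ecg) ** 2
    thresh = threshold_frac * energy.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(energy) - 1):
        is_local_max = energy[i] >= energy[i - 1] and energy[i] >= energy[i + 1]
        if energy[i] > thresh and is_local_max and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks

# Synthetic ECG: one sharp R-like spike per second on noise (fs = 250 Hz).
fs = 250
t = np.arange(0, 5, 1 / fs)
ecg = 0.05 * np.random.default_rng(0).standard_normal(t.size)
for beat in range(5):
    ecg[beat * fs + 10] += 1.0
peaks = detect_qrs(ecg, fs)
```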

  11. Performance and development for the Inner Detector Trigger Algorithms at ATLAS

    CERN Document Server

    Penc, Ondrej; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for Run 2 starting in spring 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  12. NEWLY-PACKAGED BALI TOURIST PERFORMING ARTS IN THE PERSPECTIVE OF CULTURAL STUDIES

    Directory of Open Access Journals (Sweden)

    Ni Made Ruastiti

    2012-11-01

    Full Text Available This research is focused on the newly-packaged tourist performing arts; they are a new concept and differ from the general tourist performing arts. They are packaged from various components of Balinese arts and managed as large-scale tourist performing arts in terms of the materials, space and time of their performances. The researcher calls them new types of Bali tourist performing arts because the way they are presented is new and different from the traditional tourist performing arts, which are simply performed. In this research, the newly-packaged performing arts are analyzed from the perspective of cultural studies. The research was carried out at three palaces in Bali: Mengwi Palace in Badung regency, Anyar Palace at Kerambitan, Tabanan regency, and Banyuning Palace at Bongkasa, Badung regency. There are three main problems to be discussed: firstly, how did the tourist performing arts emerge in the palaces? Secondly, are they related to the tourist industry developed in the palaces? Thirdly, what is their impact and meaning for the palaces, society, and Balinese culture? The researcher uses a qualitative method and an interdisciplinary approach, as is characteristic of cultural studies. The theories used are hegemony, deconstruction, and structuration. The result shows that tourism development at the palaces has made local society more critical. The money-oriented economy, based on the spirit of profit, has led to commodification in all sectors of life. The emergence of the tourist industry at the palaces has led to the idea of showing all of the useful artistic and cultural potentials at the palaces and their surroundings. Theoretically, the palaces can be said to have deconstructed the concept of presenting the Bali tourist performing arts into a new one, that is, "the newly-packaged Bali tourist performing arts".

  13. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records.

    Science.gov (United States)

    MacRae, J; Darlow, B; McBain, L; Jones, O; Stubbe, M; Turner, N; Dowell, A

    2015-08-21

    To develop a natural language processing software inference algorithm to classify the content of primary care consultations using electronic health record Big Data, and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care. Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate the clinical narrative entered as free text, the diagnostic (Read) codes created and the medications prescribed on the day of the consultation. Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between 1 January 2008 and 31 December 2013 for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three 'gold standard' sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm. Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate the judgements of expert clinicians within the 1200-record gold standard validation set. The algorithm was able to identify respiratory consultations in the 1200-record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1
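The indices reported above (sensitivity, specificity, positive predictive value, F-measure) follow directly from confusion-matrix counts; the toy labels below are invented for illustration:

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV and F-measure for a binary classifier
    (here 1 = respiratory, 0 = non-respiratory)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    f_measure = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "f": f_measure}

# 10 consultations: 4 truly respiratory; the classifier finds 3 of them
# plus 1 false alarm.
m = classification_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                           [1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
```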

  14. Efficacy of Intra-articular Injection of a Newly Developed Plasma Rich in Growth Factor (PRGF) Versus Hyaluronic Acid on Pain and Function of Patients with Knee Osteoarthritis: A Single-Blinded Randomized Clinical Trial.

    Science.gov (United States)

    Raeissadat, Seyed Ahmad; Rayegani, Seyed Mansoor; Ahangar, Azadeh Gharooee; Abadi, Porya Hassan; Mojgani, Parviz; Ahangar, Omid Gharooi

    2017-01-01

    Knee osteoarthritis is the most common joint disease. We aimed to compare the efficacy and safety of intra-articular injection of a newly developed plasma rich in growth factor (PRGF) versus hyaluronic acid (HA) on pain and function of patients with knee osteoarthritis. In this single-blinded randomized clinical trial, patients with symptomatic osteoarthritis of the knee were assigned to receive 2 intra-articular injections of our newly developed PRGF in 3 weeks or 3 weekly injections of HA. Our primary outcome was the mean change from baseline until 2 and 6 months post intervention in scores of the visual analog scale, Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and Lequesne index. We used repeated-measures analysis of variance for statistical testing. A total of 69 patients entered the final analysis. The mean age of patients was 58.2 ± 7.41 years and 81.2% were women. In particular, the total WOMAC index decreased from 42.9 ± 13.51 to 26.8 ± 13.45 and 24.4 ± 16.54 at 2 and 6 months in the newly developed PRGF group (within subjects P = .001), and from 38.8 ± 12.62 to 27.8 ± 11.01 and 27.4 ± 11.38 at 2 and 6 months in the HA group (within subjects P = .001), respectively (between subjects P = .631). There was no significant difference between the PRGF and HA groups in patients' satisfaction and minor complications of injection, whereas patients in the HA group reported significantly lower injection-induced pain. At 6 months of follow-up, our newly developed PRGF and HA are both effective options to decrease pain and improve function in patients with symptomatic mild to moderate knee osteoarthritis.

  15. Development of embedded real-time and high-speed vision platform

    Science.gov (United States)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, a personal computer (PC), whose large size makes it unsuitable for compact systems, is an indispensable component for human-computer interaction in traditional high-speed vision platforms. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed for implementing image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. Hence, ER-HVP Vision, with a size of 320 mm x 250 mm x 87 mm, offers this capability in a much more compact form. Experimental results are also given to indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels under the operation of this newly developed vision platform are feasible.

  16. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  17. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm

    NARCIS (Netherlands)

    Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.

    2008-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform
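
    The Bartels-Stewart algorithm solves the Sylvester equation AX + XB = C. As a minimal illustration of the equation itself (not of the Schur-decomposition-based Bartels-Stewart procedure, which reaches the same result in O(n^3)), one can solve the equivalent Kronecker-product linear system; `solve_sylvester_dense` is a hypothetical helper name for this sketch:

```python
import numpy as np

def solve_sylvester_dense(A, B, C):
    """Solve A X + X B = C via the Kronecker-product linear system.

    Uses vec(AX) = (I kron A) vec(X) and vec(XB) = (B^T kron I) vec(X)
    with column-major vec. Bartels-Stewart reaches the same X far more
    cheaply via Schur decompositions; this dense solve is illustrative.
    """
    m, n = C.shape
    K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape((m, n), order="F")
```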

  18. Iterative algorithms for large sparse linear systems on parallel computers

    Science.gov (United States)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
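
    Jacobi iteration is the textbook example of a linear stationary iterative method whose updates are mutually independent and therefore map naturally onto array architectures. A vectorized sketch on the 1D Poisson model problem (the model problem is an illustrative assumption, not taken from the abstract):

```python
import numpy as np

def jacobi_poisson_1d(f, h, iters=5000):
    """Jacobi iteration for -u'' = f on [0, 1] with u(0) = u(1) = 0.

    Every unknown is updated only from the previous iterate, so all
    updates can execute simultaneously -- the property exploited by
    parallel stationary iterative solvers on array architectures.
    """
    u = np.zeros_like(f)
    for _ in range(iters):
        u_new = np.empty_like(u)
        u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u_new[0] = u_new[-1] = 0.0  # Dirichlet boundary conditions
        u = u_new
    return u
```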

  19. [Bioethical analysis of the use of newly dead patients in medical learning].

    Science.gov (United States)

    Gomes, Andréia Patrícia; Rego, Sergio; Palácios, Marisa; Siqueira-Batista, Rodrigo

    2010-01-01

    The purpose of this article is to discuss bioethics and the use of cadavers, based on a critical review of the literature. The review surveyed articles published between 1977 and 2007 in the 'Biblioteca Virtual de Saúde', PubMed, and SciELO databases, using the keywords: newly deceased patients, newly dead patients, and simulators. This was complemented by a critical evaluation of books published in the areas of ethics and bioethics. The possibility of developing such learning without the orientation of a supervisor is doubtful. The use of newly dead patients for learning invasive procedures is very frequent and seldom admitted. These procedures are usually carried out secretly, without the knowledge and consent of the family, and their ethical aspects are not discussed during practical medical education. It is essential that the ethics of the use of the recently deceased become a necessary content of graduate education. Performance of these procedures by students should always be authorized by family members. Simulators meet the requirements of training. Discussions about the ethical and bioethical aspects cannot be separated from practical considerations during the students' learning time.

  20. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  1. Demonstration of glucose-6-phosphate dehydrogenase in rat Kupffer cells by a newly-developed ultrastructural enzyme-cytochemistry

    Directory of Open Access Journals (Sweden)

    S Matsubara

    2009-06-01

    Full Text Available Although various tissue macrophages possess high glucose-6-phosphate dehydrogenase (G6PD) activity, which is reported to be closely associated with their phagocytotic/bactericidal function, the fine subcellular localization of this enzyme in liver resident macrophages (Kupffer cells) has not been determined. We have investigated the subcellular localization of G6PD in Kupffer cells in rat liver, using a newly developed enzyme-cytochemical (copper-ferrocyanide) method. Electron-dense precipitates indicating G6PD activity were clearly visible in the cytoplasm and on the cytosolic side of the endoplasmic reticulum of Kupffer cells. Cytochemical controls ensured specific detection of the enzymatic activity. Rat Kupffer cells abundantly possessed enzyme-cytochemically detectable G6PD activity. Kupffer cell G6PD may play a role in liver defense by delivering NADPH to NADPH-dependent enzymes. G6PD enzyme-cytochemistry may be a useful tool for the study of Kupffer cell functions.

  2. R/D and implement of temper bead welding as newly developed maintenance technique in nuclear power plant

    International Nuclear Information System (INIS)

    Hirano, Shinro; Sera, Takehiko; Chigusa, Naoki; Okimura, Koji; Nishimoto, Kazutoshi

    2011-01-01

    The Japanese government has recently adopted a policy of increasing the capacity factor of existing nuclear power plants in order to reduce CO2 emissions. Numerous preventive measures have been taken in nuclear power plants to minimize the risk of unexpected long shutdowns. Newly developed mitigation measures and repair methods need to be qualified against regulatory standards before they are implemented in nuclear power plants. The qualification process must comply with regulatory standards, and it can take considerable time to go through each of the required steps. This paper describes one such case, ambient temper bead welding, and clarifies the issues that need to be resolved in the qualification process. For this method, which is not yet prescribed in regulatory standards, qualification was provisionally completed through confirmation testing by JAPEIC and RNP and the issuance of a no-action letter. At present, this qualification applies only to a limited area, so a generalized qualification process needs to be established. (author)

  3. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
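
    For a linear system y' = Jy, the Nth-order algebraic dynamics step reduces to an order-N truncated Taylor (matrix-exponential) update. The harmonic oscillator below is an illustrative stand-in for the paper's 12 test models; at high order the step preserves the oscillator's energy to near machine precision, the kind of fidelity the comparison reports:

```python
import numpy as np

def taylor_step(y, J, h, order):
    """One Nth-order Taylor-series step for the linear system y' = J y."""
    term = y.copy()
    out = y.copy()
    for k in range(1, order + 1):
        term = (h / k) * (J @ term)   # h^k J^k y / k!
        out = out + term
    return out

def integrate(y0, J, h, steps, order):
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        y = taylor_step(y, J, h, order)
    return y
```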

  5. Description of ALARMA: the alarm algorithm developed for the Nuclear Car Wash

    International Nuclear Information System (INIS)

    Luu, T; Biltoft, P; Church, J; Descalle, M; Hall, J; Manatt, D; Mauger, J; Norman, E; Petersen, D; Pruet, J; Prussin, S; Slaughter, D

    2006-01-01

    The goal of any alarm algorithm should be to provide the tools needed to derive confidence limits on whether fissile material is present in a cargo container. It should be able to extract these limits from (usually) noisy and/or weak data while maintaining a false alarm rate (FAR) that is economically suitable for port operations. It should also be able to perform its analysis within a reasonably short amount of time (i.e., ∼ seconds). To achieve this, it is essential that the algorithm be able to identify and subtract any interference signature that might otherwise be confused with a fissile signature. Lastly, the algorithm itself should be intuitive and user-friendly, so that port operators with little or no experience with detection algorithms may use it with relative ease. In support of the Nuclear Car Wash project at Lawrence Livermore National Laboratory, we have developed an alarm algorithm that satisfies the above requirements. The description of this alarm algorithm, dubbed ALARMA, is the purpose of this technical report. The experimental setup of the Nuclear Car Wash has been well documented [1, 2, 3]. The presence of fissile material is inferred by examining the β-delayed gamma spectrum induced after a brief neutron irradiation of cargo, particularly in the high-energy region above approximately 2.5 MeV, where naturally occurring gamma rays are virtually non-existent. Thermal-neutron-induced fission of 235U and 239Pu, on the other hand, leaves a unique β-delayed spectrum [4]. This spectrum comes from decays of fission products having half-lives as long as 30 seconds, many of which have high Q-values. Since high-energy photons penetrate matter more freely, it is natural to look for unique fissile signatures in this energy region after neutron irradiation. The goal of this interrogation procedure is a 95% success rate of detection of as little as 5 kilograms of fissile material while retaining at most a 0.1% false alarm rate.

  6. A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows

    International Nuclear Information System (INIS)

    Cardwell, Nicholas D; Vlachos, Pavlos P; Thole, Karen A

    2011-01-01

    Multiphase flows (MPFs) offer a rich area of fundamental study with many practical applications. Examples of such flows range from the ingestion of foreign particulates in gas turbines to the transport of particles within the human body. Experimental investigation of MPFs, however, is challenging and requires techniques that simultaneously resolve both the carrier and discrete phases present in the flowfield. This paper presents a new multi-parametric particle-pairing algorithm for particle tracking velocimetry (MP3-PTV) in MPFs. MP3-PTV improves upon previous particle tracking algorithms by employing a novel variable pair-matching algorithm which utilizes displacement preconditioning in combination with estimated particle size and intensity to match particle pairs between successive images more effectively and accurately. To improve the method's efficiency, a new particle identification and segmentation routine was also developed. Validation of the new method was initially performed on two artificial data sets: a traditional single-phase flow data set published by the Visualization Society of Japan (VSJ) and an in-house-generated MPF data set having a bi-modal distribution of particle diameters. Metrics of measurement yield, reliability, and overall tracking efficiency were used for method comparison. On the VSJ data set, the newly presented segmentation routine delivered a twofold improvement in identifying particles when compared to other published methods. For the simulated MPF data set, the measurement efficiency of the carrier phase improved from 9% to 41% for MP3-PTV as compared to a traditional hybrid PTV. When employed on experimental data of a gas–solid flow, MP3-PTV effectively identified the two particle populations and reported a vector efficiency and velocity measurement error comparable to measurements for the single-phase flow images. Simultaneous measurement of the dispersed particle and the carrier flowfield velocities allowed for the calculation of
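
    The pair-matching idea can be sketched as a cost function that combines a displacement-preconditioned position prediction with particle-size and intensity differences, followed by greedy assignment. The weights, cutoff, and greedy strategy below are illustrative assumptions, not the published MP3-PTV implementation:

```python
import numpy as np

def match_particles(p1, p2, predicted_disp,
                    w_pos=1.0, w_size=0.5, w_int=0.5, max_cost=5.0):
    """Greedy multi-parametric pairing of particles between two frames.

    p1, p2: arrays of rows (x, y, diameter, intensity).
    predicted_disp: (dx, dy) displacement preconditioner, e.g. from a
    coarse correlation pass (an assumption of this sketch).
    """
    pred = p1[:, :2] + np.asarray(predicted_disp, dtype=float)
    cost = (w_pos * np.linalg.norm(pred[:, None, :] - p2[None, :, :2], axis=2)
            + w_size * np.abs(p1[:, None, 2] - p2[None, :, 2])
            + w_int * np.abs(p1[:, None, 3] - p2[None, :, 3]))
    pairs, used1, used2 = [], set(), set()
    for idx in np.argsort(cost, axis=None):   # cheapest candidates first
        i, j = divmod(int(idx), cost.shape[1])
        if i in used1 or j in used2 or cost[i, j] > max_cost:
            continue
        pairs.append((i, j))
        used1.add(i)
        used2.add(j)
    return pairs
```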

  7. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM) using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula- and optic-disc-centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
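
    The reported gains are expressed as AUC increases. For concreteness, AUC can be computed from classifier scores with the rank-sum (Mann-Whitney) identity; this is the generic metric only, not the study's classification pipeline:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.

    Equals the probability that a random positive case is scored above
    a random negative case; ties receive averaged ranks.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):       # average ranks over ties
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```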

  8. Algorithm Development for Multi-Energy SXR based Electron Temperature Profile Reconstruction

    Science.gov (United States)

    Clayton, D. J.; Tritz, K.; Finkenthal, M.; Kumar, D.; Stutman, D.

    2012-10-01

    New techniques utilizing computational tools such as neural networks and genetic algorithms are being developed to infer plasma electron temperature profiles on fast time scales (> 10 kHz) from multi-energy soft-x-ray (ME-SXR) diagnostics. Traditionally, a two-foil SXR technique, using the ratio of filtered continuum emission measured by two SXR detectors, has been employed on fusion devices as an indirect method of measuring electron temperature. However, these measurements can be susceptible to large errors due to uncertainties in time-evolving impurity density profiles, leading to unreliable temperature measurements. To correct this problem, measurements using ME-SXR diagnostics, which use three or more filtered SXR arrays to distinguish line and continuum emission from various impurities, in conjunction with constraints from spectroscopic diagnostics, can be used to account for unknown or time evolving impurity profiles [K. Tritz et al, Bull. Am. Phys. Soc. Vol. 56, No. 12 (2011), PP9.00067]. On NSTX, ME-SXR diagnostics can be used for fast (10-100 kHz) temperature profile measurements, using a Thomson scattering diagnostic (60 Hz) for periodic normalization. The use of more advanced algorithms, such as neural network processing, can decouple the reconstruction of the temperature profile from spectral modeling.
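
    The two-foil ratio technique mentioned above can be sketched with an idealized continuum model in which the emissivity varies as exp(-E/Te): the ratio of signals behind a thick and a thin filter is monotonic in Te and can be inverted by table lookup. The step-function filters, energy range, and neglect of line emission (the very problem ME-SXR addresses) are all assumptions of this sketch:

```python
import numpy as np

# Energy grid (keV) and idealized step-function filter transmissions.
E = np.linspace(0.1, 20.0, 2000)
dE = E[1] - E[0]
T_THIN = (E > 1.0).astype(float)
T_THICK = (E > 3.0).astype(float)

def signal(te, transmission):
    """Filtered continuum signal for electron temperature te (keV)."""
    return float(np.sum(transmission * np.exp(-E / te)) * dE)

# Precompute the monotonic ratio-vs-Te lookup table.
TE_GRID = np.linspace(0.3, 5.0, 200)
RATIO_GRID = np.array([signal(t, T_THICK) / signal(t, T_THIN)
                       for t in TE_GRID])

def infer_te(s_thick, s_thin):
    """Invert the thick/thin signal ratio to a temperature by interpolation."""
    return float(np.interp(s_thick / s_thin, RATIO_GRID, TE_GRID))
```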

  9. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete statement of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
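
    As a reference point for what the automatically designed operators compete against, a minimal standard differential evolution (DE/rand/1/bin, the baseline named in the abstract) on a benchmark sphere function might look like this; all parameter values are conventional defaults, not the paper's settings:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9,
                           gens=200, seed=0):
    """Standard DE/rand/1/bin minimization of f within box bounds."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            cross[rng.integers(dim)] = True                # force one gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                               # greedy selection
                pop[i], fit[i] = trial, ft
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```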

  10. 10-GHz return-to-zero pulse source tunable in wavelength with a single- or multiwavelength output based on four-wave mixing in a newly developed highly nonlinear fiber

    DEFF Research Database (Denmark)

    Clausen, A. T.; Oxenlowe, L.; Peucheret, Christophe

    2001-01-01

    In this letter, a novel scheme for a wavelength-tunable pulse source (WTPS) is proposed and characterized. It is based on four-wave mixing (FWM) in a newly developed highly nonlinear fiber between a return-to-zero (RZ) pulsed signal at a fixed wavelength and a continuous wave probe tunable...

  11. PREVALENCE OF SLEEP DISORDERED BREATHING IN PATIENTS WITH NEWLY DIAGNOSED ACROMEGALY

    Directory of Open Access Journals (Sweden)

    U. A. Tsoy

    2014-01-01

    Full Text Available Background: Obstructive sleep disordered breathing, or obstructive sleep apnea (OSA), is the most common respiratory impairment in acromegaly and is associated with increased cardiovascular mortality. Aim: To study the frequency, features, and structure of sleep disordered breathing in patients with newly diagnosed acromegaly and to elucidate the factors influencing its development. Materials and methods: 38 patients (10 men, 28 women; median age 53 (28-76) years; median body mass index (BMI) 29 (19.9-44.3) kg/m²) with newly diagnosed acromegaly were recruited into the study. All subjects underwent full polysomnography (Embla N7000, Natus, USA, with Remlogica software, USA). Results: Sleep disordered breathing was found in 28 (73.7%) patients. OSA was revealed in all cases; in 11 (39.3%) subjects it was mixed. In 10 (35.7%) patients OSA was mild, in 8 (28.6%) moderate, and in 10 (35.7%) severe. BMI (p<0.01), disease duration (p=0.003), and insulin-like growth factor-1 (IGF-1) level (p=0.04) differed between patients without OSA and patients with moderate-to-severe OSA. No difference was found in sex (p=0.4), age (p=0.064), or growth hormone level (p=0.6). The frequency of arterial hypertension, diabetes mellitus, and other glucose metabolism impairments was the same in subjects without OSA and with moderate-to-severe OSA. Conclusion: All patients with newly diagnosed acromegaly should undergo polysomnography. BMI, disease duration, and IGF-1 level are significant risk factors for OSA development. The correlation of OSA with arterial hypertension and glucose metabolism impairments needs further investigation.

  12. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00372074; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1-accepted event, at up to a 100 kHz event rate and with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thereby reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component of the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...
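
    In software, the core operation of such a 2D pixel clustering stage, grouping 8-connected hit pixels and computing centroids, can be sketched as follows. This is a batch version for clarity (the FTK hardware processes a data stream), and the charge-weighted centroid is an assumption of this sketch:

```python
import numpy as np

def cluster_pixels(hits):
    """Group 8-connected pixel hits and return cluster centroids.

    hits: iterable of (col, row, charge) tuples. Returns one
    charge-weighted centroid (x, y) per connected cluster.
    """
    hits = list(hits)
    charge = {(c, r): q for c, r, q in hits}
    seen = set()
    centroids = []
    for c, r, _ in hits:
        if (c, r) in seen:
            continue
        stack, members = [(c, r)], []
        seen.add((c, r))
        while stack:  # flood fill one 8-connected cluster
            x, y = stack.pop()
            members.append((x, y))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nbr = (x + dx, y + dy)
                    if nbr in charge and nbr not in seen:
                        seen.add(nbr)
                        stack.append(nbr)
        qs = np.array([charge[m] for m in members], dtype=float)
        xy = np.array(members, dtype=float)
        cx, cy = (xy * qs[:, None]).sum(axis=0) / qs.sum()
        centroids.append((float(cx), float(cy)))
    return centroids
```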

  13. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade whose goal is to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1-accepted event, at up to a 100 kHz event rate and with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thereby reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component of the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...

  14. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    Science.gov (United States)

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost, and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhage. This paper focuses on the UWB radar imaging approach, and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step in any UWB radar imaging system, and the artifact removal algorithms considered to date have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm; these modifications are shown to achieve good localization accuracy and fewer false positives. The main contribution, however, is an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity.
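
    A common baseline for the artifact removal step, which the statistical method proposed here improves upon, is to subtract the channel-average trace: the strong antenna coupling and skin/skull interface reflection are nearly identical across antennas, while target responses are not. This baseline, not the paper's algorithm, is what the sketch shows:

```python
import numpy as np

def average_subtraction(traces):
    """Subtract the across-antenna mean trace from every channel.

    traces: (n_antennas, n_samples) array. The artifact common to all
    channels is estimated by the channel mean and removed; responses
    that differ across channels survive the subtraction.
    """
    return traces - traces.mean(axis=0, keepdims=True)
```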

  15. Application of a newly developed software program for image quality assessment in cone-beam computed tomography.

    Science.gov (United States)

    de Oliveira, Marcus Vinicius Linhares; Santos, António Carvalho; Paulo, Graciano; Campos, Paulo Sergio Flores; Santos, Joana

    2017-06-01

    The purpose of this study was to apply a newly developed free software program, at low cost and with minimal time, to evaluate the quality of dental and maxillofacial cone-beam computed tomography (CBCT) images. A polymethyl methacrylate (PMMA) phantom, CQP-IFBA, was scanned in 3 CBCT units with 7 protocols. A macro program was developed, using the free software ImageJ, to automatically evaluate the image quality parameters. The image quality evaluation was based on 8 parameters: uniformity, the signal-to-noise ratio (SNR), noise, the contrast-to-noise ratio (CNR), spatial resolution, the artifact index, geometric accuracy, and low-contrast resolution. The image uniformity and noise depended on the protocol that was applied. Regarding the CNR, high-density structures were more sensitive to the effect of scanning parameters. There were no significant differences between SNR and CNR in centered and peripheral objects. The geometric accuracy assessment showed that all the distance measurements were lower than the real values. Low-contrast resolution was influenced by the scanning parameters, and the 1-mm rod present in the phantom was not depicted in any of the 3 CBCT units. Smaller voxel sizes presented higher spatial resolution. There were no significant differences among the protocols regarding artifact presence. This software package provided a fast, low-cost, and feasible method for the evaluation of image quality parameters in CBCT.
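
    Two of the eight parameters, SNR and CNR, are simple region-of-interest statistics. A sketch using common CBCT phantom definitions (the paper's exact formulas, computed by its ImageJ macro, may differ):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a uniform region of interest."""
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_obj, roi_bg):
    """Contrast-to-noise ratio between an object ROI and the background."""
    return abs(roi_obj.mean() - roi_bg.mean()) / roi_bg.std(ddof=1)
```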

  16. Application of a newly developed software program for image quality assessment in cone-beam computed tomography

    International Nuclear Information System (INIS)

    De Oliveira, Marcus Vinicius Linhares; Campos, Paulo Sergio Flores; Paulo, Graciano; Santos, Antonio Carvalho; Santos, Joana

    2017-01-01

    The purpose of this study was to apply a newly developed free software program, at low cost and with minimal time, to evaluate the quality of dental and maxillofacial cone-beam computed tomography (CBCT) images. A polymethyl methacrylate (PMMA) phantom, CQP-IFBA, was scanned in 3 CBCT units with 7 protocols. A macro program was developed, using the free software ImageJ, to automatically evaluate the image quality parameters. The image quality evaluation was based on 8 parameters: uniformity, the signal-to-noise ratio (SNR), noise, the contrast-to-noise ratio (CNR), spatial resolution, the artifact index, geometric accuracy, and low-contrast resolution. The image uniformity and noise depended on the protocol that was applied. Regarding the CNR, high-density structures were more sensitive to the effect of scanning parameters. There were no significant differences between SNR and CNR in centered and peripheral objects. The geometric accuracy assessment showed that all the distance measurements were lower than the real values. Low-contrast resolution was influenced by the scanning parameters, and the 1-mm rod present in the phantom was not depicted in any of the 3 CBCT units. Smaller voxel sizes presented higher spatial resolution. There were no significant differences among the protocols regarding artifact presence. This software package provided a fast, low-cost, and feasible method for the evaluation of image quality parameters in CBCT

  17. Application of a newly developed software program for image quality assessment in cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    De Oliveira, Marcus Vinicius Linhares; Campos, Paulo Sergio Flores [Federal Institute of Bahia, Salvador (Brazil); Paulo, Graciano; Santos, Antonio Carvalho; Santos, Joana [Coimbra Health School, Polytechnic Institute of Coimbra, Coimbra (Portugal)

    2017-06-15

    The purpose of this study was to apply a newly developed free software program, at low cost and with minimal time, to evaluate the quality of dental and maxillofacial cone-beam computed tomography (CBCT) images. A polymethyl methacrylate (PMMA) phantom, CQP-IFBA, was scanned in 3 CBCT units with 7 protocols. A macro program was developed, using the free software ImageJ, to automatically evaluate the image quality parameters. The image quality evaluation was based on 8 parameters: uniformity, the signal-to-noise ratio (SNR), noise, the contrast-to-noise ratio (CNR), spatial resolution, the artifact index, geometric accuracy, and low-contrast resolution. The image uniformity and noise depended on the protocol that was applied. Regarding the CNR, high-density structures were more sensitive to the effect of scanning parameters. There were no significant differences between SNR and CNR in centered and peripheral objects. The geometric accuracy assessment showed that all the distance measurements were lower than the real values. Low-contrast resolution was influenced by the scanning parameters, and the 1-mm rod present in the phantom was not depicted in any of the 3 CBCT units. Smaller voxel sizes presented higher spatial resolution. There were no significant differences among the protocols regarding artifact presence. This software package provided a fast, low-cost, and feasible method for the evaluation of image quality parameters in CBCT.

  18. Newly Generated Liquid Waste Processing Alternatives Study, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Landman, William Henry; Bates, Steven Odum; Bonnema, Bruce Edward; Palmer, Stanley Leland; Podgorney, Anna Kristine; Walsh, Stephanie

    2002-09-01

    This report identifies and evaluates three options for treating newly generated liquid waste at the Idaho Nuclear Technology and Engineering Center of the Idaho National Engineering and Environmental Laboratory. The three options are: (a) treat the waste using processing facilities designed for treating sodium-bearing waste, (b) treat the waste using subcontractor-supplied mobile systems, or (c) treat the waste using a special facility designed and constructed for that purpose. In studying these options, engineers concluded that the best approach is to store the newly generated liquid waste until a sodium-bearing waste treatment facility is available and then to co-process the stored inventory of the newly generated waste with the sodium-bearing waste. After the sodium-bearing waste facility completes its mission, two paths are available. The newly generated liquid waste could be treated using the subcontractor-supplied system or the sodium-bearing waste facility or a portion of it. The final decision depends on the design of the sodium-bearing waste treatment facility, which will be completed in coming years.

  19. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    Science.gov (United States)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using a line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with a line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
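
    The two-waveband decision rule can be sketched directly: compute the 666 nm / 688 nm fluorescence ratio image and threshold it. The threshold value below is a placeholder, not the paper's calibrated decision value:

```python
import numpy as np

def fecal_mask(band_666, band_688, ratio_thresh=1.1, eps=1e-6):
    """Flag pixels with an elevated 666 nm / 688 nm fluorescence ratio.

    Chlorophyll makes clean leaves fluoresce strongly near 688 nm, so
    the ratio rises over fecal spots. The 1.1 threshold is a
    placeholder assumption, not the paper's calibrated value.
    """
    ratio = (np.asarray(band_666, dtype=float)
             / (np.asarray(band_688, dtype=float) + eps))
    return ratio > ratio_thresh
```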

  20. Chinese handwriting recognition an algorithmic perspective

    CERN Document Server

    Su, Tonghua

    2013-01-01

    This book provides an algorithmic perspective on the recent development of Chinese handwriting recognition. Two technically sound strategies, the segmentation-free and the integrated segmentation-recognition strategy, are investigated, with the primary focus on algorithms that have worked well in practice. Baseline systems are initially presented for these strategies and are subsequently expanded on and incrementally improved. The sophisticated algorithms covered include: 1) string sample expansion algorithms, which synthesize string samples from isolated characters or distort realistic string samples; 2) enhanced feature representation algorithms, e.g. enhanced four-plane features and Delta features; 3) novel learning algorithms, such as Perceptron learning with dynamic margin, MPE training, and distributed training; and lastly 4) ensemble algorithms, that is, combining the two strategies using both parallel and serial structures. All the while, the book moves from basic to advanced algorithms, helping ...

  1. A Comprehensive Training Data Set for the Development of Satellite-Based Volcanic Ash Detection Algorithms

    Science.gov (United States)

    Schmidl, Marius

    2017-04-01

    We present a comprehensive training data set covering a large range of atmospheric conditions, including disperse volcanic ash and desert dust layers. These data sets contain all information required for the development of volcanic ash detection algorithms based on artificial neural networks, which are urgently needed since volcanic ash in the airspace is a major concern of aviation safety authorities. Selected parts of the data are used to train the volcanic ash detection algorithm VADUGS. They contain atmospheric and surface-related quantities as well as the corresponding simulated satellite data for the channels in the infrared spectral range of the SEVIRI instrument on board MSG-2. To get realistic results, ECMWF, IASI-based, and GEOS-Chem data are used to calculate all parameters describing the environment, whereas the software package libRadtran is used to perform radiative transfer simulations returning the brightness temperatures for each atmospheric state. As optical properties are a prerequisite for radiative simulations accounting for aerosol layers, the development also included the computation of optical properties for a set of different aerosol types from different sources. A description of the developed software and the used methods is given, besides an overview of the resulting data sets.

  2. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
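Trajectory averaging replaces the last iterate of a stochastic approximation recursion with the running average of all iterates, which typically reduces variance. A minimal sketch on a toy Robbins-Monro problem (estimating a mean from noisy observations); the step-size schedule and target value are illustrative assumptions, not the paper's setting:

```python
import random

# Toy Robbins-Monro recursion solving E[X] - theta = 0, with a
# Polyak-Ruppert style trajectory average maintained alongside the iterates.

random.seed(0)
target = 3.0
theta = 0.0       # raw stochastic approximation iterate
theta_bar = 0.0   # running (trajectory) average of the iterates
n_iter = 20000
for n in range(1, n_iter + 1):
    x = target + random.gauss(0.0, 1.0)   # noisy observation of the target
    theta += (x - theta) / n**0.7         # slowly decaying step size a_n = n^-0.7
    theta_bar += (theta - theta_bar) / n  # online running mean of theta_1..theta_n
```

Both `theta` and `theta_bar` approach the root, but the averaged estimate fluctuates far less than the raw iterate, which is the efficiency gain the paper analyzes for SAMCMC.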

  3. Effect of reconstruction algorithm on image quality and identification of ground-glass opacities and partly solid nodules on low-dose thin-section CT: Experimental study using chest phantom

    International Nuclear Information System (INIS)

    Koyama, Hisanobu; Ohno, Yoshiharu; Kono, Atsushi A.; Kusaka, Akiko; Konishi, Minoru; Yoshii, Masaru; Sugimura, Kazuro

    2010-01-01

    Purpose: The purpose of this study was to assess the influence of reconstruction algorithm on identification and image quality of ground-glass opacities (GGOs) and partly solid nodules on low-dose thin-section CT. Materials and methods: A chest CT phantom including simulated GGOs and partly solid nodules was scanned with five different tube currents and reconstructed by using standard (A) and newly developed (B) high-resolution reconstruction algorithms, followed by visual assessment of the identification and image quality of GGOs and partly solid nodules by two chest radiologists. Inter-observer agreement, ROC analysis and ANOVA were performed to compare identification and image quality of each data set with those of the standard reference. The standard reference used 120 mA s in conjunction with reconstruction algorithm A. Results: Kappa values (κ) of overall identification and image qualities were substantial or almost perfect (0.60 < κ). Assessment of identification showed that the area under the curve of 25 mA s reconstructed with reconstruction algorithm A was significantly lower than that of the standard reference (p < 0.05), while assessment of image quality indicated that 50 mA s reconstructed with reconstruction algorithm A and 25 mA s reconstructed with both reconstruction algorithms were significantly lower than the standard reference (p < 0.05). Conclusion: Reconstruction algorithm may be an important factor for identification and image quality of ground-glass opacities and partly solid nodules on low-dose CT examination.

  4. Development of an algorithm for assessing the risk to food safety posed by a new animal disease.

    Science.gov (United States)

    Parker, E M; Jenson, I; Jordan, D; Ward, M P

    2012-05-01

    An algorithm was developed as a tool to rapidly assess the potential for a new or emerging disease of livestock to adversely affect humans via consumption or handling of meat product, so that the risks and uncertainties can be understood and appropriate risk management and communication implemented. An algorithm describing the sequence of events from occurrence of the disease in livestock, release of the causative agent from an infected animal, contamination of fresh meat and then possible adverse effects in humans following meat handling and consumption was created. A list of questions complements the algorithm to help the assessors address the issues of concern at each step of the decision pathway. The algorithm was refined and validated through consultation with a panel of experts and a review group of animal health and food safety policy advisors via five case studies of potential emerging diseases of cattle. Tasks for model validation included describing the path taken in the algorithm and stating an outcome. Twenty-nine per cent of the 62 experts commented on the model, and one-third of those responding also completed the tasks required for model validation. The feedback from the panel of experts and the review group was used to further develop the tool and remove redundancies and ambiguities. There was agreement in the pathways and assessments for diseases in which the causative agent was well understood (for example, bovine pneumonia due to Mycoplasma bovis). The stated pathways and assessments of other diseases (for example, bovine Johne's disease) were not as consistent. The framework helps to promote objectivity by requiring questions to be answered sequentially and providing the opportunity to record consensus or differences of opinion. Areas for discussion and future investigation are highlighted by the points of diversion on the pathway taken by different assessors. © 2011 Blackwell Verlag GmbH.

  5. Fast algorithms for transport models. Final report

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1994-01-01

    This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial-value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).
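The O(N) two-cell inversion mentioned above relies on the coupling between angles being rank-one. A generic way to exploit such structure is the Sherman-Morrison identity for (D + u vᵀ) x = b with diagonal D; the sketch below illustrates only that identity, not the actual MLD transport operator:

```python
import numpy as np

# Sherman-Morrison solve of (diag(d) + outer(u, v)) x = b in O(N).
# This is a generic illustration of why a rank-one coupling between N angles
# admits an O(N) inversion; the matrix here is random, not the MLD operator.

def solve_rank_one(d, u, v, b):
    """Solve (diag(d) + u v^T) x = b using only O(N) operations."""
    Dinv_b = b / d
    Dinv_u = u / d
    correction = (v @ Dinv_b) / (1.0 + v @ Dinv_u)
    return Dinv_b - correction * Dinv_u

rng = np.random.default_rng(1)
N = 64
d = rng.uniform(1.0, 2.0, N)   # positive diagonal
u = rng.uniform(0.1, 1.0, N)   # positive rank-one factors keep the
v = rng.uniform(0.1, 1.0, N)   # denominator 1 + v^T D^-1 u away from zero
b = rng.normal(size=N)
x = solve_rank_one(d, u, v, b)
```

A direct dense solve would cost O(N³); the rank-one structure reduces each block inversion to a few vector operations, which is what makes the two-cell relaxation cheap per angle.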

  6. A genomic audit of newly-adopted autosomal STRs for forensic identification.

    Science.gov (United States)

    Phillips, C

    2017-07-01

    In preparation for the growing use of massively parallel sequencing (MPS) technology to genotype forensic STRs, a comprehensive genomic audit of 73 STRs was made in 2016 [Parson et al., Forensic Sci. Int. Genet. 22, 54-63]. The loci examined included miniSTRs that were not in widespread use, but had been incorporated into MPS kits or were under consideration for this purpose. The current study expands the genomic analysis of autosomal STRs that are not commonly used, to include the full set of developed miniSTRs and an additional 24 STRs, most of which have been recently included in several supplementary forensic multiplex kits for capillary electrophoresis. The genomic audit of these 47 newly-adopted STRs examined the linkage status of new loci on the same chromosome as established forensic STRs; analyzed world-wide population variation of the newly-adopted STRs using published data; assessed their forensic informativeness; and compiled the sequence characteristics, repeat structures and flanking regions of each STR. A further 44 autosomal STRs developed for forensic analyses but not incorporated into commercial kits, are also briefly described. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Developing a NIR multispectral imaging for prediction and visualization of peanut protein content using variable selection algorithms

    Science.gov (United States)

    Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei

    2018-01-01

    The feasibility of developing a multispectral imaging method using important wavelengths from hyperspectral images selected by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods for modeling and predicting protein content in peanut kernel was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, a root mean square error of prediction (RMSEP) of 0.108 and a residual predictive deviation (RPD) of 2.32. Based on the obtained best model and image processing algorithms, the distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible and promising for determination of the protein content in peanut kernels.
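The calibration step above maps reflectance at a few selected wavebands to a measured protein value. As a dependency-free sketch, ordinary least squares stands in for PLSR here (PLSR is preferable when bands are collinear); the spectra, coefficients and noise level are synthetic assumptions, not the paper's data:

```python
import numpy as np

# Least-squares calibration of "protein content" against 8 selected bands on
# synthetic spectra, mimicking the structure (not the values) of the RC-PLSR
# model described above.

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 8                  # e.g. 8 key wavelengths
X = rng.uniform(0.0, 1.0, (n_samples, n_bands))
true_coef = rng.normal(size=n_bands)
y = X @ true_coef + 25.0 + rng.normal(0.0, 0.05, n_samples)  # protein %, ~23-28

# Fit with an explicit intercept column.
A = np.hstack([X, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
rmsep = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Applying the fitted coefficients pixel-by-pixel to the selected band images is what produces the protein distribution maps mentioned in the abstract.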

  8. Game-based programming towards developing algorithmic thinking skills in primary education

    Directory of Open Access Journals (Sweden)

    Hariklia Tsalapatas

    2012-06-01

    Full Text Available This paper presents cMinds, a learning intervention that deploys game-based visual programming towards building analytical, computational, and critical thinking skills in primary education. The proposed learning method exploits the structured nature of programming, which is inherently logical and transcends cultural barriers, towards inclusive learning that exposes learners to algorithmic thinking. A visual programming environment, entitled ‘cMinds Learning Suite’, has been developed, aimed at classroom use. Feedback from the deployment of the learning methods and tools in classrooms in several European countries demonstrates elevated learner motivation for engaging in logical learning activities, fostering of creativity and an entrepreneurial spirit, and promotion of problem-solving capacity.

  9. Development of test algorithm for semiconductor package with defects by using probabilistic neural network

    International Nuclear Information System (INIS)

    Kim, Jae Yeol; Sim, Jae Gi; Ko, Myoung Soo; Kim, Chang Hyun; Kim, Hun Cho

    2001-01-01

    In this study, researchers developed an algorithm for estimating artificial defects in semiconductor packages and applied it using pattern recognition technology. For this purpose, the estimation algorithm was implemented in software written in MATLAB. The software consists of several procedures, including ultrasonic image acquisition, equalization filtering, a Self-Organizing Map (SOM) and a Probabilistic Neural Network (PNN). SOM and PNN are neural network methods, and the pattern recognition technology was applied to classify three kinds of defect patterns in semiconductor packages. The study estimates the probability density function from a learning sample and presents a method for determining it automatically. The PNN can distinguish flaws that are very difficult to discriminate and, because it processes data in parallel, we confirm that it is a very efficient classifier when applied to large amounts of real process data.
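A PNN is essentially a Parzen-window classifier: each class density is estimated by Gaussian kernels centred on that class's training patterns, and a test pattern goes to the class with the largest estimated density. A minimal sketch with toy 2-D "defect pattern" features and an assumed smoothing parameter sigma (not the study's data or settings):

```python
import numpy as np

# Minimal probabilistic neural network (Parzen-window) classifier sketch.
# Training data, feature space and sigma are toy assumptions for illustration.

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Assign x to the class with the largest Gaussian kernel density estimate."""
    scores = {}
    for label in np.unique(train_y):
        pts = train_X[train_y == label]
        d2 = np.sum((pts - x) ** 2, axis=1)          # squared distances to class patterns
        scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Three toy "defect pattern" classes in a 2-D feature space.
train_X = np.array([[0.0, 0.0], [0.1, 0.1],
                    [2.0, 2.0], [2.1, 1.9],
                    [0.0, 2.0], [0.1, 2.1]])
train_y = np.array([0, 0, 1, 1, 2, 2])
label = pnn_classify(np.array([1.9, 2.05]), train_X, train_y)
```

The per-class kernel sums are independent of each other, which is the parallelism the abstract alludes to.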

  10. A Coulomb collision algorithm for weighted particle simulations

    Science.gov (United States)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
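The core of a binary collision step is an exactly momentum-conserving scatter: rotate the relative velocity of a particle pair in the centre-of-mass frame. The sketch below shows only that unweighted (Takizuka-Abe style) step for equal masses with isotropic scattering; the weighted-particle pairing statistics that are the paper's contribution are not reproduced:

```python
import numpy as np

# Momentum- and energy-conserving binary scatter for two equal-mass,
# equal-weight particles: the relative velocity is rotated to a random
# direction while the centre-of-mass velocity is left unchanged.
# Isotropic scattering is an assumption made for simplicity.

def binary_scatter(v1, v2, rng):
    v_cm = 0.5 * (v1 + v2)            # centre-of-mass velocity (equal masses)
    v_rel = v1 - v2
    speed = np.linalg.norm(v_rel)
    u = rng.normal(size=3)            # random direction uniform on the sphere
    u /= np.linalg.norm(u)
    v_rel_new = speed * u             # same relative speed -> energy conserved
    return v_cm + 0.5 * v_rel_new, v_cm - 0.5 * v_rel_new

rng = np.random.default_rng(42)
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
w1, w2 = binary_scatter(v1, v2, rng)
```

Because only the direction of the relative velocity changes, total momentum and kinetic energy are preserved to machine precision; the weighted-particle algorithm must achieve the same invariants with unequal statistical weights.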

  11. An advanced algorithm for deformation estimation in non-urban areas

    Science.gov (United States)

    Goel, Kanika; Adam, Nico

    2012-09-01

    This paper presents an advanced differential SAR interferometry stacking algorithm for high resolution deformation monitoring in non-urban areas with a focus on distributed scatterers (DSs). Techniques such as the Small Baseline Subset Algorithm (SBAS) have been proposed for processing DSs. SBAS makes use of small baseline differential interferogram subsets. Singular value decomposition (SVD), i.e. L2 norm minimization, is applied to link independent subsets separated by large baselines. However, the interferograms used in SBAS are multilooked using a rectangular window to reduce phase noise caused for instance by temporal decorrelation, resulting in a loss of resolution and the superposition of topography and deformation signals from different objects. Moreover, these have to be individually phase unwrapped and this can be especially difficult in natural terrains. An improved deformation estimation technique is presented here which exploits high resolution SAR data and is suitable for rural areas. The implemented method makes use of small baseline differential interferograms and incorporates object-adaptive spatial phase filtering and residual topography removal for an accurate phase and coherence estimation, while preserving the high resolution provided by modern satellites. This is followed by retrieval of deformation via the SBAS approach, wherein the phase inversion is performed using an L1 norm minimization which is more robust to the typical phase unwrapping errors encountered in non-urban areas. Meter-resolution TerraSAR-X data of an underground gas storage reservoir in Germany is used for demonstrating the effectiveness of this newly developed technique in rural areas.

  12. Capacity of non-invasive hepatic fibrosis algorithms to replace transient elastography to exclude cirrhosis in people with hepatitis C virus infection: A multi-centre observational study.

    Science.gov (United States)

    Kelly, Melissa Louise; Riordan, Stephen M; Bopage, Rohan; Lloyd, Andrew R; Post, Jeffrey John

    2018-01-01

    Achievement of the 2030 World Health Organisation (WHO) global hepatitis C virus (HCV) elimination targets will be underpinned by scale-up of testing and use of direct-acting antiviral treatments. In Australia, despite publicly funded testing and treatment, less than 15% of patients were treated in the first year of treatment access, highlighting the need for greater efficiency of health service delivery. To this end, non-invasive fibrosis algorithms were examined to reduce reliance on transient elastography (TE), which is currently utilised for the assessment of cirrhosis in most Australian clinical settings. This retrospective and prospective study, with derivation and validation cohorts, examined consecutive patients in a tertiary referral centre, a sexual health clinic, and a prison-based hepatitis program. The negative predictive value (NPV) of seven non-invasive algorithms were measured using published and newly derived cut-offs. The number of TEs avoided for each algorithm, or combination of algorithms, was determined. The 850 patients included 780 (92%) with HCV mono-infection, and 70 (8%) co-infected with HIV or hepatitis B. The mono-infected cohort included 612 men (79%), with an overall prevalence of cirrhosis of 16% (125/780). An 'APRI' algorithm cut-off of 1.0 had a 94% NPV (95%CI: 91-96%). Newly derived cut-offs of 'APRI' (0.49), 'FIB-4' (0.93) and 'GUCI' (0.5) algorithms each had NPVs of 99% (95%CI: 97-100%), allowing avoidance of TE in 40% (315/780), 40% (310/780) and 40% (298/749) respectively. When used in combination, NPV was retained and TE avoidance reached 54% (405/749), regardless of gender or co-infection. Non-invasive algorithms can reliably exclude cirrhosis in many patients, allowing improved efficiency of HCV assessment services in Australia and worldwide.
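Two of the scores evaluated above have widely published formulas, shown below; the example inputs are invented for illustration and are not patient data from the study (the GUCI formula is omitted here):

```python
# Standard non-invasive hepatic fibrosis scores, computed from routine
# laboratory values. Formulas for APRI and FIB-4 are widely published;
# the numeric inputs below are hypothetical.

def apri(ast, ast_uln, platelets):
    """APRI = (AST / upper limit of normal) x 100 / platelet count (10^9/L)."""
    return (ast / ast_uln) * 100.0 / platelets

def fib4(age, ast, alt, platelets):
    """FIB-4 = age x AST / (platelet count (10^9/L) x sqrt(ALT))."""
    return age * ast / (platelets * alt ** 0.5)

# Hypothetical patient falling below the newly derived APRI cut-off of 0.49,
# i.e. one in whom cirrhosis could be excluded without transient elastography.
apri_score = apri(ast=30.0, ast_uln=40.0, platelets=250.0)        # -> 0.3
fib4_score = fib4(age=45, ast=30.0, alt=36.0, platelets=250.0)    # -> 0.9
```

With the study's cut-offs (APRI < 0.49, FIB-4 < 0.93), this hypothetical patient would be triaged away from TE by either score.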

  13. Development of a deep convolutional neural network to predict grading of canine meningiomas from magnetic resonance images.

    Science.gov (United States)

    Banzato, T; Cherubini, G B; Atzori, M; Zotti, A

    2018-05-01

    An established deep neural network (DNN) based on transfer learning and a newly designed DNN were tested to predict the grade of meningiomas from magnetic resonance (MR) images in dogs and to determine the classification accuracy achieved using pre- and post-contrast T1-weighted (T1W), and T2-weighted (T2W) MR images. The images were randomly assigned to a training set, a validation set and a test set, comprising 60%, 10% and 30% of images, respectively. The combination of DNN and MR sequence displaying the highest discriminating accuracy was used to develop an image classifier to predict the grading of new cases. The algorithm based on transfer learning using the established DNN did not provide satisfactory results, whereas the newly designed DNN had high classification accuracy. On the basis of classification accuracy, an image classifier built on the newly designed DNN using post-contrast T1W images was developed. This image classifier correctly predicted the grading of 8 out of 10 images not included in the data set. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. A formal analysis of a dynamic distributed spanning tree algorithm

    NARCIS (Netherlands)

    Mooij, A.J.; Wesselink, J.W.

    2003-01-01

    Abstract. We analyze the spanning tree algorithm in the IEEE 1394.1 draft standard, whose correctness has not previously been proved. This algorithm is a fully-dynamic distributed graph algorithm, which, in general, is hard to develop. The approach we use is to formally develop an algorithm that is

  15. Development of accurate standardized algorithms for conversion between SRP grid coordinates and latitude/longitude

    International Nuclear Information System (INIS)

    Looney, B.B.; Marsh, J.T. Jr.; Hayes, D.W.

    1987-01-01

    The Savannah River Plant (SRP) is a nuclear production facility operated by E.I. du Pont de Nemours and Co. for the United States Department of Energy. SRP is located along the Savannah River in South Carolina. Construction of SRP began in the early 1950s. At the time the plant was built, a local coordinate system was developed to assist in defining the locations of plant facilities. Over the years, large quantities of data have been developed using "SRP Coordinates." These data include: building locations, plant boundaries, environmental sampling locations, waste disposal area locations, and a wide range of other geographical information. Currently, staff persons at SRP are organizing these data into automated information systems to allow more rapid, more robust and higher quality interpretation, interchange and presentation of spatial data. A key element in this process is the ability to incorporate outside data bases into the systems, as well as to share SRP data with interested organizations outside of SRP. Most geographical information outside of SRP is organized using latitude and longitude. Thus, straightforward, accurate and consistent algorithms to convert SRP Coordinates to/from latitude and longitude are needed. Appropriate algorithms are presented in this document.
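Conversions between a local plant grid and a geodetic system typically compose a scale, a rotation, and an offset, followed by a map projection. The sketch below shows only the planar part with invented parameters; the actual SRP grid definition is not given in the abstract, and a full conversion to latitude/longitude would additionally require the projection step:

```python
import math

# Illustrative planar conversion between a local grid (x, y) and
# easting/northing (e, n). SCALE, THETA, E0 and N0 are invented values,
# not the SRP parameters.

SCALE = 0.3048                 # grid units in feet -> metres (assumption)
THETA = math.radians(30.0)     # rotation of grid north from true north (assumption)
E0, N0 = 440000.0, 3680000.0   # easting/northing of the grid origin (assumption)

def grid_to_en(x, y):
    """Local grid coordinates -> easting/northing via scale, rotation, offset."""
    e = E0 + SCALE * (x * math.cos(THETA) - y * math.sin(THETA))
    n = N0 + SCALE * (x * math.sin(THETA) + y * math.cos(THETA))
    return e, n

def en_to_grid(e, n):
    """Exact inverse: undo the offset and scale, then rotate back."""
    de, dn = (e - E0) / SCALE, (n - N0) / SCALE
    x = de * math.cos(THETA) + dn * math.sin(THETA)
    y = -de * math.sin(THETA) + dn * math.cos(THETA)
    return x, y
```

Standardizing both directions of the transform, as the document's algorithms do, guarantees that round-tripping a location reproduces the original coordinates.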

  16. DEVELOPMENT OF A PEDESTRIAN INDOOR NAVIGATION SYSTEM BASED ON MULTI-SENSOR FUSION AND FUZZY LOGIC ESTIMATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Y. C. Lai

    2015-05-01

    Full Text Available This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning system, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its adopted sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on the micro electro-mechanical system (MEMS). There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. Consequently, the estimated walking amount and strength per step are taken into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required information for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation, which is run on a self-developed APP. 
Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the APP of the proposed navigation system.
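The dead reckoning update described above advances the position by the estimated step length along the current heading at each detected step. A minimal sketch; the step lengths and headings below are toy values, not output of the fuzzy estimator:

```python
import math

# Minimal pedestrian dead-reckoning update: each detected step moves the
# position by its estimated length along the current heading.
# Convention (an assumption): heading 0 rad = +y ("north"), pi/2 = +x ("east").

def dead_reckon(start, steps):
    """start: (x, y) in metres; steps: iterable of (step_length_m, heading_rad)."""
    x, y = start
    for length, heading in steps:
        x += length * math.sin(heading)
        y += length * math.cos(heading)
    return x, y

# Toy walk: two 0.7 m steps north, then two 0.7 m steps east.
pos = dead_reckon((0.0, 0.0), [(0.7, 0.0), (0.7, 0.0),
                               (0.7, math.pi / 2), (0.7, math.pi / 2)])
```

Because each update compounds the previous one, heading and step-length errors accumulate without bound, which is exactly why the system adds a particle filter and a pre-loaded indoor map as corrections.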

  17. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    Science.gov (United States)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning system, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its adopted sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on the micro electro-mechanical system (MEMS). There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. Consequently, the estimated walking amount and strength per step are taken into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required information for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation, which is run on a self-developed APP. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the APP of the proposed navigation system to extend its

  18. Observation of the bone mineral density of newly formed bone using rabbits. Compared with newly formed bone around implants and cortical bone

    International Nuclear Information System (INIS)

    Nakada, Hiroshi; Numata, Yasuko; Sakae, Toshiro; Tamaki, Hiroyuki; Kato, Takao

    2009-01-01

    There have been many studies reporting that newly formed bone around implants is spongy bone. However, although the morphology is reported as being like spongy bone, it is difficult to discriminate whether the bone quality of newly formed bone appears similar to osteoid or cortical bone; therefore, evaluation of bone quality is required. The aims of this study were to measure the bone mineral density (BMD) values of newly formed bone around implants after 4, 8, 16, 24 and 48 weeks, to represent these values on three-dimensional color mapping (3Dmap), and to evaluate the change in bone quality associated with newly formed bone around implants. The animal experimental protocol of this study was approved by the Ethics Committee for Animal Experiments of our University. This experiment used 20 grit-blasted surface-treated implants (Ti-6Al-4V alloy: 3.1 mm in diameter and 30.0 mm in length). They were embedded into surgically created flaws in femurs of 20 New Zealand white rabbits (16 weeks old, male). The rabbits were sacrificed with an ear intravenous overdose of pentobarbital sodium under general anesthesia at each period, and the femurs were resected. We measured the BMD of newly formed bone around implants and of cortical bone using Micro-CT, and generated BMD distribution maps with 3Dmap (TRI/3D Bon BMD, Ratoc System Engineering). The BMD of cortical bone was 1,026.3±44.3 mg/cm3 at 4 weeks, 1,023.8±40.9 mg/cm3 at 8 weeks, 1,048.2±45.6 mg/cm3 at 16 weeks, 1,067.2±60.2 mg/cm3 at 24 weeks, and 1,069.3±50.7 mg/cm3 at 48 weeks after implantation, showing a non-significant increase at each period. The BMD of newly formed bone around implants was 296.8±25.6 mg/cm3 at 4 weeks, 525.0±72.4 mg/cm3 at 8 weeks, 691.2±26.0 mg/cm3 at 16 weeks, 776.9±27.7 mg/cm3 at 24 weeks, and 845.2±23.1 mg/cm3 at 48 weeks after implantation, showing a significant increase after each period. It was revealed that the color scale of newly formed bone was Low level at 4 weeks, and then it

  19. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    Science.gov (United States)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.
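Detecting microcalcifications amounts to finding small bright spots against a slowly varying background. A toy sketch of the model-based idea, subtracting a crude local background estimate and thresholding the residual; the window size and threshold factor are assumptions, and the paper's adaptive algorithm is considerably more sophisticated:

```python
import numpy as np

# Toy bright-spot detector: subtract a local box-mean background estimate
# and flag residuals exceeding a multiple of the residual standard deviation.
# Window size `win` and factor `k` are illustrative assumptions.

def detect_spots(img, win=5, k=5.0):
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    # crude local mean via an explicit sliding window (O(N * win^2); fine for a sketch)
    local_mean = np.array([
        [padded[i:i + win, j:j + win].mean() for j in range(img.shape[1])]
        for i in range(img.shape[0])
    ])
    residual = img - local_mean
    return residual > k * residual.std()

img = np.zeros((16, 16))
img[8, 8] = 10.0          # one simulated microcalcification
mask = detect_spots(img)
```

Dose reduction raises the noise floor, shrinking the margin between spot residuals and the threshold, which is one way to picture why detection rates in the study degrade as dose is reduced.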

  20. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    International Nuclear Information System (INIS)

    Treiber, O; Wanninger, F; Fuehr, H; Panzer, W; Regulla, D; Winkler, G

    2003-01-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  1. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    International Nuclear Information System (INIS)

    Liang, Zhong Wei; Wang, Yi Jun; Ye, Bang Yan; Brauwer, Richard Kars

    2012-01-01

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process.

  2. Three dimensional fuzzy influence analysis of fitting algorithms on integrated chip topographic modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Zhong Wei; Wang, Yi Jun [Guangzhou Univ., Guangzhou (China); Ye, Bang Yan [South China Univ. of Technology, Guangzhou (China); Brauwer, Richard Kars [Indian Institute of Technology, Kanpur (India)

    2012-10-15

    In inspecting the detailed performance results of surface precision modeling in different external parameter conditions, the integrated chip surfaces should be evaluated and assessed during topographic spatial modeling processes. The application of surface fitting algorithms exerts a considerable influence on topographic mathematical features. The influence mechanisms caused by different surface fitting algorithms on the integrated chip surface facilitate the quantitative analysis of different external parameter conditions. By extracting the coordinate information from the selected physical control points and using a set of precise spatial coordinate measuring apparatus, several typical surface fitting algorithms are used for constructing micro topographic models with the obtained point cloud. In computing for the newly proposed mathematical features on surface models, we construct the fuzzy evaluating data sequence and present a new three dimensional fuzzy quantitative evaluating method. Through this method, the value variation tendencies of topographic features can be clearly quantified. The fuzzy influence discipline among different surface fitting algorithms, topography spatial features, and the external science parameter conditions can be analyzed quantitatively and in detail. In addition, quantitative analysis can provide final conclusions on the inherent influence mechanism and internal mathematical relation in the performance results of different surface fitting algorithms, topographic spatial features, and their scientific parameter conditions in the case of surface micro modeling. The performance inspection of surface precision modeling will be facilitated and optimized as a new research idea for micro-surface reconstruction that will be monitored in a modeling process.

  3. Preliminary Development and Evaluation of Lightning Jump Algorithms for the Real-Time Detection of Severe Weather

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Previous studies have demonstrated that rapid increases in total lightning activity (intracloud + cloud-to-ground) are often observed tens of minutes in advance of the occurrence of severe weather at the ground. These rapid increases in lightning activity have been termed "lightning jumps." Herein, we document a positive correlation between lightning jumps and the manifestation of severe weather in thunderstorms occurring across the Tennessee Valley and Washington D.C. A total of 107 thunderstorms were examined in this study, with 69 of the 107 thunderstorms falling into the category of non-severe, and 38 into the category of severe. From the dataset of 69 isolated non-severe thunderstorms, an average peak 1 minute flash rate of 10 flashes/min was determined. A variety of severe thunderstorm types were examined for this study including an MCS, MCV, tornadic outer rainbands of tropical remnants, supercells, and pulse severe thunderstorms. Of the 107 thunderstorms, 85 (47 non-severe, 38 severe) from the Tennessee Valley and Washington D.C. were used to test six lightning jump algorithm configurations (Gatlin, Gatlin 45, 2(sigma), 3(sigma), Threshold 10, and Threshold 8). Performance metrics for each algorithm were then calculated, yielding encouraging results from the limited sample of 85 thunderstorms. The 2(sigma) lightning jump algorithm had a high probability of detection (POD; 87%), a modest false alarm rate (FAR; 33%), and a solid Heidke Skill Score (HSS; 0.75). A second and more simplistic lightning jump algorithm named the Threshold 8 lightning jump algorithm also shows promise, with a POD of 81% and a FAR of 41%. Average lead times to severe weather occurrence for these two algorithms were 23 minutes and 20 minutes, respectively. The overall goal of this study is to advance the development of an operationally-applicable jump algorithm that can be used with either total lightning observations made from the ground, or in the near future from space using the
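
The 2(sigma) configuration can be sketched as follows: flag a jump when the 1-min rate of change of the total flash rate exceeds the mean plus two standard deviations of its recent history, subject to an activation threshold. This is an illustrative reconstruction only; the window length and activation threshold below are assumptions, not the study's operational settings.

```python
from statistics import mean, stdev

def detect_lightning_jumps(flash_rates, window=5, activation=10.0):
    """Flag 'lightning jumps' in a 1-min total flash-rate series.

    A jump is flagged when the current rate of change (DFRDT) exceeds
    the mean + 2 sigma of the preceding `window` DFRDT values and the
    storm is active (flash rate above `activation` flashes/min).
    Returns indices into `flash_rates` where jumps occur.
    """
    # DFRDT: minute-to-minute change in the total flash rate
    dfrdt = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
    jumps = []
    for t in range(window, len(dfrdt)):
        history = dfrdt[t - window:t]
        threshold = mean(history) + 2 * stdev(history)
        if flash_rates[t + 1] >= activation and dfrdt[t] > threshold:
            jumps.append(t + 1)
    return jumps
```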

  4. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V/sup 4/3/
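
The accept/reject updatings the survey discusses all build on the basic Metropolis step: propose a local change and accept it with probability min(1, exp(-ΔS)). The sketch below is a generic single-site Metropolis sweep for a lattice of scalar site variables, not code for any particular gauge theory; the step size and the `action_diff` interface are assumptions for illustration.

```python
import math
import random

def metropolis_sweep(phi, action_diff, step=0.5, rng=None):
    """One Metropolis sweep over a lattice of site variables.

    `action_diff(phi, i, new)` must return the change in the action
    when site i is set to `new`. Each proposal is accepted with
    probability min(1, exp(-dS)). Returns the number of accepted moves.
    """
    rng = rng or random.Random()
    accepted = 0
    for i in range(len(phi)):
        new = phi[i] + rng.uniform(-step, step)   # local proposal
        dS = action_diff(phi, i, new)
        if dS <= 0 or rng.random() < math.exp(-dS):
            phi[i] = new                          # accept
            accepted += 1
    return accepted
```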

  5. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
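
One way to realize leader election under these restrictions (one-bit broadcasts; nodes sense only silence versus the arrival of one or more messages) can be sketched as a randomized protocol for an anonymous complete network. This is an illustrative reconstruction, not the paper's algorithm; the send probability and the withdrawal rule are assumptions.

```python
import random

def elect_leader(n, p=0.5, rng=None):
    """Simulate one-bit leader election in an anonymous, synchronous
    complete network.

    Each round, every still-active node broadcasts a one-bit signal
    with probability p. A node that stays silent while hearing a
    message withdraws; a node that broadcasts and hears silence from
    the others is the unique sender and becomes leader. Returns the
    (simulation-only) id of the leader and the number of rounds.
    """
    rng = rng or random.Random()
    active = set(range(n))
    rounds = 0
    while True:
        rounds += 1
        senders = {v for v in active if rng.random() < p}
        for v in list(active):
            heard = bool(senders - {v})       # messages from other nodes
            if v in senders and not heard:
                return v, rounds              # unique sender wins
            if v not in senders and heard:
                active.discard(v)             # silent listener withdraws
```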

  6. Newly developed central diabetes insipidus following kidney transplantation: a case report.

    Science.gov (United States)

    Kim, K M; Kim, S M; Lee, J; Lee, S Y; Kwon, S K; Kim, H-Y

    2013-09-01

    Polyuria after kidney transplantation is a common, usually self-limiting disorder. However, persistent polyuria can cause not only patient discomfort, including polyuria and polydipsia, but also volume depletion that can produce allograft dysfunction. Herein, we report a case of central diabetes insipidus newly diagnosed after kidney transplantation. A 45-year-old woman with end-stage kidney disease underwent deceased donor kidney transplantation. Two months after the transplantation, she was admitted for persistent polyuria, polydipsia, and nocturia with urine output of more than 4 L/d. Urine osmolarity was 100 mOsm/kg, which implied that the polyuria was due to water rather than solute diuresis. A water deprivation test was compatible with central diabetes insipidus; desmopressin treatment resulted in immediate symptomatic relief. Brain magnetic resonance imaging (MRI) demonstrated diffuse thickening of the pituitary stalk, which was considered to be a nonspecific finding. MRI 12 months later showed no change in the pituitary stalk, although the patient has been in good health without polyuria or polydipsia on desmopressin treatment. The possibility of central diabetes insipidus should be considered in patients presenting with persistent polyuria after kidney transplantation. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Newly graduated nurses' use of knowledge sources: a meta-ethnography.

    Science.gov (United States)

    Voldbjerg, Siri Lygum; Grønkjaer, Mette; Sørensen, Erik Elgaard; Hall, Elisabeth O C

    2016-08-01

    To advance evidence on newly graduated nurses' use of knowledge sources. Clinical decisions need to be evidence-based, and understanding the knowledge sources that newly graduated nurses use will inform both education and practice. Qualitative studies on newly graduated nurses' use of knowledge sources are increasing, though generated from scattered healthcare contexts. Therefore, a metasynthesis of qualitative research on what knowledge sources new graduates use in decision-making was conducted. Meta-ethnography. Nineteen reports, representing 17 studies, published from 2000-2014 were identified from iterative searches in relevant databases from May 2013-May 2014. Included reports were appraised for quality, and Noblit and Hare's meta-ethnography guided the interpretation and synthesis of data. Newly graduated nurses' use of knowledge sources during their first 2 years postgraduation was interpreted in the main theme 'self and others as knowledge sources,' with two subthemes 'doing and following' and 'knowing and doing,' each with several elucidating categories. The metasynthesis revealed a line of argument among the report findings underscoring progression in knowledge use and perception of competence and confidence among newly graduated nurses. The transition phase, the feeling of confidence, and the ability to use critical thinking and reflection have a great impact on the knowledge sources incorporated in clinical decisions. The synthesis accentuates that, to make use of newly graduated nurses' qualifications and skills in evidence-based practice, clinical practice needs to provide a supportive environment which nurtures critical thinking and which questions and articulates the use of multiple knowledge sources. © 2016 John Wiley & Sons Ltd.

  8. Decoding algorithm for vortex communications receiver

    Science.gov (United States)

    Kupferman, Judy; Arnon, Shlomi

    2018-01-01

    Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
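
The Pearson-correlation baseline the abstract compares against can be sketched generically: for each candidate symbol, correlate the flattened matrix-detector readout with that symbol's expected intensity pattern and choose the best match. The template values in the test are placeholders, not actual LG-mode intensity patterns.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def decode_symbol(measurement, templates):
    """Pick the symbol whose reference pattern correlates best with
    the flattened detector readout (generic sketch of a
    Pearson-correlation decoder; `templates` maps symbol -> expected
    flattened intensity pattern)."""
    return max(templates, key=lambda s: pearson(measurement, templates[s]))
```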

  9. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.
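
A classic illustration of the cache-oblivious idea is recursive matrix transposition: the algorithm splits the larger dimension until subproblems are small, without referencing any cache parameter, so the analysis holds for any memory and block size. This is a textbook-style sketch, not code from the surveyed papers; the `base` cutoff merely bounds recursion overhead in Python and is not a cache parameter.

```python
def transpose(A, B, r0=0, c0=0, rows=None, cols=None, base=16):
    """Cache-oblivious out-of-place transpose: writes B = A^T.

    Recursively halves the larger dimension of the current submatrix
    (origin (r0, c0), size rows x cols) until it is small, then copies
    element-wise. No cache or block size appears in the algorithm.
    """
    rows = len(A) if rows is None else rows
    cols = len(A[0]) if cols is None else cols
    if rows <= base and cols <= base:
        for i in range(r0, r0 + rows):       # base case: direct copy
            for j in range(c0, c0 + cols):
                B[j][i] = A[i][j]
    elif rows >= cols:                       # split the row range
        half = rows // 2
        transpose(A, B, r0, c0, half, cols, base)
        transpose(A, B, r0 + half, c0, rows - half, cols, base)
    else:                                    # split the column range
        half = cols // 2
        transpose(A, B, r0, c0, rows, half, base)
        transpose(A, B, r0, c0 + half, rows, cols - half, base)
```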

  10. Crytosystem Program Planning for Securing Data/Information of the Results of Research and Development using Triple DES Algorithm

    International Nuclear Information System (INIS)

    Tumpal P; Naga, Dali S.; David

    2004-01-01

    This software is a cryptosystem that uses the triple DES algorithm in ECB (Electronic Code Book) mode. The cryptosystem can send a file with any extension, whether encrypted or not, encrypt data representing a bitmap picture or text, and display the calculations performed. Triple DES is an efficient and effective development of DES: it applies the same algorithm, but the operation is repeated three times, extending the key from 56 bits to 168 bits. (author)

  11. The nanotoxicology of a newly developed zero-valent iron nanomaterial for groundwater remediation and its remediation efficiency assessment combined with in vitro bioassays for detection of dioxin-like environmental pollutants

    OpenAIRE

    Schiwy, Andreas Herbert

    2016-01-01

    The assessment of chemicals and new compounds is an important task of ecotoxicology. In this thesis a newly developed zero-valent iron material for nanoremediation of groundwater contaminations was investigated and in vitro bioassays for high throughput screening were developed. These two elements of the thesis were combined to assess the remediation efficiency of the nanomaterial on the groundwater contaminant acridine. The developed in vitro bioassays were evaluated for quantification of th...

  12. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

    Background: Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results: We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions: We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a

  13. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    CMS has developed sophisticated tau identification algorithms for tau hadronic decay modes. Production of tau leptons decaying to hadrons is studied at 7 TeV centre-of-mass energy with 2011 collision data collected by the CMS detector and has been used to measure the performance of tau identification algorithms by ...

  14. Certification document for newly generated contact-handled transuranic waste

    International Nuclear Information System (INIS)

    Box, W.D.; Setaro, J.

    1984-01-01

    The US Department of Energy has requested that all national laboratories handling defense waste develop and augment a program whereby all newly generated contact-handled transuranic (TRU) waste be contained, stored, and then shipped to the Waste Isolation Pilot Plant (WIPP) in accordance with the requirements set forth in WIPP-DOE-114. The program described in this report delineates how Oak Ridge National Laboratory intends to comply with these requirements and lists the procedures used by each generator to ensure that their TRU wastes are certifiable for shipment to WIPP

  15. Object-Oriented Implementation of Adaptive Mesh Refinement Algorithms

    Directory of Open Access Journals (Sweden)

    William Y. Crutchfield

    1993-01-01

    We describe C++ classes that simplify development of adaptive mesh refinement (AMR) algorithms. The classes divide into two groups, generic classes that are broadly useful in adaptive algorithms, and application-specific classes that are the basis for our AMR algorithm. We employ two languages, with C++ responsible for the high-level data structures, and Fortran responsible for low-level numerics. The C++ implementation is as fast as the original Fortran implementation. Use of inheritance has allowed us to extend the original AMR algorithm to other problems with greatly reduced development time.

  16. Newly blind persons using virtual environment system in a traditional orientation and mobility rehabilitation program: a case study.

    Science.gov (United States)

    Lahav, Orly; Schloerb, David W; Srinivasan, Mandayam A

    2012-09-01

    This paper presents a virtual reality system (the BlindAid) developed for orientation and mobility training of people who are newly blind. The BlindAid allows users to interact with different virtual structures and objects via auditory and haptic feedback. This case study aims to examine if and how the BlindAid, in conjunction with a traditional rehabilitation programme, can help people who are newly blind develop new orientation and mobility methods. Follow-up research based on this study, with large experimental and control groups, could contribute to the area of orientation and mobility rehabilitation training for the newly blind. The case study research focused on A., a woman who is newly blind, for 17 virtual sessions spanning ten weeks, during the 12 weeks of her traditional orientation and mobility rehabilitation programme. The research was implemented by using virtual environment (VE) exploration and orientation tasks in VE and physical spaces. The research methodology used both qualitative and quantitative methods, including interviews, a questionnaire, videotape recordings, and user computer logs. The results of this study helped elucidate several issues concerning the contribution of the BlindAid system to the exploration strategies and learning processes experienced by the participant in her encounters with familiar and unfamiliar physical surroundings.

  17. The impact of organisational culture on the adaptation of newly ...

    African Journals Online (AJOL)

    Usually newly employed nurses find adjusting to a work setting a challenging experience. Their successful adaptation to their work situation is greatly influenced by the socialisation process inherent in the organisational culture. The newly employed nurse often finds that the norms are unclear, confusing and restrictive.

  18. Creep force modelling for rail traction vehicles based on the Fastsim algorithm

    Science.gov (United States)

    Spiryagin, Maksym; Polach, Oldrich; Cole, Colin

    2013-11-01

    The evaluation of creep forces is a complex task and their calculation is a time-consuming process for multibody simulation (MBS). A methodology of creep forces modelling at large traction creepages has been proposed by Polach [Creep forces in simulations of traction vehicles running on adhesion limit. Wear. 2005;258:992-1000; Influence of locomotive tractive effort on the forces between wheel and rail. Veh Syst Dyn. 2001(Suppl);35:7-22] adapting his previously published algorithm [Polach O. A fast wheel-rail forces calculation computer code. Veh Syst Dyn. 1999(Suppl);33:728-739]. The most common method for creep force modelling used by software packages for MBS of running dynamics is the Fastsim algorithm by Kalker [A fast algorithm for the simplified theory of rolling contact. Veh Syst Dyn. 1982;11:1-13]. However, the Fastsim code has some limitations which do not allow modelling the creep force - creep characteristic in agreement with measurements for locomotives and other high-power traction vehicles, mainly for large traction creep at low-adhesion conditions. This paper describes a newly developed methodology based on a variable contact flexibility increasing with the ratio of the slip area to the area of adhesion. This variable contact flexibility is introduced in a modification of Kalker's code Fastsim by replacing the constant Kalker's reduction factor, widely used in MBS, by a variable reduction factor together with a slip-velocity-dependent friction coefficient decreasing with increasing global creepage. The proposed methodology is presented in this work and compared with measurements for different locomotives. The modification allows use of the well recognised Fastsim code for simulation of creep forces at large creepages in agreement with measurements without modifying the proven modelling methodology at small creepages.
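
The slip-velocity-dependent friction coefficient mentioned above is commonly written in Polach's extended model as mu = mu0 * ((1 - A) * exp(-B * w) + A), decreasing from mu0 at zero sliding velocity toward the limit mu0 * A as the sliding velocity w grows. A minimal sketch with illustrative parameter values (the numbers are assumptions, not values from the paper):

```python
import math

def friction_coefficient(slip_velocity, mu0=0.4, A=0.4, B=0.6):
    """Slip-velocity-dependent friction of the form used in Polach's
    extended creep-force model:

        mu = mu0 * ((1 - A) * exp(-B * w) + A)

    where w is the sliding (slip) velocity, A is the ratio of the
    limit friction at infinite slip to mu0, and B is an exponential
    decay constant. Parameter defaults are illustrative only.
    """
    return mu0 * ((1 - A) * math.exp(-B * slip_velocity) + A)
```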

  19. A newly developed spinal simulator.

    Science.gov (United States)

    Chester, R; Watson, M J

    2000-11-01

    A number of studies indicate poor intra-therapist and inter-therapist reliability in the performance of graded, passive oscillatory movements to the lumbar spine. However, it has been suggested that therapists can be trained to be more consistent in their performance of these techniques if given reliable quantitative feedback. The intention of this study was to develop equipment, analogous to the lumbar spine that could be used for both teaching and research purposes. Equipment has been updated and connected to a personal IBM compatible computer. Custom designed software allows concurrent and accurate feedback to students on their performance and in a form suitable for advanced data analysis using statistical packages. The uses and implications of this equipment are discussed. Copyright 2000 Harcourt Publishers Ltd.

  20. ADORE-GA: Genetic algorithm variant of the ADORE algorithm for ROP detector layout optimization in CANDU reactors

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an algorithm for CANDU ROP Detector Layout Optimization. ► ADORE-GA is a Genetic Algorithm variant of the ADORE algorithm. ► Robustness test of ADORE-GA algorithm is presented in this paper. - Abstract: The regional overpower protection (ROP) systems protect CANDU® reactors against overpower in the fuel that could reduce the safety margin-to-dryout. The overpower could originate from a localized power peaking within the core or a general increase in the global core power level. The design of the detector layout for ROP systems is a challenging discrete optimization problem. In recent years, two algorithms have been developed to find a quasi optimal solution to this detector layout optimization problem. Both of these algorithms utilize the simulated annealing (SA) algorithm as their optimization engine. In the present paper, an alternative optimization algorithm, namely the genetic algorithm (GA), has been implemented as the optimization engine. The implementation is done within the ADORE algorithm. Results from evaluating the effects of using various mutation rates and crossover parameters are presented in this paper. It has been demonstrated that the algorithm is sufficiently robust in producing similar quality solutions.
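
The kind of genetic-algorithm engine described, with tunable mutation rate and crossover parameters, can be sketched generically. This is not the ADORE-GA code: the bit-string encoding, tournament selection, and parameter values below are assumptions, and the real fitness function would evaluate an ROP detector layout against core physics, which is not modeled here.

```python
import random

def genetic_search(fitness, n_bits, pop_size=40, generations=100,
                   crossover_rate=0.8, mutation_rate=0.02, rng=None):
    """Generic GA loop over bit-string candidates.

    Selection is a 2-way tournament; reproduction uses one-point
    crossover with probability `crossover_rate`, then per-bit
    mutation with probability `mutation_rate`. Returns the fittest
    member of the final population.
    """
    rng = rng or random.Random()
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:        # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in p1]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```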

  1. Encounters of Newly Qualified Teachers with Micro-Politics in Primary Schools in Zimbabwe

    Science.gov (United States)

    Magudu, Snodia; Gumbo, Mishack

    2017-01-01

    This article demonstrates, through the example of Zimbabwe, the complexities of micro-political learning during induction. It reports on the experiences of ten newly qualified teachers with micro-politics or power relations in their schools during induction and locates these experiences within the broader context of their professional development.…

  2. System engineering approach to GPM retrieval algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rose, C. R. (Chris R.); Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both No and Do at each range bin. More recently, Liao (2004) proposed a solution to the Do ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the No and Do

  3. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    © 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  4. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    Science.gov (United States)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes, such that the sum of the weights of all its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the commonly known rudimentary algorithm called Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential, as otherwise it will be very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper is based on solving the minimum spanning tree (MST) problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, and location-allocation problems. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and gives access to information that is varied and adapted to the needs of the users. This GIS tool for MST can be applied to a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several
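
A weight-matrix formulation of Prim's algorithm of the kind the abstract describes can be sketched as follows. This is a generic implementation, not the ArcGIS tool's code; `math.inf` marks node pairs with no direct road link.

```python
import math

def prim_mst(w):
    """Prim's algorithm on a weight (adjacency) matrix.

    w[i][j] is the edge weight between nodes i and j, or math.inf
    when they are not directly connected. Returns the MST as a list
    of (u, v, weight) edges, growing the tree from node 0.
    """
    n = len(w)
    in_tree = [False] * n
    dist = [math.inf] * n    # cheapest known edge weight into the tree
    parent = [-1] * n
    dist[0] = 0
    edges = []
    for _ in range(n):
        # pick the cheapest node not yet in the tree
        u = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: dist[v])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u, w[parent[u]][u]))
        # relax edges from u to the remaining nodes
        for v in range(n):
            if not in_tree[v] and w[u][v] < dist[v]:
                dist[v] = w[u][v]
                parent[v] = u
    return edges
```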

  5. The construction of social identity in newly recruited nuclear engineering staff: A longitudinal study

    International Nuclear Information System (INIS)

    Nguyen, Lynda; Murphy, Glen; Chang, Artemis

    2014-01-01

    This study examines the process by which newly recruited nuclear engineering and technical staff came to understand, define, think, feel and behave within a distinct group that contributes directly to the organization's overall emphasis on a culture of reliability and system safety. In the field of organizational behavior, the interactive model of social identity formation has recently been proposed to explain the process by which shared norms and values are internalized, an element critical to identity formation. Using this rich model, we analyzed multiple sources of data from nine new hires over a period of three years, beginning at the time they were employed, to investigate how new entrants construct a social identity in the complex organizational setting of a nuclear facility. Our analyses support the interactive model of social identity development and yield the unexpected finding that a newly appointed member's age and level of experience appear to influence how they adapt to and assimilate into their surroundings. This study makes an important contribution to the safety and reliability literature: it provides rich insight into how newly recruited employees enact the process by which their identities are formed, and hence how they act, particularly under conditions of duress or significant organizational disruption in complex organizational settings. - Highlights: • We examined how newly recruited nuclear engineering staff develop their social identity. • The study empirically examined the interactive model of social identity formation. • Innovative research strategies were used to capture rich primary data for all case studies. • Age and experience moderated the internalization route and the social identity formation process

  6. Advances in metaheuristic algorithms for optimal design of structures

    CERN Document Server

    Kaveh, A

    2017-01-01

    This book presents efficient metaheuristic algorithms for the optimal design of structures. Many of these algorithms were developed by the author and his colleagues, including Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, and Ray Optimization. They are presented alongside algorithms developed by other authors that have been successfully applied to various optimization problems: Particle Swarm Optimization, the Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, the Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally, a multi-objective optimization method based on the Charged System Search algorithm is presented for solving large-scale structural problems. The concepts and algorithms presented in this book are applicable not only to the optimization of skeletal structures and finite element models, but can equally ...
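To illustrate the family of methods the book surveys, below is a minimal sketch of standard Particle Swarm Optimization minimizing a toy two-variable objective. The swarm size, inertia weight and acceleration coefficients are generic textbook values, not settings from the book, and the quadratic objective stands in for a real structural design objective.

```python
import random

random.seed(0)

def objective(x):
    # toy stand-in for a structural design objective; minimum at (3, -1)
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def pso(f, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    """Standard PSO: each particle tracks its personal best; the swarm
    shares a global best; velocities blend inertia, cognitive pull
    toward the personal best, and social pull toward the global best."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    gbest = pbest[min(range(swarm), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < f(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(objective)
print(best)  # converges near (3, -1)
```

The metaheuristics listed in the blurb (Charged System Search, Colliding Bodies Optimization, etc.) follow the same population-based pattern but replace the velocity-update rule with their own physics- or nature-inspired move operators.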

  7. Advances in metaheuristic algorithms for optimal design of structures

    CERN Document Server

    Kaveh, A

    2014-01-01

    This book presents efficient metaheuristic algorithms for optimal design of structures. Many of these algorithms are developed by the author and his colleagues, consisting of Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, Ray Optimization. These are presented together with algorithms which were developed by other authors and have been successfully applied to various optimization problems. These consist of Particle Swarm Optimization, Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally a multi-objective optimization method is presented to solve large-scale structural problems based on the Charged System Search algorithm. The concepts and algorithms presented in this book are not only applicable to optimization of skeletal structures and finite element models, but can equally ...

  8. Assessment for markers of nephropathy in newly diagnosed type 2 ...

    African Journals Online (AJOL)

    Objective: To assess for markers of nephropathy in newly diagnosed type 2 diabetics, using blood pressure levels, endogenous creatinine clearance and urinary protein excretion as markers of renal disease. Study design: Ninety newly diagnosed type 2 diabetics were studied within 6 weeks of diagnosis. They were in ...

  9. Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement (ADVANCE) Technology Development for Resilient Flight Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI proposes to develop and test a framework referred to as the ADVANCE (Algorithm Design and Validation for Adaptive Nonlinear Control Enhancement), within which...

  10. Efficacy of Intra-articular Injection of a Newly Developed Plasma Rich in Growth Factor (PRGF) Versus Hyaluronic Acid on Pain and Function of Patients with Knee Osteoarthritis: A Single-Blinded Randomized Clinical Trial

    Science.gov (United States)

    Raeissadat, Seyed Ahmad; Rayegani, Seyed Mansoor; Ahangar, Azadeh Gharooee; Abadi, Porya Hassan; Mojgani, Parviz; Ahangar, Omid Gharooi

    2017-01-01

    Background and objectives: Knee osteoarthritis is the most common joint disease. We aimed to compare the efficacy and safety of intra-articular injection of a newly developed plasma rich in growth factor (PRGF) versus hyaluronic acid (HA) on pain and function in patients with knee osteoarthritis. Methods: In this single-blinded randomized clinical trial, patients with symptomatic osteoarthritis of the knee were assigned to receive 2 intra-articular injections of our newly developed PRGF in 3 weeks or 3 wee