WorldWideScience

Sample records for richardson-lucy rl algorithm

  1. Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.

    2006-01-01

    Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of the CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the Non-Negative Least Squares (NNLS) regularized method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give results superior to those of the regularized least-squares algorithm, with significantly less computer processing time.
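
    To make the record's subject concrete, the following is a minimal one-dimensional sketch of the Richardson-Lucy iteration, the maximum-likelihood update for Poisson-distributed counts referred to above. The Gaussian resolution function, synthetic spectrum and iteration count are illustrative assumptions, not values from the paper.

      # Minimal 1-D Richardson-Lucy deconvolution sketch (illustrative only).
      import numpy as np

      def richardson_lucy_1d(measured, kernel, iterations=50, eps=1e-12):
          """Deconvolve `measured` with the instrumental resolution `kernel`."""
          kernel = kernel / kernel.sum()        # normalized resolution function
          mirror = kernel[::-1]                 # flipped kernel for the correction
          estimate = np.full(measured.shape, measured.mean())
          for _ in range(iterations):
              blurred = np.convolve(estimate, kernel, mode="same")
              ratio = measured / (blurred + eps)      # data / current model
              estimate *= np.convolve(ratio, mirror, mode="same")
          return estimate                       # non-negative by construction

      # Synthetic example: two sharp lines blurred by a Gaussian response
      x = np.arange(-50, 51)
      kernel = np.exp(-0.5 * (x / 6.0) ** 2)
      truth = np.zeros(200); truth[90] = 100.0; truth[120] = 60.0
      measured = np.convolve(truth, kernel / kernel.sum(), mode="same")
      restored = richardson_lucy_1d(measured, kernel, iterations=200)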

  2. Active filtering applied to radiographic images unfolded by the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2011-01-01

    Degradation of images caused by systematic uncertainties can be reduced when one knows the features of the spoiling agent. Typical uncertainties of this kind arise in radiographic images due to the non-zero resolution of the detector used to acquire them, and from the non-punctual character of the source employed in the acquisition, or from the beam divergence when extended sources are used. Both features blur the image, which, instead of a single point, exhibits a spot with a vanishing edge, reproducing hence the point spread function (PSF) of the system. Once this spoiling function is known, an inverse problem approach, involving inversion of matrices, can then be used to retrieve the original image. As these matrices are generally ill-conditioned, due to statistical fluctuation and truncation errors, iterative procedures should be applied, such as the Richardson-Lucy algorithm. This algorithm has been applied in this work to unfold radiographic images acquired by transmission of thermal neutrons and gamma-rays. After this procedure, the resulting images undergo an active filtering which appreciably improves their final quality at a negligible cost in terms of processing time. The filter ruling the process is based on the matrix of the correction factors for the last iteration of the deconvolution procedure. Synthetic images, degraded with a known PSF and subjected to the same treatment, have been used as a benchmark to evaluate the soundness of the developed active filtering procedure. The deconvolution and filtering algorithms have been incorporated into a Fortran program, written to deal with real images, generate the synthetic ones and display both. (author)
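
    The benchmark idea described above (degrade a synthetic image with a known PSF, deconvolve, and judge the result) can be sketched with off-the-shelf tools; scikit-image's richardson_lucy is a real function, while the scene, PSF and noise level below are invented for illustration. The paper's own Fortran implementation and its active filter are not reproduced here.

      # Synthetic-image benchmark sketch using scikit-image's RL deconvolution.
      import numpy as np
      from scipy.signal import fftconvolve
      from skimage.restoration import richardson_lucy

      rng = np.random.default_rng(0)
      truth = np.zeros((128, 128)); truth[40:60, 40:60] = 1.0  # synthetic object

      yy, xx = np.mgrid[-7:8, -7:8]
      psf = np.exp(-(xx**2 + yy**2) / (2 * 2.5**2))            # assumed Gaussian PSF
      psf /= psf.sum()

      blurred = fftconvolve(truth, psf, mode="same")
      noisy = rng.poisson(np.clip(blurred, 0, None) * 200) / 200.0  # counting noise

      restored = richardson_lucy(noisy, psf, num_iter=30)
      rmse = np.sqrt(np.mean((restored - truth) ** 2))
      print(f"RMSE after deconvolution: {rmse:.4f}")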

  3. A joint Richardson–Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data

    International Nuclear Information System (INIS)

    Ströhl, Florian; Kaminski, Clemens F

    2015-01-01

    We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy (MSIM) using a joint Richardson–Lucy (jRL-MSIM) deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise-corrupted data. The principle is verified on simulated as well as experimental data, and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy (ISM), is made. Our algorithm is efficient and freely available in a user-friendly software package. (paper)
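
    The joint character of a jRL-type update can be illustrated schematically: a single object estimate is refined against all raw frames at once, each frame having its own forward model. The per-frame illumination patterns and Gaussian PSF below are a simplified stand-in for the MSIM image-formation model, not the authors' implementation (adjoint normalization is omitted for brevity).

      # Schematic joint Richardson-Lucy update over several frames.
      import numpy as np
      from scipy.signal import fftconvolve

      def forward(obj, pattern, psf):
          return fftconvolve(obj * pattern, psf, mode="same")

      def adjoint(img, pattern, psf):
          return fftconvolve(img, psf[::-1, ::-1], mode="same") * pattern

      def joint_rl(frames, patterns, psf, iterations=25, eps=1e-9):
          estimate = np.ones_like(frames[0])
          for _ in range(iterations):
              correction = np.zeros_like(estimate)
              for d, p in zip(frames, patterns):       # accumulate over frames
                  ratio = d / (forward(estimate, p, psf) + eps)
                  correction += adjoint(ratio, p, psf)
              estimate *= correction / len(frames)     # joint multiplicative step
          return estimate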

  4. Influence of the beam divergence on the quality of neutron radiographic images improved by Richardson-Lucy deconvolution

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2010-01-01

    towards the neutron source, and hence the best PSF width depends upon the distance between a specific cross-section of the object and the imaging plate used as detector. In order to determine this dependence, a special test-object containing neutron-absorbing inserts spatially distributed along the beam direction has been manufactured and exposed to the neutron beam. The acquired image has then been used to characterize the system through a curve of PSF width versus object-detector gap, which has been applied to improve the neutron radiographic images of some objects using the Richardson-Lucy deconvolution algorithm incorporated into a Fortran program. (author)

  5. Determination of point spread function for a flat-panel X-ray imager and its application in image restoration

    International Nuclear Information System (INIS)

    Jeon, Sungchae; Cho, Gyuseong; Huh, Young; Jin, Seungoh; Park, Jongduk

    2006-01-01

    We investigate image blur estimation methods, namely a modified Richardson-Lucy (R-L) estimator and the Wiener estimator. Based on an empirical model of the PSF, image restoration is applied to radiological images. The accuracy of the PSF estimation under Poisson noise and readout electronic noise is significantly better for the R-L estimator than for the Wiener estimator. In the image restoration using the 2-D PSF from the R-L estimator, the results show a good improvement in the low and middle ranges of spatial frequency.
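
    One way to see what an R-L blur estimator can look like: the RL iteration is symmetric in object and kernel, so when the imaged test object is known, the PSF becomes the unknown to be recovered. This sketch illustrates only that idea; the paper's modified estimator is not shown, and all inputs are assumptions.

      # PSF estimation by RL with the roles of object and kernel exchanged.
      import numpy as np
      from scipy.signal import fftconvolve

      def estimate_psf(measured, known_object, iterations=100, eps=1e-9):
          """Both arrays share one grid; the PSF is recovered on that grid."""
          psf = np.full(measured.shape, 1.0 / measured.size)   # flat initial guess
          flipped = known_object[::-1, ::-1]
          for _ in range(iterations):
              blurred = fftconvolve(psf, known_object, mode="same")
              ratio = measured / (blurred + eps)
              psf *= fftconvolve(ratio, flipped, mode="same")
              psf /= psf.sum()                  # keep unit gain after each step
          return psf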

  6. Deconvolving instrumental and intrinsic broadening in core-shell x-ray spectroscopies

    International Nuclear Information System (INIS)

    Fister, T. T.; Seidler, G. T.; Rehr, J. J.; Kas, J. J.; Nagle, K. P.; Elam, W. T.; Cross, J. O.

    2007-01-01

    Intrinsic and experimental mechanisms frequently lead to broadening of spectral features in core-shell spectroscopies. For example, intrinsic broadening occurs in x-ray absorption spectroscopy (XAS) measurements of heavy elements where the core-hole lifetime is very short. On the other hand, nonresonant x-ray Raman scattering (XRS) and other energy loss measurements are more limited by instrumental resolution. Here, we demonstrate that the Richardson-Lucy (RL) iterative algorithm provides a robust method for deconvolving instrumental and intrinsic resolutions from typical XAS and XRS data. For the K-edge XAS of Ag, we find nearly complete removal of ∼9.3 eV full width at half maximum broadening from the combined effects of the short core-hole lifetime and instrumental resolution. We are also able to remove nearly all instrumental broadening in an XRS measurement of diamond, with the resulting improved spectrum comparing favorably with prior soft x-ray XAS measurements. We present a practical methodology for implementing the RL algorithm in these problems, emphasizing the importance of testing for stability of the deconvolution process against noise amplification, perturbations in the initial spectra, and uncertainties in the core-hole lifetime.
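
    The stability test the authors emphasize can be sketched as follows: run the RL deconvolution for increasing iteration counts and monitor the restored spectrum for the onset of noise amplification. The Lorentzian kernel uses the ~9.3 eV FWHM quoted above; the idealized absorption edge, grid and noise level are assumptions.

      # RL deconvolution of a 1-D spectrum with a stability-vs-iterations check.
      import numpy as np

      def rl_deconvolve(spectrum, kernel, iterations, eps=1e-12):
          kernel = kernel / kernel.sum()
          mirror = kernel[::-1]
          est = np.full(spectrum.shape, spectrum.mean())
          history = []
          for _ in range(iterations):
              blurred = np.convolve(est, kernel, mode="same")
              est = est * np.convolve(spectrum / (blurred + eps), mirror, mode="same")
              history.append(est.copy())
          return history

      e = np.linspace(-30, 30, 601)                  # energy grid in eV (assumed)
      gamma = 9.3 / 2                                # HWHM from the ~9.3 eV FWHM
      kernel = gamma / np.pi / (e**2 + gamma**2)     # Lorentzian lifetime broadening
      truth = 1.0 / (1.0 + np.exp(-e / 0.5))         # idealized absorption edge
      rng = np.random.default_rng(1)
      measured = np.convolve(truth, kernel / kernel.sum(), mode="same")
      measured = np.clip(measured + rng.normal(0, 1e-3, e.size), 1e-6, None)

      history = rl_deconvolve(measured, kernel, 200)
      roughness = [np.sum(np.diff(h) ** 2) for h in history]  # rises when noise wins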

  7. Comparison of image deconvolution algorithms on simulated and laboratory infrared images

    Energy Technology Data Exchange (ETDEWEB)

    Proctor, D. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.

  8. Point spread functions and deconvolution of ultrasonic images.

    Science.gov (United States)

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
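
    The total-variation-regularized RL scheme that performed best here follows a well-known multiplicative form: the usual RL correction is divided by a term 1 - lambda * div(grad u / |grad u|). The sketch below assumes that form with an invented lambda and PSF; it is not the authors' code.

      # Richardson-Lucy with total-variation regularization (RL-TV sketch).
      import numpy as np
      from scipy.signal import fftconvolve

      def tv_divergence(u, eps=1e-8):
          gy, gx = np.gradient(u)
          norm = np.sqrt(gx**2 + gy**2) + eps
          div_y, _ = np.gradient(gy / norm)
          _, div_x = np.gradient(gx / norm)
          return div_y + div_x

      def rl_tv(measured, psf, iterations=50, lam=0.002, eps=1e-9):
          psf = psf / psf.sum()
          mirror = psf[::-1, ::-1]
          est = np.full(measured.shape, measured.mean())
          for _ in range(iterations):
              blurred = fftconvolve(est, psf, mode="same")
              correction = fftconvolve(measured / (blurred + eps), mirror, mode="same")
              est = est * correction / (1.0 - lam * tv_divergence(est))
              est = np.clip(est, 0, None)     # keep the estimate non-negative
          return est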

  9. LUCY: A New Path to Diversity

    Science.gov (United States)

    Marrah, Arleezah; Mills, Roxanne

    2011-01-01

    This article describes the Librarianship Upgrades for Children and Youth Services (LUCY), a multifaceted multicultural continuing education program for librarians developed by the Library and Information Science Program at Old Dominion University. The Institute of Museum and Library Services (IMLS) funds LUCY through the Laura Bush 21st century…

  10. Lucy: Surveying the diversity of Trojans

    Science.gov (United States)

    Levison, H.; Olkin, C.; Noll, K.; Marchi, S.

    2017-09-01

    The Lucy mission, selected as part of NASA's Discovery Program, is the first reconnaissance of the Jupiter Trojans, objects that hold vital clues to deciphering the history of the Solar System. Due to an unusual and fortuitous orbital configuration, Lucy will perform a comprehensive investigation that visits six of these primitive bodies, covering both the L4 and L5 swarms, all the known taxonomic types, the largest remnant of a catastrophic collision, and a nearly equal-mass binary. It will use a suite of high-heritage remote sensing instruments to map the geology, surface color and composition, thermal and other physical properties of its targets at close range. Lucy, like the human fossil for which it is named, will revolutionize the understanding of our origins.

  11. Estimation of Monin-Obukhov length using Richardson and bulk Richardson number

    International Nuclear Information System (INIS)

    Essa, K.S.M.

    2000-01-01

    The 1996 NOVA atmospheric boundary layer data from North Carolina are used as 30-minute averages for five days. Because of missing friction velocity (u*) and sensible heat flux (H) data, these quantities are calculated using the equations of logarithmic wind speed and net radiation (Briggs [7]) considered in this work. It is found that the correlation between the predicted and observed values of u* and H is 0.88 and 0.86, respectively. A comparison is made of the Monin-Obukhov length scale (L) estimated using the gradient Richardson number (Ri) and the bulk Richardson number (Rib) with the L-value computed using the formula for L; it is found that the agreement between the predicted and observed values of L is better when L is estimated from the bulk Richardson number (Rib) rather than from the gradient Richardson number (Ri).
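
    For readers unfamiliar with the quantities involved, the sketch below computes a bulk Richardson number and inverts common textbook similarity closures (zeta = Rib for unstable air, zeta = Rib/(1 - 5 Rib) for stable air) to get the Monin-Obukhov length. These closures and the sample numbers are illustrative; the paper's exact formulation may differ.

      # Monin-Obukhov length from the bulk Richardson number (textbook closures).
      import numpy as np

      G = 9.81  # gravitational acceleration, m s^-2

      def bulk_richardson(theta_low, theta_high, wind_speed, dz, theta_mean):
          """Rib from the potential-temperature difference over a layer dz."""
          return (G / theta_mean) * (theta_high - theta_low) * dz / wind_speed**2

      def monin_obukhov_length(rib, z):
          """Invert zeta = z/L using simple similarity closures."""
          if rib < 0:                    # unstable: zeta ~ Rib
              zeta = rib
          elif rib < 0.2:                # stable, below the critical value
              zeta = rib / (1.0 - 5.0 * rib)
          else:                          # beyond Ri_crit: turbulence suppressed
              return np.nan
          return z / zeta if zeta != 0 else np.inf

      rib = bulk_richardson(300.0, 300.5, wind_speed=4.0, dz=10.0, theta_mean=300.25)
      print(rib, monin_obukhov_length(rib, z=10.0))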

  12. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    Science.gov (United States)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to the next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using a nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
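
    The Gold iteration compared against Richardson-Lucy above is, like RL, multiplicative and non-negativity preserving; it divides the correlated data by the correlated reblurred estimate. A minimal sketch on a synthetic waveform follows; the Gaussian system response and echo positions are invented.

      # Gold deconvolution sketch for a 1-D lidar-like waveform.
      import numpy as np

      def gold(measured, kernel, iterations=500, eps=1e-12):
          kernel = kernel / kernel.sum()
          mirror = kernel[::-1]
          est = measured.astype(float).copy()
          num = np.convolve(measured, mirror, mode="same")       # A^T y (fixed)
          for _ in range(iterations):
              den = np.convolve(np.convolve(est, kernel, mode="same"),
                                mirror, mode="same") + eps       # A^T A x_k
              est *= num / den
          return est

      t = np.arange(200.0)
      kernel = np.exp(-0.5 * ((t - 100) / 4.0) ** 2)              # system response
      truth = np.zeros_like(t); truth[80] = 1.0; truth[95] = 0.6  # two close echoes
      waveform = np.convolve(truth, kernel / kernel.sum(), mode="same")
      echoes = gold(waveform, kernel)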

  13. Characterization of turbidity in Florida's Lake Okeechobee and Caloosahatchee and St. Lucie estuaries using MODIS-Aqua measurements.

    Science.gov (United States)

    Wang, Menghua; Nim, Carl J; Son, Seunghyun; Shi, Wei

    2012-10-15

    This paper describes the use of ocean color remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua satellite to characterize turbidity in Lake Okeechobee and its primary drainage basins, the Caloosahatchee and St. Lucie estuaries from 2002 to 2010. Drainage modification and agricultural development in southern Florida transport sediments and nutrients from watershed agricultural areas to Lake Okeechobee. As a result of development around Lake Okeechobee and the estuaries that are connected to Lake Okeechobee, estuarine conditions have also been adversely impacted, resulting in salinity and nutrient fluctuations. The measurement of water turbidity in lacustrine and estuarine ecosystems allows researchers to understand important factors such as light limitation and the potential release of nutrients from re-suspended sediments. Based on a strong correlation between water turbidity and normalized water-leaving radiance at the near-infrared (NIR) band (nL(w)(869)), a new satellite water turbidity algorithm has been developed for Lake Okeechobee. This study has shown important applications with satellite-measured nL(w)(869) data for water quality monitoring and measurements for turbid inland lakes. MODIS-Aqua-measured water property data are derived using the shortwave infrared (SWIR)-based atmospheric correction algorithm in order to remotely obtain synoptic turbidity data in Lake Okeechobee and normalized water-leaving radiance using the red band (nL(w)(645)) in the Caloosahatchee and St. Lucie estuaries. We found varied, but distinct seasonal, spatial, and event driven turbidity trends in Lake Okeechobee and the Caloosahatchee and St. Lucie estuary regions. Wind waves and hurricanes have the largest influence on turbidity trends in Lake Okeechobee, while tides, currents, wind waves, and hurricanes influence the Caloosahatchee and St. Lucie estuarine areas. Published by Elsevier Ltd.

  14. Multilingual Manipulation and Humor in "I Love Lucy"

    Science.gov (United States)

    Kirschen, Bryan

    2013-01-01

    "I Love Lucy" is considered to have been one of the most humorous television programs in the United States as early as the 1950s. This paper explores the use of language by the protagonists, Lucy and Ricky Ricardo, in order to understand the source of the program's humor. Linguistic analysis of the Ricardos' speech is applied,…

  15. Lucy: Navigating a Jupiter Trojan Tour

    Science.gov (United States)

    Stanbridge, Dale; Williams, Ken; Williams, Bobby; Jackman, Coralie; Weaver, Hal; Berry, Kevin; Sutter, Brian; Englander, Jacob

    2017-01-01

    In January 2017, NASA selected the Lucy mission to explore six Jupiter Trojan asteroids. These six bodies, remnants of the primordial material that formed the outer planets, were captured in the Sun-Jupiter L4 and L5 Lagrangian regions early in the solar system formation. These particular bodies were chosen because of their diverse spectral properties and the chance to observe up close for the first time two orbiting approximately equal mass binaries, Patroclus and Menoetius. KinetX, Inc. is the primary navigation supplier for the Lucy mission. This paper describes preliminary navigation analyses of the approach phase for each Trojan encounter.

  16. Genetic regulation of IL1RL1 methylation and IL1RL1-a protein levels in asthma.

    Science.gov (United States)

    Dijk, F Nicole; Xu, Chengjian; Melén, Erik; Carsin, Anne-Elie; Kumar, Asish; Nolte, Ilja M; Gruzieva, Olena; Pershagen, Goran; Grotenboer, Neomi S; Savenije, Olga E M; Antó, Josep Maria; Lavi, Iris; Dobaño, Carlota; Bousquet, Jean; van der Vlies, Pieter; van der Valk, Ralf J P; de Jongste, Johan C; Nawijn, Martijn C; Guerra, Stefano; Postma, Dirkje S; Koppelman, Gerard H

    2018-03-01

    Interleukin-1 receptor-like 1 (IL1RL1) is an important asthma gene. (Epi)genetic regulation of IL1RL1 protein expression has not been established. We assessed the association between IL1RL1 single nucleotide polymorphisms (SNPs), IL1RL1 methylation and serum IL1RL1-a protein levels, and aimed to identify causal pathways in asthma. Associations of IL1RL1 SNPs with asthma were determined in the Dutch Asthma Genome-wide Association Study cohort and three European birth cohorts, BAMSE (Children/Barn, Allergy, Milieu, Stockholm, an Epidemiological survey), INMA (Infancia y Medio Ambiente) and PIAMA (Prevention and Incidence of Asthma and Mite Allergy), participating in the Mechanisms of the Development of Allergy study. We performed blood DNA IL1RL1 methylation quantitative trait locus (QTL) analysis (n=496) and (epi)genome-wide protein QTL analysis on serum IL1RL1-a levels (n=1462). We investigated the association of IL1RL1 CpG methylation with asthma (n=632) and IL1RL1-a levels (n=548), with subsequent causal inference testing. Finally, we determined the association of IL1RL1-a levels with asthma and its clinical characteristics (n=1101). IL1RL1 asthma-risk SNPs strongly associated with IL1RL1 methylation (rs1420101; p=3.7×10⁻¹⁶) and serum IL1RL1-a levels (p=2.8×10⁻⁵⁶). IL1RL1 methylation was not associated with asthma or IL1RL1-a levels. IL1RL1-a levels negatively correlated with blood eosinophil counts, whereas there was no association between IL1RL1-a levels and asthma. In conclusion, asthma-associated IL1RL1 SNPs strongly regulate IL1RL1 methylation and serum IL1RL1-a levels, yet neither these IL1RL1 methylation CpG sites nor IL1RL1-a levels are associated with asthma. Copyright ©ERS 2018.

  17. Construction: first of St. Lucie unit 2 successes

    International Nuclear Information System (INIS)

    Conway, W.F.

    1989-01-01

    The Nuclear Regulatory Commission (NRC) granted a full power operating license for St. Lucie Unit 2 on June 10, 1983, just six years after construction began. The industry average for nuclear power plant construction during this time was approximately ten years. The rate of completion had a positive effect on the cost of the facility. The price of the unit was $1.42 billion as compared to the $2 billion to $5 billion range experienced by other utilities for nuclear plants. These accomplishments were not serendipitous but the results of management techniques and personnel attitudes involved in the construction of the unit. More importantly, many of these same techniques and attitudes have now become part of a quality improvement program at St Lucie and are reflected in its performance indicators. This paper analyzes the construction success of St Lucie Unit 2 and demonstrates that excellent performance in the construction phase can be carried over to the operation of a facility

  18. John Reginald Richardson

    International Nuclear Information System (INIS)

    Craddock, M.K.

    1999-01-01

    The recent death of Reg Richardson has robbed the cyclotron community of its most senior figure. His many achievements over a long career include the first demonstration of phase stability, the first synchrocyclotron, the first sector-focused cyclotron, and one of the two cyclotron meson factories. (authors)

  19. Hydrologic data summary for the St. Lucie River Estuary, Martin and St. Lucie Counties, Florida, 1998-2001

    Science.gov (United States)

    Byrne, Michael J.; Patino, Eduardo

    2004-01-01

    A hydrologic analysis was made at three canal sites and four tidal sites along the St. Lucie River Estuary in southeastern Florida from 1998 to 2001. The data included for analysis are stage, 15-minute flow, salinity, water temperature, turbidity, and suspended-solids concentration. During the period of record, the estuary experienced a drought, major storm events, and high-water discharge from Lake Okeechobee. Flow mainly occurred through the South Fork of the St. Lucie River; however, when flow increased through control structures along the C-23 and C-24 Canals, the North Fork was a larger than usual contributor of total freshwater inflow to the estuary. At one tidal site (Steele Point), the majority of flow was southward toward the St. Lucie Inlet; at a second tidal site (Indian River Bridge), the majority of flow was northward into the Indian River Lagoon. Large-volume stormwater discharge events greatly affected the St. Lucie River Estuary. Increased discharge typically was accompanied by salinity decreases that resulted in water becoming and remaining fresh throughout the estuary until the discharge events ended. Salinity in the estuary usually returned to prestorm levels within a few days after the events. Turbidity decreased and salinity began to increase almost immediately when the gates at the control structures closed. Salinity ranged from less than 1 to greater than 35 parts per thousand during the period of record (1998-2001), and typically varied by several parts per thousand during a tidal cycle. Suspended-solids concentrations were observed at one canal site (S-80) and two tidal sites (Speedy Point and Steele Point) during a discharge event in April and May 2000. Results suggest that most deposition of suspended-solids concentration occurs between S-80 and Speedy Point. The turbidity data collected also support this interpretation. The ratio of inorganic to organic suspended-solids concentration observed at S-80, Speedy Point, and Steele Point

  20. A new method by steering kernel-based Richardson–Lucy algorithm for neutron imaging restoration

    International Nuclear Information System (INIS)

    Qiao, Shuang; Wang, Qiao; Sun, Jia-ning; Huang, Ji-peng

    2014-01-01

    Motivated by industrial applications, neutron radiography has become a powerful tool for non-destructive investigation. However, as a result of the combined effects of neutron flux, beam collimation, the limited spatial resolution of the detector, scattering, etc., images made with neutrons are severely degraded by blur and noise. To deal with this, we present in this paper a novel restoration method that integrates steering kernel regression into the Richardson–Lucy approach and is capable of suppressing noise while efficiently restoring the details of the blurred imaging result. Experimental results show that, compared with other methods, the proposed method can improve the restoration quality both visually and quantitatively.
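
    The structure of such a method can be sketched by interleaving a denoising regression step with each RL update. Below, a plain Gaussian filter stands in for the data-adaptive steering kernel regression of the paper, so the sketch shows only the architecture, not the adaptive behaviour; all parameters are assumptions.

      # RL update interleaved with a smoothing step (steering kernel stand-in).
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from scipy.signal import fftconvolve

      def smoothed_rl(measured, psf, iterations=30, smooth_sigma=0.8, eps=1e-9):
          psf = psf / psf.sum()
          mirror = psf[::-1, ::-1]
          est = np.full(measured.shape, measured.mean())
          for _ in range(iterations):
              blurred = fftconvolve(est, psf, mode="same")
              est = est * fftconvolve(measured / (blurred + eps), mirror, mode="same")
              est = gaussian_filter(est, smooth_sigma)   # placeholder for steering
                                                         # kernel regression
          return est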

  1. RL5SORT/RL5PLOT. A graphic package for the JRC-Ispra IBM version of RELAP5/MOD1

    International Nuclear Information System (INIS)

    Kolar, W.; Brewka, W.

    1984-01-01

    The present report describes the programs RL5SORT and RL5PLOT, their implementation and how to use them. Both programs are based on the IBM version of RELAP5 as developed at JRC-Ispra. RL5SORT creates, from the output file (restart-plot file) of a RELAP5 calculation, data files which serve as the input database for the program RL5PLOT. RL5PLOT retrieves the previously stored data records (minor edit quantities of RELAP5), allows arithmetic operations on the retrieved data and enables print or graphic output on the screen of a TEKTRONIX graphic terminal. A set of commands incorporated in the program RL5PLOT facilitates the user's work. Program RL5SORT has been developed as a batch program, while RL5PLOT has been conceived for interactive use.

  2. Childhood cancer mortality in relation to the St Lucie nuclear power station

    International Nuclear Information System (INIS)

    Boice, John D Jr; Mumma, Michael T; Blot, William J; Heath, Clark W Jr

    2005-01-01

    An unusual county-wide excess of childhood cancers of brain and other nervous tissue in the late 1990s in St Lucie County, Florida, prompted the Florida Department of Health to conduct a case-control study within the county assessing residential chemical exposures. No clear associations were found, but claims were then made that the release of radioactive substances such as strontium 90 from the St Lucie nuclear power station, which began operating in 1976, might have played a role. To test the plausibility of this hypothesis, we extended by 17 years a previous study of county mortality conducted by the National Cancer Institute. Rates of total cancer, leukaemia and cancer of brain and other nervous tissue in children and across all ages in St Lucie County were evaluated with respect to the years before and after the nuclear power station began operation and contrasted with rates in two similar counties in Florida (Polk and Volusia). Over the prolonged period 1950-2000, no unusual patterns of childhood cancer mortality were found for St Lucie County as a whole. In particular, no unusual patterns of childhood cancer mortality were seen in relation to the start-up of the St Lucie nuclear power station in 1976. Further, there were no significant differences in mortality between the study and comparison counties for any cancer in the time period after the power station was in operation. Relative rates for all childhood cancers and for childhood leukaemia were higher before the nuclear facility began operating than after, while rates of brain and other nervous tissue cancer were slightly lower in St Lucie County than in the two comparison counties for both time periods. Although definitive conclusions cannot be drawn from descriptive studies, these data provide no support for the hypothesis that the operation of the St Lucie nuclear power station has adversely affected the cancer mortality experience of county residents

  3. ADAPTIVE SELECTION OF AUXILIARY OBJECTIVES IN MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS

    Directory of Open Access Journals (Sweden)

    I. A. Petrova

    2016-05-01

    Full Text Available Subject of Research. We propose to modify the EA+RL method, which increases the efficiency of evolutionary algorithms by means of auxiliary objectives. The proposed modification is compared to the existing objective selection methods on the example of the travelling salesman problem. Method. In the EA+RL method a reinforcement learning algorithm is used to select an objective – the target objective or one of the auxiliary objectives – at each iteration of the single-objective evolutionary algorithm. The proposed modification of the EA+RL method adopts this approach for use with a multiobjective evolutionary algorithm. As opposed to the EA+RL method, in this modification one of the auxiliary objectives is selected by reinforcement learning and optimized together with the target objective at each step of the multiobjective evolutionary algorithm. Main Results. The proposed modification of the EA+RL method was compared to the existing objective selection methods on the example of the travelling salesman problem. In the EA+RL method and its proposed modification, reinforcement learning algorithms for stationary and non-stationary environments were used. The proposed modification of the EA+RL method, applied with reinforcement learning for a non-stationary environment, outperformed the considered objective selection algorithms on most problem instances. Practical Significance. The proposed approach increases the efficiency of evolutionary algorithms, which may be used for solving discrete NP-hard optimization problems. They are, in particular, combinatorial path search problems and scheduling problems.
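
    The core EA+RL loop (an agent choosing which objective steers selection, rewarded by progress on the target objective) fits in a few lines. The toy below uses a (1+1) EA on OneMax with a LeadingOnes helper and epsilon-greedy Q-learning; it is a stand-in for the paper's travelling salesman setting, and every parameter is invented.

      # Toy EA+RL: Q-learning selects the objective used for selection.
      import random

      N = 60
      onemax = lambda b: sum(b)                                  # target objective
      leading = lambda b: next((i for i, x in enumerate(b) if x == 0), N)
      objectives = [onemax, leading]
      q = [0.0, 0.0]                                             # single-state Q
      alpha, epsilon = 0.3, 0.1

      parent = [random.randint(0, 1) for _ in range(N)]
      for _ in range(3000):
          a = random.randrange(2) if random.random() < epsilon else q.index(max(q))
          child = [1 - x if random.random() < 1.0 / N else x for x in parent]
          if objectives[a](child) >= objectives[a](parent):      # selection by the
              reward = onemax(child) - onemax(parent)            # chosen objective
              parent = child
          else:
              reward = 0.0
          q[a] += alpha * (reward - q[a])                        # Q-update
      print(onemax(parent), q)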

  4. Spectral inversion of an indefinite Sturm-Liouville problem due to Richardson

    International Nuclear Information System (INIS)

    Shanley, Paul E

    2009-01-01

    We study an indefinite Sturm-Liouville problem due to Richardson whose complicated eigenvalue dependence on a parameter has been a puzzle for decades. In atomic physics a process exists that inverts the usual Schroedinger situation of an energy eigenvalue depending on a coupling parameter into the so-called Sturmian problem where the coupling parameter becomes the eigenvalue which then depends on the energy. We observe that the Richardson equation is of the Sturmian type. This means that the Richardson and its related Schroedinger eigenvalue functions are inverses of each other and that the Richardson spectrum is therefore no longer a puzzle

  5. Windmill, sugar works, 'Springhall', St. Lucy, Barbados

    OpenAIRE

    Unknown

    2003-01-01

    204 x 143 mm. Showing the windmill and other refinery buildings with workers leading bullock teams loaded with cane towards the refinery. The Spring Hall Estate lies in the centre of St. Lucy Parish in northern Barbados.

  6. Another Vision of Progressivism: Marion Richardson's Triumph and Tragedy.

    Science.gov (United States)

    Smith, Peter

    1996-01-01

    Profiles the career and contributions of English art teacher Marion Richardson (1892-1946). A dynamic and assertive woman, Richardson's ideas and practices changed British primary and secondary art teaching for many years. She often used "word pictures" (narrative descriptions of scenes or emotions) to inspire her students. (MJP)

  7. Marion Richardson: "Art and the Child," a Forgotten Classic

    Science.gov (United States)

    Armstrong, Michael

    2015-01-01

    Marion Richardson was a revolutionary art teacher and schools inspector. First published in 1948, her book "Art and the Child" is one of the most remarkable educational documents of the period between the first and second world wars. This article reviews Richardson's philosophy and practice of art and suggests its continuing…

  8. Vanishing of Littlewood-Richardson polynomials is in P

    OpenAIRE

    Adve, Anshul; Robichaux, Colleen; Yong, Alexander

    2017-01-01

    J. DeLoera-T. McAllister and K. D. Mulmuley-H. Narayanan-M. Sohoni independently proved that determining the vanishing of Littlewood-Richardson coefficients has strongly polynomial time computational complexity. Viewing these as Schubert calculus numbers, we prove the generalization to the Littlewood-Richardson polynomials that control equivariant cohomology of Grassmannians. We construct a polytope using the edge-labeled tableau rule of H. Thomas-A. Yong. Our proof then combines a saturation...

  9. A Freudian Reading of Samuel Richardson's Pamela

    Directory of Open Access Journals (Sweden)

    Shadi Torabi Sarijaloo

    2016-03-01

    Full Text Available Richardson's Pamela (1740-1) is replete with elements and incidents that make it worthy of being viewed from Freud's perspective. The present study focuses upon how Richardson's characters unconsciously attempt to conceal and repress their own conflicting emotions, thoughts, wishes and impulses, and how they struggle against their anxiety-ridden situations to regain their psychic balance. Moreover, the repetition of certain occurrences and elements plays a crucial role in generating the uncanny effect in Pamela, including the role of the double and déjà vu, the castle-like settings, the heroine's intimidating situations and also her master's past secret. In addition, the way Richardson's characters dress for the noteworthy masquerade ball scene and the ambiguous words of Pamela's master imply something affiliated with the characters' psyche according to Freud's condensation theory. With regard to Freud's concepts of the 'Tripartite Psyche', 'Anxiety and Ego Defense Mechanisms' and the 'Uncanny', the researcher attempts to delve into the heroine and her master's psyche through her letters, which reveal the contents of the heroine's unconscious mind.

  10. DOE-RL Integrated Safety Management System Description

    International Nuclear Information System (INIS)

    SHOOP, D.S.

    2000-01-01

    The purpose of this Integrated Safety Management System Description (ISMSD) is to describe the U.S. Department of Energy (DOE), Richland Operations Office (RL) ISMS as implemented through the RL Integrated Management System (RIMS). This ISMSD does not impose additional requirements but rather provides an overview describing how various parts of the ISMS fit together. Specific requirements for each of the core functions and guiding principles are established in other implementing processes, procedures, and program descriptions that comprise RIMS. RL is organized to conduct work through operating contracts; therefore, it is extremely difficult to provide an adequate ISMS description that only addresses RL functions. Of necessity, this ISMSD contains some information on contractor processes and procedures which then require RL approval or oversight. This ISMSD does not purport to contain a full description of the contractors' ISM System Descriptions

  11. DOE-RL Integrated Safety Management System Description

    CERN Document Server

    Shoop, D S

    2000-01-01

    The purpose of this Integrated Safety Management System Description (ISMSD) is to describe the U.S. Department of Energy (DOE), Richland Operations Office (RL) ISMS as implemented through the RL Integrated Management System (RIMS). This ISMSD does not impose additional requirements but rather provides an overview describing how various parts of the ISMS fit together. Specific requirements for each of the core functions and guiding principles are established in other implementing processes, procedures, and program descriptions that comprise RIMS. RL is organized to conduct work through operating contracts; therefore, it is extremely difficult to provide an adequate ISMS description that only addresses RL functions. Of necessity, this ISMSD contains some information on contractor processes and procedures which then require RL approval or oversight. This ISMSD does not purport to contain a full description of the contractors' ISM System Descriptions.

  12. Susan And Lucy: Two Outstanding Heroines Of Alan Ayckbourn / Susan ve Lucy: Alan Ayckbourn’un İki Sıradışı Kadın-Kahramanı

    OpenAIRE

    Parlak, Erdinç

    2012-01-01

    Alan Ayckbourn (1939- ) has an important place among the twentieth century British playwrights. The playwright handles some present-day social problems such as insensitiveness, lack of communication, lack of love, conflict, alienation and moral degeneration, especially around his heroines. Susan, the protagonist of Woman in Mind, and Lucy, the little heroine in Invisible Friends, are among the outstanding heroines of the playwright. The life experiences of Susan and Lucy reflected from the s...

  13. Memoirs of Eileen Richardson

    Index Scriptorium Estoniae

    Richardson, Eileen

    2010-01-01

    Bournemouth University professor Eileen Richardson on the project "Inequalities in access to rural communities", which is aimed at Canadian and European healthcare students and whose goal is to promote exchanges between students of the various healthcare disciplines and to build a network of academics and practising healthcare professionals in order to develop common curricula and guidelines. Tallinn Health Care College has also joined the project.

  14. Solving the Richardson equations close to the critical points

    Energy Technology Data Exchange (ETDEWEB)

    Domínguez, F [Departamento de Matematicas, Universidad de Alcala, 28871 Alcala de Henares (Spain); Esebbag, C [Departamento de Matematicas, Universidad de Alcala, 28871 Alcala de Henares (Spain); Dukelsky, J [Instituto de Estructura de la Materia, CSIC, Serrano 123, 28006 Madrid (Spain)

    2006-09-15

    We study the Richardson equations close to the critical values of the pairing strength g_c, where the occurrence of divergences precludes numerical solutions. We derive a set of equations for determining the critical g values and the non-collapsing pair energies. Studying the behaviour of the solutions close to the critical points, we develop a procedure to solve numerically the Richardson equations for arbitrary coupling strength.

  15. Dactylobiotus luci, a new freshwater tardigrade (Eutardigrada ...

    African Journals Online (AJOL)

    A new freshwater eutardigrade, Dactylobiotus luci sp. nov., is described from a permanent marsh pool (Zaphania's Pool) at 4225 m elevation in the Alpine zone of the Rwenzori Mountains, Uganda. The new species is most similar to D. dervizi Biserov, 1998 in the shape of the egg processes, absence of papillae and ...

  16. IL1RL1 gene variants and nasopharyngeal IL1RL-a levels are associated with severe RSV bronchiolitis: a multicenter cohort study.

    Directory of Open Access Journals (Sweden)

    Tina E Faber

    Full Text Available Targets for intervention are required for respiratory syncytial virus (RSV) bronchiolitis, a common disease during infancy for which no effective treatment exists. Clinical and genetic studies indicate that IL1RL1 plays an important role in the development and exacerbations of asthma. Human IL1RL1 encodes three isoforms, including soluble IL1RL1-a, that can influence IL33 signalling by modifying inflammatory responses to epithelial damage. We hypothesized that IL1RL1 gene variants and soluble IL1RL1-a are associated with severe RSV bronchiolitis. We studied the association between RSV and 3 selected IL1RL1 single-nucleotide polymorphisms rs1921622, rs11685480 or rs1420101 in 81 ventilated and 384 non-ventilated children under 1 year of age hospitalized with primary RSV bronchiolitis, in comparison to 930 healthy controls. Severe RSV infection was defined by the need for mechanical ventilation. Furthermore, we examined soluble IL1RL1-a concentration in nasopharyngeal aspirates from children hospitalized with primary RSV bronchiolitis. An association between SNP rs1921622 and disease severity was found at the allele and genotype level (p = 0.011 and p = 0.040, respectively). In hospitalized non-ventilated patients, RSV bronchiolitis was not associated with IL1RL1 genotypes. Median concentrations of soluble IL1RL1-a in nasopharyngeal aspirates were >20-fold higher in ventilated infants when compared to non-ventilated infants with RSV (median [and quartiles] 9,357 [936-15,528] pg/ml vs. 405 [112-1,193] pg/ml, respectively; p<0.001). We found a genetic link between the rs1921622 IL1RL1 polymorphism and disease severity in RSV bronchiolitis. The potential biological role of IL1RL1 in the pathogenesis of severe RSV bronchiolitis was further supported by high local concentrations of IL1RL1 in children with the most severe disease. We speculate that IL1RL1a modifies epithelial damage mediated inflammatory responses during RSV bronchiolitis and thus may serve as a

  17. Fractional-order RC and RL circuits

    KAUST Repository

    Radwan, Ahmed Gomaa

    2012-05-30

    This paper is a step forward to generalizing the fundamentals of the conventional RC and RL circuits in the fractional-order sense. The effect of fractional orders is the key factor for extra freedom, more flexibility, and novelty. The conditions for RC and RL circuits to act as pure imaginary impedances are derived, which are unrealizable in the conventional case. In addition, the sensitivity analyses of the magnitude and phase response with respect to all parameters, showing the locations of these critical values, are discussed. A qualitative revision for the fractional RC and RL circuits in the frequency domain is provided. Numerical and PSpice simulations are included to validate this study. © Springer Science+Business Media, LLC 2012.
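
    The headline result (a purely imaginary impedance, impossible for a conventional RL circuit) can be checked numerically from the fractional impedance Z(w) = R + L(jw)^alpha: for alpha > 1 the real part R + L w^alpha cos(alpha*pi/2) can cancel. Component values below are invented for illustration.

      # Fractional-order RL impedance and the pure-imaginary-impedance condition.
      import numpy as np

      def z_rl_fractional(w, R, L, alpha):
          return R + L * (1j * w) ** alpha

      R, L, alpha = 100.0, 1e-3, 1.5
      # Frequency where the real part vanishes: L * w^alpha * cos(alpha*pi/2) = -R
      w_star = (-R / (L * np.cos(alpha * np.pi / 2))) ** (1.0 / alpha)
      print(w_star, z_rl_fractional(w_star, R, L, alpha))  # ~ purely imaginary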

  18. Reinforcement Learning for Online Control of Evolutionary Algorithms

    NARCIS (Netherlands)

    Eiben, A.; Horvath, Mark; Kowalczyk, Wojtek; Schut, Martijn

    2007-01-01

    The research reported in this paper is concerned with assessing the usefulness of reinforcement learning (RL) for on-line calibration of parameters in evolutionary algorithms (EA). We are running an RL procedure and the EA simultaneously and the RL is changing the EA parameters on-the-fly. We

  19. UAS Conflict-Avoidance Using Multiagent RL with Abstract Strategy Type Communication

    Science.gov (United States)

    Rebhuhn, Carrie; Knudson, Matt; Tumer, Kagan

    2014-01-01

    The use of unmanned aerial systems (UAS) in the national airspace is of growing interest to the research community. Safety and scalability of control algorithms are key to the successful integration of autonomous system into a human-populated airspace. In order to ensure safety while still maintaining efficient paths of travel, these algorithms must also accommodate heterogeneity of path strategies of its neighbors. We show that, using multiagent RL, we can improve the speed with which conflicts are resolved in cases with up to 80 aircraft within a section of the airspace. In addition, we show that the introduction of abstract agent strategy types to partition the state space is helpful in resolving conflicts, particularly in high congestion.

  20. Steele Richardson Olszewski syndrome

    Directory of Open Access Journals (Sweden)

    Vijayashree S Gokhale

    2013-01-01

    Full Text Available Parkinson's disease and its plus syndromes are an important cause of morbidity in the geriatric age group. Its plus syndromes show a myriad of clinical features characterized by progressive symptoms. Here we present a 65-year-old woman with progressive "Parkinsonian-like features," i.e., mask-like face, slowness of all movements and tendency to fall, and difficulty in eye movements, leading to the diagnosis of Steele Richardson Olszewski Syndrome or progressive supranuclear palsy.

  1. Soosaar under Lucie-Smith's baton / Reet Varblane

    Index Scriptorium Estoniae

    Varblane, Reet, 1952-

    2004-01-01

    "Jumal saab inimeseks": Edward Lucie-Smithi kuraatoriprojekt Ateena Frissirase muuseumis 14. VII-12. IX ja Mark Soosaare kuraatoriprojekt Pärnu Uue Kunsti Muuseumis 5. VI-29. VIII. Sui Jianguo, Tim Maseini ja Jennifer Mehra, Olga Tobrelutsu, Genja Sheffi, Nikos Navridise, Jaan Toomiku, Michalis Manoussakise, Lembit Sarapuu, Kaljo Põllu ja tšehhi rühmituse Kamera Skura töödest

  2. A low-cost vector processor boosting compute-intensive image processing operations

    Science.gov (United States)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.

  3. Deconvolution for the localization of sound sources using a circular microphone array

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn

    2013-01-01

    During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a convolution of the actual source distribution with the beamformer's point-spread function, and that the beamformer's point-spread function is shift-invariant. This makes it possible to apply computationally efficient deconvolution algorithms that consist of spectral procedures in the entire region of interest, such as the deconvolution approach for the mapping of the acoustic sources 2 (DAMAS2), the Fourier-based non-negative least squares, and the Richardson-Lucy. This investigation examines the matter with computer simulations and measurements.

  4. Application of Reinforcement Learning in Cognitive Radio Networks: Models and Algorithms

    Directory of Open Access Journals (Sweden)

    Kok-Lim Alvin Yau

    2014-01-01

    Full Text Available Cognitive radio (CR) enables unlicensed users to exploit the underutilized spectrum in licensed spectrum whilst minimizing interference to licensed users. Reinforcement learning (RL), which is an artificial intelligence approach, has been applied to enable each unlicensed user to observe and carry out optimal actions for performance enhancement in a wide range of schemes in CR, such as dynamic channel selection and channel sensing. This paper presents new discussions of RL in the context of CR networks. It provides an extensive review on how most schemes have been approached using the traditional and enhanced RL algorithms through state, action, and reward representations. Examples of the enhancements on RL, which do not appear in the traditional RL approach, are rules and cooperative learning. This paper also reviews performance enhancements brought about by the RL algorithms and open issues. This paper aims to establish a foundation in order to spark new research interests in this area. Our discussion has been presented in a tutorial manner so that it is comprehensive to readers outside the specialty of RL and CR.
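
    Dynamic channel selection, one of the schemes named above, reduces in its simplest form to a bandit-style RL loop: sense a channel, collect a reward when it is idle, and update its value estimate. The idle probabilities and learning parameters below are invented for illustration.

      # Toy RL-based dynamic channel selection for a cognitive radio user.
      import random

      IDLE_PROB = [0.2, 0.8, 0.5]        # hypothetical licensed-user activity
      q = [0.0] * len(IDLE_PROB)         # learned value per channel
      alpha, epsilon = 0.1, 0.1

      for t in range(5000):
          ch = (random.randrange(len(q)) if random.random() < epsilon
                else q.index(max(q)))
          reward = 1.0 if random.random() < IDLE_PROB[ch] else -1.0  # sense
          q[ch] += alpha * (reward - q[ch])

      print([round(v, 2) for v in q])    # channel 1 should rank highest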

  5. Yet one more dwell time algorithm

    Science.gov (United States)

    Haberl, Alexander; Rascher, Rolf

    2017-06-01

    typical processing times are reduced to about 80 % up to 50 % compared to conventional algorithms (Lucy-Richardson, Van-Cittert …) as used in established machines. To verify its effectiveness a plane surface was machined on an IBF.
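
    For contrast with the multiplicative Richardson-Lucy update, the Van Cittert scheme named above as a conventional baseline is additive: x_{k+1} = x_k + beta (y - h * x_k). A minimal sketch follows; the relaxation factor and kernel are illustrative, and the dwell-time computation itself is not reproduced.

      # Classical Van Cittert iteration (additive deconvolution baseline).
      import numpy as np

      def van_cittert(measured, kernel, iterations=50, beta=1.0):
          kernel = kernel / kernel.sum()
          est = measured.astype(float).copy()
          for _ in range(iterations):
              residual = measured - np.convolve(est, kernel, mode="same")
              est += beta * residual        # additive, not multiplicative, step
          return est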

  6. RL-1: a certified uranium reference ore

    International Nuclear Information System (INIS)

    Steger, H.F.; Bowman, W.S.

    1985-06-01

    A 145-kg sample of a uranium ore from Rabbit Lake, Saskatchewan, has been prepared as a compositional reference material. RL-1 was ground to minus 74 μm and mixed in one lot. Approximately one half of this ore was bottled in 100-g units, the remainder being stored in bulk. The homogeneity of RL-1 with respect to uranium and nickel was confirmed by neutron activation and X-ray fluorescence analytical techniques. In a 'free choice' analytical program, 13 laboratories contributed results for one or more of uranium, nickel and arsenic in one bottle of RL-1. Based on a statistical analysis of the data, the following recommended values were assigned: U, 0.201%; Ni, 185 μg/g; and As, 19.6 μg/g

  7. A detour to arrival: print triennial / Lucy Harrison; interviewed by Anneli Porri

    Index Scriptorium Estoniae

    Harrison, Lucy

    2004-01-01

    An interview with Lucy Harrison, who is represented at the 13th Tallinn Print Triennial with the text collection "Fantastic Cities" and who, together with Christiane Baumgartner, published the book "Detour", in which the artists use two Soviet-era city guides intended for tourists to reconcile the descriptions given there with the reality of Tallinn.

  8. Using Learning Analytics to Understand the Design of an Intelligent Language Tutor – Chatbot Lucy

    OpenAIRE

    Yi Fei Wang; Stephen Petrina

    2013-01-01

    The goal of this article is to explore how learning analytics can be used to predict and advise the design of an intelligent language tutor, chatbot Lucy. With its focus on using student-produced data to understand the design of Lucy to assist English language learning, this research can be a valuable component for language-learning designers to improve second language acquisition. In this article, we present students' learning journey and data trails, the chatting log architecture and result...

  9. The spirit of St. Lucie: nuclear plant built on schedule

    International Nuclear Information System (INIS)

    Derrickson, W.B.

    1984-01-01

    Florida Power and Light Company currently has four nuclear units in operation, with St. Lucie Unit 2 being the last to receive an operating license in June 1983. Its sister Unit 1 received its license in 1976 and has, through 1982, compiled one of the best operating records in the United States. The full power license for St. Lucie Unit 2 was received from the Nuclear Regulatory Commission (NRC) on June 10, 1983, just six years after construction began. The industry average for construction of nuclear plants in this time period is about 10 years. The success of the St. Lucie Unit 2 project can be at least in part attributed to planning the work, accurate and timely reporting of results via valid indicators, well trained and skilled personnel, and most of all, teamwork. During the course of the project the plant was constantly on or near schedule and always ahead of industry averages. This was done despite issuance of numerous regulations by the NRC (TMI), a 1979 hurricane which did considerable damage to the Reactor Auxiliary Building, labor problems, and an NRC schedule review team that determined the best that could be done was to complete the plant a year later. The final price tag is about $1.42 billion, including ''allowance for funds used during construction''. In operation to date the post core loading test program has been completed in less than two months, enabling the plant to be put into commercial operation only two months after its original scheduled date of May 28, 1983!

  10. Announced Strategy Types in Multiagent RL for Conflict-Avoidance in the National Airspace

    Science.gov (United States)

    Rebhuhn, Carrie; Knudson, Matthew D.; Tumer, Kagan

    2014-01-01

    The use of unmanned aerial systems (UAS) in the national airspace is of growing interest to the research community. Safety and scalability of control algorithms are key to the successful integration of autonomous system into a human-populated airspace. In order to ensure safety while still maintaining efficient paths of travel, these algorithms must also accommodate heterogeneity of path strategies of its neighbors. We show that, using multiagent RL, we can improve the speed with which conflicts are resolved in cases with up to 80 aircraft within a section of the airspace. In addition, we show that the introduction of abstract agent strategy types to partition the state space is helpful in resolving conflicts, particularly in high congestion.

  11. Genetic regulation of IL1RL1 methylation and IL1RL1-a protein levels in asthma

    NARCIS (Netherlands)

    Dijk, F Nicole; Xu, Chengjian; Melén, Erik; Carsin, Anne-Elie; Kumar, Asish; Nolte, Ilja M; Gruzieva, Olena; Pershagen, Goran; Grotenboer, Neomi S; Savenije, Olga E M; Antó, Josep Maria; Lavi, Iris; Dobaño, Carlota; Bousquet, Jean; van der Vlies, Pieter; van der Valk, Ralf J P; de Jongste, Johan C; Nawijn, Martijn C; Guerra, Stefano; Postma, Dirkje S; Koppelman, Gerard H

    2018-01-01

    Interleukin-1 receptor-like 1 (IL1RL1) is an important asthma gene. (Epi)genetic regulation of IL1RL1 protein expression has not been established. We assessed the association between IL1RL1 single nucleotide polymorphisms (SNPs), IL1RL1 methylation and serum IL1RL1-a protein levels, and aimed to identify

  12. Richardson Number, stability and turbulence - A coherent view

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.

    As turbulence in water is governed by vertical mobility controlled by static stability and horizontal mobility controlled by currents, the Richardson Number should give a measure of turbulence also. It is argued in this note that inverse...

  13. Rotationally resolved colors of the targets of NASA's Lucy mission

    Science.gov (United States)

    Emery, Joshua; Mottola, Stefano; Brown, Mike; Noll, Keith; Binzel, Richard

    2018-05-01

    We propose rotationally resolved photometry at 3.6 and 4.5 um of 5 Trojan asteroids and one Main Belt asteroid - the targets of NASA's Lucy mission. The proposed Spitzer observations are designed to meet a combination of science goals and mission support objectives.

    Science goals:
    1) Search for signatures of volatiles and/or organics on the surfaces. This goal includes resolving a discrepancy between previous WISE and Spitzer measurements of Trojans.
    2) Provide new constraints on the cause of rotational spectral heterogeneity detected on 3548 Eurybates at shorter wavelengths: determine whether the heterogeneity (Fig 1) extends to the 3-5 um region.
    3) Assess the possibility for spectral heterogeneity on the other targets. This goal will help test the hypothesis of Wong and Brown (2015) that the near-surface interiors of Trojans differ from their surfaces.
    4) Thermal data at 4.5 um for the Main Belt target Donaldjohanson will refine estimates of size and albedo, and provide the first estimate of thermal inertia.

    Mission support objectives:
    1) Assess scientifically optimal encounter times (viewing geometries) for the fly-bys: characterizing rotational spectral units now will enable the team to choose the most scientifically valuable part of the asteroid to view.
    2) Gather data to optimize observing parameters for Lucy instruments: measuring brightness in the 3-5 um region and resolving the discrepancy between WISE and Spitzer will enable better planning of the Lucy spectral observations in this wavelength range.
    3) The size, albedo, and thermal inertia of Donaldjohanson are fundamental data for planning the encounter with that Main Belt asteroid.

  14. Cooperated Bayesian algorithm for distributed scheduling problem

    Institute of Scientific and Technical Information of China (English)

    QIANG Lei; XIAO Tian-yuan

    2006-01-01

    This paper presents a new distributed Bayesian optimization algorithm (BOA) to overcome the efficiency problem when solving NP scheduling problems. The proposed approach integrates BOA into the co-evolutionary schema, which builds up a concurrent computing environment. A new search strategy is also introduced for the local optimization process. It integrates the reinforcement learning (RL) mechanism into the BOA search processes, and then uses the mixed probability information from BOA (post-probability) and RL (pre-probability) to enhance the cooperation between different local controllers, which improves the optimization ability of the algorithm. The experiment shows that the new algorithm does better in both optimization (2.2%) and convergence (11.7%), compared with classic BOA.

  15. Developmental identity versus typology: Lucy has only four sacral segments.

    Science.gov (United States)

    Machnicki, Allison L; Lovejoy, C Owen; Reno, Philip L

    2016-08-01

    Both interspecific and intraspecific variation in vertebral counts reflect the action of patterning control mechanisms such as Hox. The preserved A.L. 288-1 ("Lucy") sacrum contains five fused elements. However, the transverse processes of the most caudal element do not contact those of the segment immediately craniad to it, leaving incomplete sacral foramina on both sides. This conforms to the traditional definition of four-segmented sacra, which are very rare in humans and African apes. It was recently suggested that fossilization damage precludes interpretation of this specimen and that additional sacral-like features of its last segment (e.g., the extent of the sacral hiatus) suggest a general Australopithecus pattern of five sacral vertebrae. We provide updated descriptions of the original Lucy sacrum. We evaluate sacral/coccygeal variation in a large sample of extant hominoids and place it within the context of developmental variation in the mammalian vertebral column. We report that fossilization damage did not shorten the transverse processes of the fifth segment of Lucy's sacrum. In addition, we find that the extent of the sacral hiatus is too variable in apes and hominids to provide meaningful information on segment identity. Most importantly, a combination of sacral and coccygeal features is to be expected in vertebrae at regional boundaries. The sacral/caudal boundary appears to be displaced cranially in early hominids relative to extant African apes and humans, a condition consistent with the likely ancestral condition for Miocene hominoids. While not definitive in itself, a four-segmented sacrum accords well with the "long-back" model for the Pan/Homo last common ancestor. Am J Phys Anthropol 160:729-739, 2016. © 2016 Wiley Periodicals, Inc.

  16. Optimal reservoir operation policies using novel nested algorithms

    Science.gov (United States)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse", which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", an exponential growth of the computational complexity with the dimension of the state and decision space. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels, and 3) hydropower production that is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably fine discretization of the reservoir storage volume, which in combination with the release discretization required for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). Each nested algorithm is composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on how the objective function related to allocation deficits is formulated in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) the quadratic Knapsack method for nonlinear problems. The novel idea is to include the nested optimization algorithm within each state transition of the outer algorithm, so that releases to individual users do not have to be discretized.
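
    A minimal sketch of the "nested" idea under stated assumptions: an outer backward DP over a discretized storage grid calls an inner allocation solver (Simplex or quadratic Knapsack in the paper) at every candidate transition, so per-user releases never have to be discretized. The function and variable names here are illustrative, not the authors' implementation.

        def nested_dp(storages, inflows, horizon, step_cost, nested_allocate):
            """Backward DP whose transition cost embeds a nested allocation solve."""
            V = {s: 0.0 for s in storages}  # terminal value on the storage grid
            policy = {}
            for t in reversed(range(horizon)):
                V_new = {}
                for s in storages:
                    best_cost, best_release = float("inf"), None
                    for s_next in storages:
                        release = s + inflows[t] - s_next  # reservoir mass balance
                        if release < 0:
                            continue
                        # Inner ("nested") solve: optimally split `release` among users.
                        cost = step_cost(s, release, t) + nested_allocate(release, t) + V[s_next]
                        if cost < best_cost:
                            best_cost, best_release = cost, release
                    V_new[s] = best_cost
                    policy[(t, s)] = best_release
                V = V_new
            return V, policy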

  17. The contributions of Lewis Fry Richardson to drainage theory, soil physics, and the soil-plant-atmosphere continuum

    Science.gov (United States)

    Knight, John; Raats, Peter

    2016-04-01

    The EGU Division on Nonlinear Processes in Geophysics awards the Lewis Fry Richardson Medal. Richardson's significance is highlighted in http://www.egu.eu/awards-medals/portrait-lewis-fry-richardson/, but his contributions to soil physics and to numerical solutions of heat and diffusion equations are not mentioned. We would like to draw attention to those little-known contributions. Lewis Fry Richardson (1881-1953) made important contributions to many fields including numerical weather prediction, finite difference solutions of partial differential equations, turbulent flow and diffusion, fractals, quantitative psychology and studies of conflict. He invented numerical weather prediction during World War I, although his methods were not successfully applied until 1950, after the invention of fast digital computers. In 1922 he published the book 'Numerical weather prediction', of which few copies were sold and even fewer were read until the 1950s. To model heat and mass transfer in the atmosphere, he did much original work on turbulent flow and defined what is now known as the Richardson number. His technique for improving the convergence of a finite difference calculation is known as Richardson extrapolation, and was used by John Philip in his 1957 semi-analytical solution of the Richards equation for water movement in unsaturated soil. Richardson's first papers in 1908 concerned the numerical solution of the free surface problem of unconfined flow of water in saturated soil, arising in the design of drain spacing in peat. Later, for the lower boundary of his atmospheric model he needed to understand the movement of heat, liquid water and water vapor in what is now called the vadose zone and the soil-plant-atmosphere system, and to model coupled transfer of heat and flow of water in unsaturated soil. Finding little previous work, he formulated partial differential equations for transient, vertical flow of liquid water and for transfer of heat and water vapor.

  18. Comparison Between CCCM and CloudSat Radar-Lidar (RL) Cloud and Radiation Products

    Science.gov (United States)

    Ham, Seung-Hee; Kato, Seiji; Rose, Fred G.; Sun-Mack, Sunny

    2015-01-01

    To enhance cloud property retrievals, LaRC and CIRA each developed algorithms that combine observations from the passive, active, and imaging sensors of the A-Train satellite constellation. When global cloud fractions are compared, the LaRC-produced CERES-CALIPSO-CloudSat-MODIS (CCCM) product shows a larger low-level cloud fraction over the tropical oceans, while the CIRA-produced Radar-Lidar (RL) product shows a larger mid-level cloud fraction at high latitudes. The difference in low-level cloud fraction is due to the different methods used to filter lidar-detected cloud layers, while the difference in mid-level clouds arises from the different priorities given to cloud boundaries detected by lidar and radar.

  19. Effect of ferrocene-substituted porphyrin RL-91 on Candida albicans biofilm formation.

    Science.gov (United States)

    Lippert, Rainer; Vojnovic, Sandra; Mitrovic, Aleksandra; Jux, Norbert; Ivanović-Burmazović, Ivana; Vasiljevic, Branka; Stankovic, Nada

    2014-08-01

    Ferrocene-substituted porphyrin RL-91 exhibits antifungal activity against the opportunistic human pathogen Candida albicans. RL-91 efficiently inhibits growth of both planktonic C. albicans cells and cells within biofilms without photoactivation. The minimal inhibitory concentration for the planktonic form (PMIC) was established to be 100 μg/mL, and the same concentration killed 80% of sessile cells in the mature biofilm (SMIC80). Furthermore, RL-91 at the PMIC efficiently prevents C. albicans biofilm formation. RL-91 is cytotoxic for human fibroblasts in vitro at a concentration of 10 μg/mL; however, it does not cause hemolysis at concentrations of up to 50 μg/mL. These findings open the possibility of applying RL-91 as an antifungal agent for external antibiofilm treatment of medical devices, as well as using it as a scaffold for further development of porphyrin-based systemic antifungals. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Learning-based traffic signal control algorithms with neighborhood information sharing: An application for sustainable mobility

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhu, Feng [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering; Ukkusuri, Satish V. [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering

    2017-10-04

    This research applies an R-Markov Average Reward Technique based reinforcement learning (RL) algorithm, namely RMART, to the vehicular signal control problem, leveraging information sharing among signal controllers in a connected vehicle environment. We implemented the algorithm in a network of 18 signalized intersections and compared the performance of RMART with fixed, adaptive, and variants of the RL schemes. Results show significant improvement in system performance for the RMART algorithm with information sharing over both traditional fixed signal timing plans and real-time adaptive control schemes. Additionally, the comparison with reinforcement learning algorithms including Q-learning and SARSA indicates that RMART performs better at higher congestion levels. Further, a multi-reward structure is proposed that dynamically adjusts the reward function with varying congestion states at the intersection. Finally, the results from test networks show a significant reduction in emissions (CO, CO2, NOx, VOC, PM10) when RL algorithms are implemented compared to fixed signal timings and adaptive schemes.
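
    For orientation, the sketch below shows a single update of plain R-learning (Schwartz's average-reward analogue of Q-learning), the family RMART builds on; it omits the paper's neighborhood information sharing and multi-reward structure, and all names are illustrative.

        def r_learning_update(Q, rho, s, a, reward, s_next, alpha=0.1, beta=0.01):
            """One R-learning step; Q maps states to {action: value}, rho is the average reward."""
            best_next = max(Q[s_next].values())
            Q[s][a] += alpha * (reward - rho + best_next - Q[s][a])
            # Update the average-reward estimate only when the executed action is greedy.
            if Q[s][a] >= max(Q[s].values()):
                rho += beta * (reward + best_next - max(Q[s].values()) - rho)
            return rho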

  1. Framatome technologies awarded nickel plating contract at St. Lucie

    International Nuclear Information System (INIS)

    Anon.

    1997-01-01

    Framatome Technologies is to perform nickel plating on 120 pressurizer heater sleeves at Florida Power and Light's St. Lucie unit 1 during the fall 1997 outage; replacement heaters, fabricated by Framatome, that use stainless steel for the heater sheath will be installed. The nickel layer is deposited on the inside surface of the heater sleeves, which protects the Inconel 600 material and significantly reduces intergranular stress-corrosion cracking, as proven in other US and European plants.

  2. Status of CSR RL06 GRACE reprocessing and preliminary results

    Science.gov (United States)

    Save, H.

    2017-12-01

    The GRACE project plans to re-process the GRACE mission data in order to be consistent with the first gravity products released by the GRACE-FO project. The RL06 reprocessing will harmonize the GRACE time series with the first release of GRACE-FO. This paper catalogues the changes in the upcoming RL06 release and discusses the quality improvements as compared to the current RL05 release. The processing and parameterization changes relative to the current release are also discussed. This paper discusses the evolution of the quality of the GRACE solutions and characterizes the errors over the past few years. The possible challenges associated with connecting the GRACE time series with that from GRACE-FO are also discussed.

  3. Parametrization of the Richardson weather generator within the European Union

    NARCIS (Netherlands)

    Voet, van der P.; Kramer, K.; Diepen, van C.A.

    1996-01-01

    The Richardson model for mathematically generating daily weather data was parametrized. Thirty-year time series of the 355 main meteorological stations in the European Union formed the database. Model parameters were derived from both observed weather station data and interpolated weather data.

  4. Noisy Spins and the Richardson-Gaudin Model

    Science.gov (United States)

    Rowlands, Daniel A.; Lamacraft, Austen

    2018-03-01

    We study a system of spins (qubits) coupled to a common noisy environment, each precessing at its own frequency. The correlated noise experienced by the spins implies long-lived correlations that relax only due to the differing frequencies. We use a mapping to a non-Hermitian integrable Richardson-Gaudin model to find the exact spectrum of the quantum master equation in the high-temperature limit and, hence, determine the decay rate. Our solution can be used to evaluate the effect of inhomogeneous splittings on a system of qubits coupled to a common bath.

  5. Quench tank in-leakage diagnosis at St. Lucie

    Energy Technology Data Exchange (ETDEWEB)

    Price, J.E.; Au-Yang, M.K.; Beckner, D.A.; Vickery, A.N.

    1996-12-01

    In February 1995, leakage into the quench tank of the St. Lucie Nuclear Station Unit 1 was becoming an operational concern. This internal leak resulted in measurable increases in both the temperature and level of the quench tank water, and was so severe that, if the trend continued, a plant shutdown would be necessary. Preliminary diagnosis based on in-plant instrumentation indicated that any one of 11 valves might be leaking into the quench tank. This paper describes the joint effort by two teams of engineers--one from Florida Power & Light, the other from Framatome Technologies--to identify the sources of the leak, using the latest technology developed for valve diagnosis.

  6. Progress towards CSR RL06 GRACE gravity solutions

    Science.gov (United States)

    Save, Himanshu

    2017-04-01

    The GRACE project plans to re-process the GRACE mission data in order to be consistent with the first gravity products released by the GRACE-FO project. The next-generation Release-06 (RL06) gravity products from GRACE will include improvements in the GRACE Level-1 data products, the background gravity models, and the processing methodology. This paper will outline the planned improvements for CSR RL06 and discuss preliminary results. It will also discuss the evolution of the quality of the GRACE solutions, especially over the past few years, and the possible challenges we may face in connecting/extending the measurements of mass fluxes from the GRACE era to the GRACE-FO era due to the quality of the GRACE solutions from recent years.

  7. High-resolution CSR GRACE RL05 mascons

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2016-10-01

    The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals to some past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
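
    The regularization described above can be written in generic Tikhonov form (our notation, not necessarily CSR's exact parameterization): the mascon vector x is estimated from GRACE observations b by

        \hat{x} = \arg\min_{x}\, \|Ax - b\|^{2} + \alpha^{2}\|Lx\|^{2}
        \qquad\Longrightarrow\qquad
        \hat{x} = \left(A^{\mathsf{T}}A + \alpha^{2}L^{\mathsf{T}}L\right)^{-1} A^{\mathsf{T}} b,

    where A maps the gridded mascons to the observations, L encodes the (time-variable) constraint, and α sets the damping strength.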

  8. Quench tank in-leakage diagnosis at St. Lucie

    International Nuclear Information System (INIS)

    Price, J.E.; Au-Yang, M.K.; Beckner, D.A.; Vickery, A.N.

    1996-01-01

    In February 1995, leakage into the quench tank of the St. Lucie Nuclear Station Unit 1 was becoming an operational concern. This internal leak resulted in measurable increases in both the temperature and level of the quench tank water, and was so severe that, if the trend continued, a plant shutdown would be necessary. Preliminary diagnosis based on in-plant instrumentation indicated that any one of 11 valves might be leaking into the quench tank. This paper describes the joint effort by two teams of engineers--one from Florida Power & Light, the other from Framatome Technologies--to identify the sources of the leak, using the latest technology developed for valve diagnosis.

  9. Meeting the Challenge of Systemic Change in Geography Education: Lucy Sprague Mitchell's Young Geographers

    Science.gov (United States)

    Downs, Roger M.

    2016-01-01

    The history of K-12 geography education has been characterized by recurrent high hopes and dashed expectations. There have, however, been moments when the trajectory of geography education might have changed to offer students the opportunity to develop a thorough working knowledge of geography. Lucy Sprague Mitchell's geography program developed…

  10. DE-BLURRING SINGLE PHOTON EMISSION COMPUTED TOMOGRAPHY IMAGES USING WAVELET DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Neethu M. Sasi

    2016-02-01

    Single photon emission computed tomography imaging is a popular nuclear medicine imaging technique which generates images by detecting radiation emitted by radioactive isotopes injected into the human body. Scattering of the emitted radiation introduces blur in this type of image. This paper proposes an image processing technique to enhance cardiac single photon emission computed tomography images by reducing the blur in the image. The algorithm works in two main stages. In the first stage, a maximum likelihood estimate of the point spread function and the true image is obtained. In the second stage, the Lucy-Richardson algorithm is applied to selected wavelet coefficients of the true image estimate. The significant contribution of this paper is that the processing of images is done in the wavelet domain. Pre-filtering is also done as a sub-stage to avoid unwanted ringing effects. Real cardiac images are used for the quantitative and qualitative evaluation of the algorithm. Blur metric, peak signal-to-noise ratio and the Tenengrad criterion are used as quantitative measures. Comparison against other existing de-blurring algorithms is also done. The simulation results indicate that the proposed method effectively reduces the blur present in the image.
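
    For reference, the sketch below implements the core Lucy-Richardson update that the two-stage pipeline above applies to selected wavelet coefficients; this is the standard spatial-domain iteration, not the paper's full method.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=20, eps=1e-12):
            """Standard Richardson-Lucy deconvolution (non-negative, flux-preserving)."""
            estimate = np.full(image.shape, image.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / np.maximum(blurred, eps)  # guard against division by zero
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate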

  11. 76 FR 77563 - Florida Power & Light Company; St. Lucie Plant, Unit No. 1; Exemption

    Science.gov (United States)

    2011-12-13

    ... level for St. Lucie, Unit 1, from 2700 megawatts thermal (MWt) to 3020 MWt. As part of the LAR, the.... The above LAR referenced a topical report that stated that the proposed methodology for the P-T curves.... ML103560511), which references Combustion Engineering (CE) Owners Group Topical Report CE NPSD-683-A, Revision...

  12. Treadmill walking of the pneumatic biped Lucy: Walking at different speeds and step-lengths

    Science.gov (United States)

    Vanderborght, B.; Verrelst, B.; Van Ham, R.; Van Damme, M.; Versluys, R.; Lefeber, D.

    2008-07-01

    Actuators with adaptable compliance are gaining interest in the field of legged robotics due to their capability to store motion energy and to exploit the natural dynamics of the system to reduce energy consumption while walking and running. To perform research on compliant actuators, we have built the planar biped Lucy. The robot has six actuated joints, the ankle, knee and hip of both legs, with each joint powered by two pleated pneumatic artificial muscles in an antagonistic setup. This makes it possible to control both the torque and the stiffness of the joint. Such compliant actuators are used in passive walkers to overcome friction when walking over level ground and to improve stability. Typically, this kind of robot is only designed to walk with a constant walking speed and step-length, determined by the mechanical design of the mechanism and the properties of the ground. In this paper, we show that with an appropriate control strategy the robot Lucy is able to walk at different speeds and step-lengths, and that adding and releasing weights does not affect the stability of the robot. To perform these experiments, an automated treadmill was built.

  13. Face off: searching for truth and beauty in the clinical encounter. Based on the memoir, autobiography of a face by Lucy Grealy.

    Science.gov (United States)

    Shannon, Mary T

    2012-08-01

    Based on Lucy Grealy's memoir, Autobiography of a Face, this article explores the relationship between gender and illness in our culture, as well as the paradox of "intimacy without intimacy" in the clinical encounter. Included is a brief review of how authenticity, vulnerability, and mutual recognition of suffering can foster the kind of empathic doctor-patient relationship that Lucy Grealy sorely needed, but never received. As she says at the end of her memoir, "All those years I'd handed my ugliness over to people, and seen only the different ways it was reflected back to me."

  14. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thereby simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, either graphics processing unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
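
    The simplification above rests on the closure of Gaussians under convolution: in one dimension (and componentwise in 2D),

        G_{\sigma_1}(x) * G_{\sigma_2}(x) = G_{\sqrt{\sigma_1^{2} + \sigma_2^{2}}}(x),
        \qquad
        G_{\sigma}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right),

    so a Gaussian basis function blurred by a Gaussian PSF is again a Gaussian of known width, leaving only the control-point weights to be estimated.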

  15. Double cascade turbulence and Richardson dispersion in a horizontal fluid flow induced by Faraday waves.

    Science.gov (United States)

    von Kameke, A; Huhn, F; Fernández-García, G; Muñuzuri, A P; Pérez-Muñuzuri, V

    2011-08-12

    We report the experimental observation of Richardson dispersion and a double cascade in a thin horizontal fluid flow induced by Faraday waves. The energy spectra and the mean spectral energy flux obtained from particle image velocimetry data suggest an inverse energy cascade with Kolmogorov-type scaling E(k) ∝ k^γ, γ ≈ -5/3, and an E(k) ∝ k^γ, γ ≈ -3 enstrophy cascade. Particle transport is studied by analyzing absolute and relative dispersion as well as the finite-size Lyapunov exponent (FSLE) via the direct tracking of real particles and numerical advection of virtual particles. Richardson dispersion with ⟨ΔR²⟩ ∝ t³ is observed and is also reflected in the slopes of the FSLE (Λ ∝ ΔR^(-2/3)) for virtual and real particles.
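
    For context, the t³ growth above is Richardson's scaling law for relative dispersion in the inertial range; in standard form (not quoted from the paper),

        \langle \Delta R^{2}(t) \rangle = g\,\varepsilon\, t^{3},
        \qquad
        \Lambda(\Delta R) \propto \varepsilon^{1/3}\, \Delta R^{-2/3},

    where ε is the energy flux, g the Richardson constant, and Λ the finite-size Lyapunov exponent; the ΔR^(-2/3) slope of the FSLE is the finite-size counterpart of the t³ law.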

  16. Fungsi Bangunan Dokwi Vam dan Kembu Vam Bagi Suku Yali dalam Novel Penguasa-penguasa Bumi Karya Don Richardson

    Directory of Open Access Journals (Sweden)

    Ummu Fatimah Ria Lestari

    2016-04-01

    This study discusses the functions of the Dokwi Vam and Kembu Vam buildings in the novel Penguasa-Penguasa Bumi (Lords of the Earth) by Don Richardson. In general, the novel tells the story of the life of Stan Dale and the Yali tribe. Stan Dale was a missionary who served in Papua and struggled to introduce Christianity to the Yali. The study uses a descriptive method with literature review. It produces a description of the functions of the two buildings: the Dokwi Vam was used as a museum (a place for old objects of worship), as the Yali still followed animist beliefs, while the Kembu Vam served as a house of worship in the animist religion of the Yali tribe.

  17. Evaluation of Clear-Sky Incoming Radiation Estimating Equations Typically Used in Remote Sensing Evapotranspiration Algorithms

    Directory of Open Access Journals (Sweden)

    Ted W. Sammis

    2013-09-01

    Net radiation is a key component of the energy balance, whose estimation accuracy has an impact on energy flux estimates from satellite data. In typical remote sensing evapotranspiration (ET) algorithms, the outgoing shortwave and longwave components of net radiation are obtained from remote sensing data, while the incoming shortwave (RS) and longwave (RL) components are typically estimated from weather data using empirical equations. This study evaluates the accuracy of empirical equations commonly used in remote sensing ET algorithms for estimating RS and RL radiation. The evaluation is carried out through comparison of estimates and observations at five sites that represent different climatic regions, from humid to arid. Results reveal that (1) both RS and RL estimates from all evaluated equations correlate well with observations (R² ≥ 0.92), (2) RS estimating equations tend to overestimate, especially at higher values, (3) RL estimating equations tend to give more biased values in arid and semi-arid regions, (4) a model that parameterizes the diffuse component of radiation using two clearness indices and a simple model that assumes a linear increase of atmospheric transmissivity with elevation give better RS estimates, and (5) the mean relative absolute error in the net radiation (Rn) estimates caused by the use of RS and RL estimating equations varies from 10% to 22%. This study suggests that Rn estimates using the recommended incoming radiation estimating equations could improve ET estimates.
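
    One widely used instance of the elevation-based transmissivity model mentioned in result (4) is the FAO-56 clear-sky formula, quoted here as a representative form (the paper's exact equations may differ):

        R_{so} = \left(0.75 + 2\times 10^{-5}\, z\right) R_a,

    where R_so is the clear-sky incoming shortwave radiation, z the station elevation in metres, and R_a the extraterrestrial radiation.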

  18. Calibration of piezoelectric RL shunts with explicit residual mode correction

    DEFF Research Database (Denmark)

    Høgsberg, Jan Becker; Krenk, Steen

    2017-01-01

    Piezoelectric RL (resistive-inductive) shunts are passive resonant devices used for damping of dominant vibration modes of a flexible structure, and their efficiency relies on the precise calibration of the shunt components. In the present paper, improved calibration accuracy is attained by an extension of the local piezoelectric transducer displacement with two additional terms, representing the flexibility and inertia contributions from the residual vibration modes not directly addressed by the shunt damping. This results in an augmented dynamic model for the targeted resonant vibration mode.

  19. Styrene maleic acid-encapsulated RL71 micelles suppress tumor growth in a murine xenograft model of triple negative breast cancer.

    Science.gov (United States)

    Martey, Orleans; Nimick, Mhairi; Taurin, Sebastien; Sundararajan, Vignesh; Greish, Khaled; Rosengren, Rhonda J

    2017-01-01

    Patients with triple negative breast cancer have a poor prognosis due in part to the lack of targeted therapies. In the search for novel drugs, our laboratory has developed a second-generation curcumin derivative, 3,5-bis(3,4,5-trimethoxybenzylidene)-1-methylpiperidine-4-one (RL71), that exhibits potent in vitro cytotoxicity. To improve the clinical potential of this drug, we have encapsulated it in styrene maleic acid (SMA) micelles. SMA-RL71 showed improved biodistribution, and drug accumulation in the tumor increased 16-fold compared to control. SMA-RL71 (10 mg/kg, intravenously, two times a week for 2 weeks) also significantly suppressed tumor growth compared to control in a xenograft model of triple negative breast cancer. Free RL71 was unable to alter tumor growth. Tumors from SMA-RL71-treated mice showed a decrease in angiogenesis and an increase in apoptosis. The drug treatment also modulated various cell signaling proteins including the epidermal growth factor receptor, with the mechanisms for tumor suppression consistent with previous work with RL71 in vitro. The nanoformulation was also nontoxic as shown by normal levels of plasma markers for liver and kidney injury following weekly administration of SMA-RL71 (10 mg/kg) for 90 days. Thus, we report clinical potential following encapsulation of a novel curcumin derivative, RL71, in SMA micelles.

  20. British expert: with the European Union, unemployment will disappear from Estonia / Michael Richardson; interviewed by Airi Ilisson

    Index Scriptorium Estoniae

    Richardson, Michael

    2003-01-01

    According to the British labour and social affairs adviser visiting Estonia to present Britain's employment reform, long-term unemployment among young people has nearly disappeared in Great Britain, and only a few older people have been out of work for longer than a year. See also: Who is Michael Richardson?

  1. PeRL: A circum-Arctic Permafrost Region Pond and Lake database

    Science.gov (United States)

    Muster, Sina; Roth, Kurt; Langer, Moritz; Lange, Stephan; Cresto Aleina, Fabio; Bartsch, Annett; Morgenstern, Anne; Grosse, Guido; Jones, Benjamin; Sannel, A.B.K.; Sjoberg, Ylva; Gunther, Frank; Andresen, Christian; Veremeeva, Alexandra; Lindgren, Prajna R.; Bouchard, Frédéric; Lara, Mark J.; Fortier, Daniel; Charbonneau, Simon; Virtanen, Tarmo A.; Hugelius, Gustaf; Palmtag, J.; Siewert, Matthias B.; Riley, William J.; Koven, Charles; Boike, Julia

    2017-01-01

    Ponds and lakes are abundant in Arctic permafrost lowlands. They play an important role in Arctic wetland ecosystems by regulating carbon, water, and energy fluxes and providing freshwater habitats. However, ponds, i.e., waterbodies with surface areas smaller than 1.0 × 10⁴ m², have not been inventoried on global and regional scales. The Permafrost Region Pond and Lake (PeRL) database presents the results of a circum-Arctic effort to map ponds and lakes from modern (2002-2013) high-resolution aerial and satellite imagery with a resolution of 5 m or better. The database also includes historical imagery from 1948 to 1965 with a resolution of 6 m or better. PeRL includes 69 maps covering a wide range of environmental conditions from tundra to boreal regions and from continuous to discontinuous permafrost zones. Waterbody maps are linked to regional permafrost landscape maps which provide information on permafrost extent, ground ice volume, geology, and lithology. This paper describes waterbody classification and accuracy, and presents statistics of waterbody distribution for each site. Maps of permafrost landscapes in Alaska, Canada, and Russia are used to extrapolate waterbody statistics from the site level to regional landscape units. PeRL presents pond and lake estimates for a total area of 1.4 × 10⁶ km² across the Arctic, about 17 % of the Arctic lowland (<300 m a.s.l.) land surface area. PeRL waterbodies with sizes of 1.0 × 10⁶ m² down to 1.0 × 10² m² contributed up to 21 % to the total water fraction. Waterbody density ranged from 1.0 × 10⁰ to 9.4 × 10¹ km⁻². Ponds are the dominant waterbody type by number in all landscapes, representing 45-99 % of the total waterbody number. The implementation of PeRL size distributions in land surface models will greatly improve the investigation and projection of surface inundation and carbon fluxes in permafrost lowlands. Waterbody maps, study area

  2. Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields

    Science.gov (United States)

    Bettadpur, S.

    2012-04-01

    The next-generation Release-05 GRACE gravity field data products are the result of extensive effort applied to the improvements to the GRACE Level-1 (tracking) data products, and to improvements in the background gravity models and processing methodology. As a result, the squared-error upper-bound in RL05 fields is half or less than the squared-error upper-bound in RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse Hydrologic, Oceanographic and Cryospheric processes.

  3. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    Science.gov (United States)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear if amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes in the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy correspondingly, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
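
    As a concrete reminder of the mechanism being analyzed, the sketch below applies classical Richardson extrapolation to a base scheme of known order p: combining results at step sizes h and h/2 cancels the leading error term. This is the textbook construction, not the optimized schemes of the paper.

        import math

        def richardson(approx, h, p):
            """Extrapolate approx(h), a method of order p, to order p+1 (or higher)."""
            coarse = approx(h)
            fine = approx(h / 2.0)
            return (2**p * fine - coarse) / (2**p - 1)

        # Example: a second-order central difference of sin at x = 1, extrapolated.
        deriv = richardson(lambda h: (math.sin(1 + h) - math.sin(1 - h)) / (2 * h), 1e-2, 2)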

  4. The Lund University Checklist for Incipient Exhaustion: a prospective validation of the onset of sustained stress and exhaustion warnings

    Directory of Open Access Journals (Sweden)

    Kai Österberg

    2016-09-01

    Background: The need for instruments that can assist in detecting the prodromal stages of stress-related exhaustion has been acknowledged. The aim of the present study was to evaluate whether the Lund University Checklist for Incipient Exhaustion (LUCIE) could accurately and prospectively detect the onset of incipient exhaustion, and to what extent work stressor exposure and private burdens were associated with increasing LUCIE scores. Methods: Using surveys, 1355 employees were followed for 11 quarters. Participants with prospectively elevated LUCIE scores were targeted by three algorithms, each spanning 4 quarters: (1) abrupt onset of a sustained Stress Warning (n = 18), (2) gradual onset of a sustained Stress Warning (n = 42), and (3) a sustained Exhaustion Warning (n = 36). The targeted participants' survey reports on changes in work situation and private life during the fulfillment of any algorithm criteria were analyzed, together with the interview data. Participants untargeted by the algorithms constituted a control group (n = 745). Results: Eighty-seven percent of participants fulfilling any LUCIE algorithm criteria (LUCIE indication cases) rated a negative change in their work situation during the 4 quarters, compared to 48 % of controls. Ratings of negative changes in private life were also more common in the LUCIE indication groups than among controls (58 % vs. 29 %), but free-text commentaries revealed that almost half of the ratings in the LUCIE indication groups were due to work-to-family conflicts and health problems caused by excessive workload, assigned more properly to work-related negative changes. When excluding the themes related to work-stress-related compromises of private life, negative private life changes in the LUCIE indication groups dropped from 58 to 32 %, while only a negligible drop from 29 to 26 % was observed among controls. In retrospective interviews, 79 % of the LUCIE indication participants

  5. Anti-Proliferative Effects of Siegesbeckia orientalis Ethanol Extract on Human Endometrial RL-95 Cancer Cells

    Directory of Open Access Journals (Sweden)

    Chi-Chang Chang

    2014-12-01

    Endometrial cancer is a common malignancy of the female genital tract. This study demonstrates that Siegesbeckia orientalis ethanol extract (SOE) significantly inhibited the proliferation of RL95-2 human endometrial cancer cells. Treating RL95-2 cells with SOE caused cell arrest in the G2/M phase and induced apoptosis of RL95-2 cells by up-regulating Bad, Bak and Bax protein expression and down-regulating Bcl-2 and Bcl-xL protein expression. Treatment with SOE increased the protein expression of caspase-3, -8 and -9 dose-dependently, indicating that apoptosis proceeded through both the intrinsic and extrinsic apoptotic pathways. Moreover, SOE was also effective against A549 (lung cancer), Hep G2 (hepatoma), FaDu (pharynx squamous cancer), MDA-MB-231 (breast cancer), and especially LNCaP (prostate cancer) cell lines. In total, 10 constituents of SOE were identified by gas chromatography-mass spectrometry analysis. Caryophyllene oxide and caryophyllene are largely responsible for most of the cytotoxic activity of SOE against RL95-2 cells. Overall, this study suggests that SOE is a promising anticancer agent for treating endometrial cancer.

  6. Meloidogyne luci n. sp. (Nematoda: Meloidogynidae), a root-knot nematode parasitising different crops in Brazil, Chile and Iran

    NARCIS (Netherlands)

    Carneiro, R.M.D.G.; Correa, V.R.; Almeida, M.R.A.; Gomes, A.C.M.M.; Deimi, A.M.; Castagnone-Sereno, P.; Karssen, G.

    2014-01-01

    A new root-knot nematode parasitising vegetables, flowers and fruits in Brazil, Iran and Chile, is described as Meloidogyne luci n. sp. The female has an oval to squarish perineal pattern with a low to moderately high dorsal arc and without shoulders, similar to M. ethiopica. The female stylet is

  7. Zanaatkârlığın Tarihsel Dönüşümü Ve Richard Sennett’in Zanaatkârlık Kavramı / Historical Transformation of Craftsmanship and Richard Sennett’s Concept of Craftsmanhip

    Directory of Open Access Journals (Sweden)

    Umut Osmanlı

    2017-06-01

    This study examines the concept of craftsmanship, which means becoming expert in a consciously performed activity. Specializing in a field in order to make life easier dates back to the earliest human settlements. Craftsmanship is a phenomenon in which a person carries out a craft in the best possible way for its own sake. However, it has gone through several transformations since its emergence. The history of craftsmanship includes milestones such as its institutionalization, its unification, and its transformation after the Industrial Revolution. The most essential of these transformations came with the Industrial Revolution, after which craftsmanship became difficult to define; at present, the craftsman has been reduced to the status of a simple worker. In this work, which traces the historical transformation of craftsmanship, the following are examined in turn: the historical roots of craftsmanship, its social status, the professional organizations that held craftsmen together, the effects on craftsmanship of the new economic order brought by the Industrial Revolution, and the ideas of Richard Sennett on craftsmanship, in order to evaluate craftsmanship in present-day society. In this way, I will try to reach definitions of craftsmanship that can address today's society by taking references from the roots of the archaic phenomenon of craftsmanship.

  8. Nickel detoxification and plant growth promotion by multi metal resistant plant growth promoting Rhizobium species RL9.

    Science.gov (United States)

    Wani, Parvaze Ahmad; Khan, Mohammad Saghir

    2013-07-01

    Pollution of the biosphere by heavy metals is a global threat that has accelerated dramatically since the beginning of the industrial revolution. The aim of this study was to check the resistance of strain RL9 towards metals and to observe the effect of the Rhizobium species on growth, pigment content, protein and nickel uptake by lentil in the presence and absence of nickel. The multi-metal-tolerant and plant growth promoting Rhizobium strain RL9 was isolated from the nodules of lentil. The strain not only tolerated nickel but was also tolerant to cadmium, chromium, lead, zinc and copper. The strain tolerated nickel at 500 μg/mL, cadmium at 300 μg/mL, chromium at 400 μg/mL, lead at 1,400 μg/mL, zinc at 1,000 μg/mL and copper at 300 μg/mL, produced a good amount of indole acetic acid, and was also positive for siderophore, hydrogen cyanide and ammonia production. The strain RL9 was further assessed with increasing concentrations of nickel when lentil was used as a test crop. The strain RL9 significantly increased growth, nodulation, chlorophyll, leghaemoglobin, nitrogen content, seed protein and seed yield compared to plants grown in the absence of the bioinoculant but amended with nickel. The strain RL9 decreased the uptake of nickel in lentil compared to plants grown in the absence of the bioinoculant. Due to these intrinsic abilities, strain RL9 could be utilized for growth promotion as well as for the remediation of nickel-contaminated soil.

  9. Kino repertuāra pārlūks Android ierīcēm (Cinema repertoire browser for Android devices)

    OpenAIRE

    Zvirbulis, Jānis

    2013-01-01

    The qualification thesis "Kino repertuāra pārlūks Android ierīcēm" (Cinema repertoire browser for Android devices) describes the development and functionality of the Android application "Kino repertuāra pārlūks". The application is intended for browsing a cinema's repertoire on tablets and mobile phones running the Android operating system. It is meant as a more convenient alternative to finding film descriptions and screening times through the cinema's website, flyers or posters. Keywords: Android, films, browser.

  10. The permeability of EUDRAGIT RL and HEMA-MMA microcapsules to glucose and inulin.

    Science.gov (United States)

    Douglas, J A; Sefton, M V

    1990-10-05

    Measurement of the rate of glucose diffusion from EUDRAGIT RL and HEMA-MMA microcapsules, coupled with a Thiele modulus/Biot number analysis of the glucose utilization rate, suggests that pancreatic islets and CHO (Chinese hamster ovary) cells (at moderate to high cell densities) should not be adversely affected by the diffusion restrictions associated with these capsule membranes. The mass transfer coefficients for glucose at 20 °C were of the same order of magnitude for both capsules, based on release measurements: approximately 5 × 10⁻⁶ cm/s for EUDRAGIT RL and approximately 2 × 10⁻⁶ cm/s for HEMA-MMA. Inulin release from EUDRAGIT RL was slower than that of glucose (mass transfer coefficient (14 ± 4) × 10⁻⁸ cm/s). The Thiele moduli were much less than 1, either for a single islet at the center of a capsule or for CHO cells uniformly distributed throughout a capsule at 10⁶ cells/mL, so that diffusion restrictions within the cells in EUDRAGIT RL or 800 μm HEMA-MMA capsules should be negligible. The ratio of external to internal diffusion resistance (Biot number) was less than 1, so that at most only a small diffusion effect on glucose utilization should be expected (i.e., the overall effectiveness factors were greater than 0.8). These calculations were consistent with experimental observation of encapsulated islet behavior, but not fully with CHO cell behavior. Permeability-restricted cell viability and growth is potentially a major limitation of encapsulated cells; further analysis is warranted.
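
    As a reminder of the quantities used in this analysis, in generic first-order slab form (the paper's exact geometry factors are not reproduced here):

        \phi = L\sqrt{\frac{k}{D_e}}, \qquad
        \eta = \frac{\tanh\phi}{\phi}, \qquad
        \mathrm{Bi} = \frac{k_m L}{D_e},

    where φ is the Thiele modulus for characteristic length L, volumetric uptake rate constant k and effective diffusivity D_e; η is the internal effectiveness factor (η → 1 as φ → 0); and Bi compares external film transfer k_m with internal diffusion.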

  11. Exposure to welding fumes is associated with hypomethylation of the F2RL3 gene: a cardiovascular disease marker.

    Science.gov (United States)

    Hossain, Mohammad B; Li, Huiqi; Hedmer, Maria; Tinnerberg, Håkan; Albin, Maria; Broberg, Karin

    2015-12-01

    Welders are at risk for cardiovascular disease. Recent studies have linked tobacco smoke exposure to hypomethylation of the F2RL3 (coagulation factor II (thrombin) receptor-like 3) gene, a marker for cardiovascular disease prognosis and mortality. However, whether welding fumes cause hypomethylation of F2RL3 remains unknown. We investigated 101 welders (median span of working as a welder: 7 years) and 127 unexposed controls (non-welders with no obvious exposure to respirable dust at work), age range 23-60 years, all currently non-smoking, in Sweden. The participants were interviewed about their work history, lifestyle factors and diseases. Personal sampling of respirable dust was performed for the welders. DNA methylation of F2RL3 in blood was assessed by pyrosequencing of four CpG sites, CpG_2 (corresponding to cg03636183) to CpG_5, in F2RL3. Multivariable linear regression analysis was used to assess the association between exposure to welding fumes and F2RL3 methylation. Welders had 2.6% lower methylation of CpG_5 than controls. Exposure to welding fumes and previous smoking were associated with F2RL3 hypomethylation. This finding links low-to-moderate exposure to welding fumes to adverse effects on the cardiovascular system, and suggests a potential mechanistic pathway for this link, via epigenetic effects on F2RL3 expression. Published by the BMJ Publishing Group Limited.

  12. Toxin composition of the 2016 Microcystis aeruginosa bloom in the St. Lucie Estuary, Florida.

    Science.gov (United States)

    Oehrle, Stuart; Rodriguez-Matos, Marliette; Cartamil, Michael; Zavala, Cristian; Rein, Kathleen S

    2017-11-01

    A bloom of the cyanobacteria, Microcystis aeruginosa occurred in the St. Lucie Estuary during the summer of 2016, stimulated by the release of waters from Lake Okeechobee. This cyanobacterium produces the microcystins, a suite of heptapeptide hepatotoxins. The toxin composition of the bloom was analyzed and was compared to an archived bloom sample from 2005. Microcystin-LR was the most abundant toxin with lesser amounts of microcystin variants. Nodularin, cylindrospermopsin and anatoxin-a were not detected. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. PeRL: a circum-Arctic Permafrost Region Pond and Lake database

    Science.gov (United States)

    Muster, Sina; Roth, Kurt; Langer, Moritz; Lange, Stephan; Cresto Aleina, Fabio; Bartsch, Annett; Morgenstern, Anne; Grosse, Guido; Jones, Benjamin; Sannel, A. Britta K.; Sjöberg, Ylva; Günther, Frank; Andresen, Christian; Veremeeva, Alexandra; Lindgren, Prajna R.; Bouchard, Frédéric; Lara, Mark J.; Fortier, Daniel; Charbonneau, Simon; Virtanen, Tarmo A.; Hugelius, Gustaf; Palmtag, Juri; Siewert, Matthias B.; Riley, William J.; Koven, Charles D.; Boike, Julia

    2017-06-01

    Ponds and lakes are abundant in Arctic permafrost lowlands. They play an important role in Arctic wetland ecosystems by regulating carbon, water, and energy fluxes and providing freshwater habitats. However, ponds, i.e., waterbodies with surface areas smaller than 1.0 × 10⁴ m², have not been inventoried on global and regional scales. The Permafrost Region Pond and Lake (PeRL) database presents the results of a circum-Arctic effort to map ponds and lakes from modern (2002-2013) high-resolution aerial and satellite imagery with a resolution of 5 m or better. The database also includes historical imagery from 1948 to 1965 with a resolution of 6 m or better. PeRL includes 69 maps covering a wide range of environmental conditions from tundra to boreal regions and from continuous to discontinuous permafrost zones. Waterbody maps are linked to regional permafrost landscape maps which provide information on permafrost extent, ground ice volume, geology, and lithology. This paper describes waterbody classification and accuracy, and presents statistics of waterbody distribution for each site. Maps of permafrost landscapes in Alaska, Canada, and Russia are used to extrapolate waterbody statistics from the site level to regional landscape units. PeRL presents pond and lake estimates for a total area of 1.4 × 10⁶ km² across the Arctic, about 17 % of the Arctic lowland (<300 m a.s.l.) land surface area. The database is available at https://doi.pangaea.de/10.1594/PANGAEA.868349.

  14. Cognitive Radio Transceivers: RF, Spectrum Sensing, and Learning Algorithms Review

    Directory of Open Access Journals (Sweden)

    Lise Safatly

    2014-01-01

    Cognitive radio (CR) transceivers rely on reconfigurable radio frequency (RF) parts, enhanced spectrum sensing algorithms, and sophisticated machine learning techniques. In this paper, we present a review of the recent advances in CR transceiver hardware design and algorithms. For the RF part, three types of antennas are presented: UWB antennas, frequency-reconfigurable/tunable antennas, and UWB antennas with reconfigurable band notches. The main challenges faced in the design of the other RF blocks are also discussed. Sophisticated spectrum sensing algorithms that overcome the main sensing challenges, such as model uncertainty, hardware impairments, and wideband sensing, are highlighted. The features of the cognitive engine are discussed. Moreover, we study unsupervised classification algorithms and a reinforcement learning (RL) algorithm that has been proposed to perform decision-making in CR networks.
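
    To make the RL decision-making concrete, the sketch below shows a bandit-style (stateless) Q-learning loop for channel selection; this is a common textbook formulation rather than the specific algorithm reviewed, and the environment hook sense_and_transmit and all parameters are illustrative assumptions.

        import random

        def q_channel_selection(n_channels, sense_and_transmit, episodes=1000,
                                alpha=0.1, eps=0.1):
            """Learn per-channel values Q[c] with an epsilon-greedy bandit update."""
            Q = [0.0] * n_channels
            for _ in range(episodes):
                if random.random() < eps:
                    c = random.randrange(n_channels)               # explore
                else:
                    c = max(range(n_channels), key=Q.__getitem__)  # exploit
                r = sense_and_transmit(c)  # e.g. +1 on success, -1 on collision (assumed)
                Q[c] += alpha * (r - Q[c])
            return Q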

  15. Tumor Cells Express FcγRI Which Contributes to Tumor Cell Growth and a Metastatic Phenotype

    Directory of Open Access Journals (Sweden)

    M. Bud Nelson

    2001-01-01

    High levels of circulating immune complexes containing tumor-associated antigens are associated with a poor prognosis for individuals with cancer. The ability of B cells previously exposed to tumor-associated antigens to promote both in vitro and in vivo tumor growth formed the rationale for evaluating the mechanism by which immune complexes may promote tumor growth. In elucidating this mechanism, FcγRI expression by tumor cells was characterized by flow cytometry, polymerase chain reaction, and sequence analysis. Immune complexes containing shed tumor antigen and anti-shed-tumor-antigen antibody cross-linked FcγRI-expressing tumor cells, which resulted in an induction of tumor cell proliferation and of shed tumor antigen production. The use of selective tyrosine kinase inhibitors demonstrated that tumor cell proliferation induced by immune complex cross-linking of FcγRI is dependent on the tyrosine kinase signal transduction pathway. A selective inhibitor of phosphatidylinositol-3 kinase also inhibited this induction of tumor cell proliferation. These findings support a role for immune complexes and FcγRI expression by tumor cells in the augmentation of tumor growth and a metastatic phenotype.

  16. Styrene maleic acid-encapsulated RL71 micelles suppress tumor growth in a murine xenograft model of triple negative breast cancer

    Directory of Open Access Journals (Sweden)

    Martey O

    2017-10-01

    Orleans Martey,1 Mhairi Nimick,1 Sebastien Taurin,1 Vignesh Sundararajan,1 Khaled Greish,2 Rhonda J Rosengren1 1Department of Pharmacology and Toxicology, University of Otago, Dunedin, New Zealand; 2Department of Molecular Medicine, College of Medicine and Medical Sciences, Arabian Gulf University, Manama, Kingdom of Bahrain. Abstract: Patients with triple negative breast cancer have a poor prognosis due in part to the lack of targeted therapies. In the search for novel drugs, our laboratory has developed a second-generation curcumin derivative, 3,5-bis(3,4,5-trimethoxybenzylidene)-1-methylpiperidine-4-one (RL71), that exhibits potent in vitro cytotoxicity. To improve the clinical potential of this drug, we have encapsulated it in styrene maleic acid (SMA) micelles. SMA-RL71 showed improved biodistribution, and drug accumulation in the tumor increased 16-fold compared to control. SMA-RL71 (10 mg/kg, intravenously, two times a week for 2 weeks) also significantly suppressed tumor growth compared to control in a xenograft model of triple negative breast cancer. Free RL71 was unable to alter tumor growth. Tumors from SMA-RL71-treated mice showed a decrease in angiogenesis and an increase in apoptosis. The drug treatment also modulated various cell signaling proteins including the epidermal growth factor receptor, with the mechanisms for tumor suppression consistent with previous work with RL71 in vitro. The nanoformulation was also nontoxic, as shown by normal levels of plasma markers for liver and kidney injury following weekly administration of SMA-RL71 (10 mg/kg) for 90 days. Thus, we report clinical potential following encapsulation of a novel curcumin derivative, RL71, in SMA micelles. Keywords: curcumin derivatives, nanomedicine, EGFR, biodistribution

  17. Biotreatment of anthraquinone dye Drimarene Blue K 2 RL | Siddiqui ...

    African Journals Online (AJOL)

    Drimarene Blue (Db) K2RL is a reactive anthraquinone dye used extensively in the textile industry; due to its poor adsorbability to textile fiber, a high proportion of the dye ends up in wastewater. The dye is toxic, carcinogenic, mutagenic and resistant to degradation. Decolorization of this dye was studied in two different systems.

  18. Richardson effects in turbulent buoyant flows

    Science.gov (United States)

    Biggi, Renaud; Blanquart, Guillaume

    2010-11-01

    Rayleigh-Taylor instabilities are found in a wide range of scientific fields, from supernova explosions to underwater hot plumes. The turbulent flow is affected by the presence of buoyancy forces and may no longer follow Kolmogorov theory. The objective of the present work is to analyze the complex interactions between turbulence and buoyancy. Toward that goal, simulations have been performed with a high-order, conservative, low Mach number code [Desjardins et al., JCP 2010]. The configuration corresponds to a cubic box initially filled with homogeneous isotropic turbulence, with heavy fluid on top and light gas at the bottom. The initial turbulent field was forced using linear forcing up to a Reynolds number of Reλ = 55 [Meneveau & Rosales, POF 2005]. The Richardson number based on the rms velocity and the integral length scale was varied from 0.1 to 10 to investigate cases with weak and strong buoyancy. Cases with gravity acting as a stabilizer of turbulence (gravity pointing up) were also considered. The evolution of the turbulent kinetic energy and the total kinetic energy was analyzed and a simple phenomenological model was proposed. Finally, the energy spectra and the isotropy of the flow were also investigated.

  19. Reconstruction of the two-dimensional gravitational potential of galaxy clusters from X-ray and Sunyaev-Zel'dovich measurements

    Science.gov (United States)

    Tchernin, C.; Bartelmann, M.; Huber, K.; Dekel, A.; Hurier, G.; Majer, C. L.; Meyer, S.; Zinger, E.; Eckert, D.; Meneghetti, M.; Merten, J.

    2018-06-01

    Context: The mass of galaxy clusters is not a direct observable; nonetheless, it is commonly used to probe cosmological models. Based on the combination of all main cluster observables, that is, the X-ray emission, the thermal Sunyaev-Zel'dovich (SZ) signal, the velocity dispersion of the cluster galaxies, and gravitational lensing, the gravitational potential of galaxy clusters can be jointly reconstructed. Aims: We derive the two main ingredients required for this joint reconstruction: the potentials individually reconstructed from the observables and their covariance matrices, which act as a weight in the joint reconstruction. We show here the method to derive these quantities. The result of the joint reconstruction applied to a real cluster will be discussed in a forthcoming paper. Methods: We apply the Richardson-Lucy deprojection algorithm to data on a two-dimensional (2D) grid. We first test the 2D deprojection algorithm on a β-profile. Assuming hydrostatic equilibrium, we further reconstruct the gravitational potential of a simulated galaxy cluster based on synthetic SZ and X-ray data. We then reconstruct the projected gravitational potential of the massive and dynamically active cluster Abell 2142, based on the X-ray observations collected with XMM-Newton and the SZ observations from the Planck satellite. Finally, we compute the covariance matrix of the projected reconstructed potential of the cluster Abell 2142 based on the X-ray measurements collected with XMM-Newton. Results: The gravitational potentials of the simulated cluster recovered from synthetic X-ray and SZ data are consistent, even though the potential reconstructed from X-rays shows larger deviations from the true potential. Regarding Abell 2142, the projected gravitational cluster potentials recovered from SZ and X-ray data reproduce well the projected potential inferred from gravitational-lensing observations. We also observe that the covariance matrix of the potential for Abell 2142
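
    The Richardson-Lucy deprojection applied here follows Lucy's iterative scheme; in generic form (standard notation, not the paper's), given a projection kernel K and an observed projected profile g, the deprojected estimate f is updated as

        f^{(n+1)}(r) = f^{(n)}(r) \int \frac{g(s)}{g^{(n)}(s)}\, K(s \mid r)\, ds,
        \qquad
        g^{(n)}(s) = \int K(s \mid r)\, f^{(n)}(r)\, dr,

    which preserves positivity and iterates toward a maximum-likelihood solution for Poisson-like data.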

  20. 3D Laser Processing: The Renault RL5

    Science.gov (United States)

    Rolland, Olivier C.; Meyer, Bernard D.

    1986-11-01

    The RL5, a five-axis robot, is designed to steer a powerful laser beam along three-dimensional (3D) trajectories with great accuracy. Cutting and welding with a CO2 laser beam and drilling with a YAG laser beam are some applications of this machine, which can be integrated in a production line. Easy management and modification of trajectories, obtained either in a teaching mode or from a CAD-CAM system, give the laser tool its main interest: flexibility.

  1. A new 2DS·2RL Robertsonian translocation transfers stem rust resistance gene Sr59 into wheat.

    Science.gov (United States)

    Rahmatov, Mahbubjon; Rouse, Matthew N; Nirmala, Jayaveeramuthu; Danilova, Tatiana; Friebe, Bernd; Steffenson, Brian J; Johansson, Eva

    2016-07-01

    A new stem rust resistance gene Sr59 from Secale cereale was introgressed into wheat as a 2DS·2RL Robertsonian translocation. Emerging new races of the wheat stem rust pathogen (Puccinia graminis f. sp. tritici) from Africa threaten global wheat (Triticum aestivum L.) production. To broaden the resistance spectrum of wheat to these widely virulent African races, additional resistance genes must be identified from all possible gene pools. From the screening of a collection of wheat-rye (Secale cereale L.) chromosome substitution lines developed at the Swedish University of Agricultural Sciences, we identified the line 'SLU238' 2R (2D) as possessing resistance to many races of P. graminis f. sp. tritici, including the widely virulent race TTKSK (isolate synonym Ug99) from Africa. The breakage-fusion mechanism of univalent chromosomes was used to produce a new Robertsonian translocation: T2DS·2RL. Molecular marker analysis and stem rust seedling assays over multiple generations confirmed that the stem rust resistance from 'SLU238' is present on the rye chromosome arm 2RL. Line TA5094 (#101) was derived from 'SLU238' and was found to be homozygous for the T2DS·2RL translocation. The stem rust resistance gene on the chromosome arm 2RL was designated Sr59. Although introgressions of rye chromosome arms into wheat have most often been facilitated by irradiation, this study highlights the utility of the breakage-fusion mechanism for rye chromatin introgression. Sr59 provides an additional asset for wheat improvement to mitigate yield losses caused by stem rust.

  2. R/L, a double reporter mouse line that expresses luciferase gene upon Cre-mediated excision, followed by inactivation of mRFP expression.

    Science.gov (United States)

    Jia, Junshuang; Lin, Xiaolin; Lin, Xia; Lin, Taoyan; Chen, Bangzhu; Hao, Weichao; Cheng, Yushuang; Liu, Yu; Dian, Meijuan; Yao, Kaitai; Xiao, Dong; Gu, Weiwang

    2016-10-01

    The Cre/loxP system has become an important tool for conditional gene knockout and conditional gene expression in genetically engineered mice. The applications of this system depend on transgenic reporter mouse lines that provide Cre recombinase activity with a defined cell type-, tissue-, or developmental stage-specificity. To develop a sensitive assay for monitoring Cre-mediated DNA excision in mice, we generated Cre-mediated excision reporter mice, designated R/L mice (R/L: mRFP (monomeric red fluorescent protein)/luciferase), which express mRFP throughout embryonic development and adult stages; Cre-mediated excision deletes the loxP-flanked mRFP reporter gene and STOP sequence, thereby activating the expression of the second reporter gene, luciferase, as assayed by in vivo and ex vivo bioluminescence imaging. After germline deletion of the floxed mRFP and STOP sequence in R/L mice by EIIa-Cre mice, the resulting luciferase transgenic mice, in which the loxP-mRFP-STOP-loxP cassette is excised from all cells, express luciferase in all tissues and organs examined. The expression of the luciferase transgene was activated in the liver of RL/Alb-Cre double transgenic mice and in the brain of RL/Nestin-Cre double transgenic mice when R/L reporter mice were mated with Alb-Cre mice and Nestin-Cre mice, respectively. Our findings reveal that the double reporter R/L mouse line is able to indicate the occurrence of Cre-mediated excision from early embryonic to adult lineages. Taken together, these findings demonstrate that the R/L mice serve as a sensitive reporter for Cre-mediated DNA excision both in living animals and in organs, tissues, and cells following necropsy.

  3. From Lucy to Kadanuumuu: balanced analyses of Australopithecus afarensis assemblages confirm only moderate skeletal dimorphism

    Directory of Open Access Journals (Sweden)

    Philip L. Reno

    2015-04-01

    Sexual dimorphism in body size is often used as a correlate of social and reproductive behavior in Australopithecus afarensis. In addition to a number of isolated specimens, the sample for this species includes two small associated skeletons (A.L. 288-1 or “Lucy” and A.L. 128/129) and a geologically contemporaneous death assemblage of several larger individuals (A.L. 333). These have driven both perceptions and quantitative analyses concluding that Au. afarensis was markedly dimorphic. The Template Method enables simultaneous evaluation of multiple skeletal sites, thereby greatly expanding sample size, and reveals that Au. afarensis dimorphism was similar to that of modern humans. A new very large partial skeleton (KSD-VP-1/1 or “Kadanuumuu”) can now also be used, like Lucy, as a template specimen. In addition, the recently developed Geometric Mean Method has been used to argue that Au. afarensis was equally or even more dimorphic than gorillas. However, in its previous application Lucy and A.L. 128/129 accounted for 10 of 11 estimates of female size. Here we directly compare the two methods and demonstrate that including multiple measurements from the same partial skeleton that falls at the margin of the species size range dramatically inflates dimorphism estimates. Preventing a single specimen from dominating the calculation of multiple dimorphism estimates confirms that Au. afarensis was only moderately dimorphic.

  4. Accurate calibration of RL shunts for piezoelectric vibration damping of flexible structures

    DEFF Research Database (Denmark)

    Høgsberg, Jan Becker; Krenk, Steen

    2016-01-01

    Piezoelectric RL (resistive-inductive) shunts are passive resonant devices used for damping of dominant vibration modes of a flexible structure and their efficiency relies on precise calibration of the shunt components. In the present paper improved calibration accuracy is attained by an extension

  5. Life extension of the St. Lucie unit 1 reactor vessel

    International Nuclear Information System (INIS)

    Rowan, G.A.; Sun, J.B.; Mott, S.L.

    1991-01-01

    In late 1989, Florida Power and Light Company (FP and L) established the policy that St. Lucie unit 1 should not be prevented from achieving a 60-yr operating life by reactor vessel embrittlement. A 60-yr operating life means that the plant would be allowed to operate until the year 2036, which is 20 years beyond the current license expiration date of 2016. Since modifications to the reactor vessel and its components are projected to be expensive, the desire of FP and L management was to achieve this lifetime extension through the use of fuel management and proven technology. The following limitations were placed on any acceptable method for achieving this lifetime extension capability: low fuel cycle cost; low impact on safety parameters; very little or no operations impact; and use of normal reactor materials. A task team was formed along with the Advanced Nuclear Fuels Company (ANF) to develop a vessel-life extension program

  6. Environmental Management Performance Report to DOE-RL June 2002

    International Nuclear Information System (INIS)

    EDER, D.M.

    2002-01-01

    The purpose of this report is to provide the Department of Energy Richland Operations Office (RL) a monthly summary of the Central Plateau Contractor's Environmental Management (EM) performance by Fluor Hanford (FH) and its subcontractors. Only current FH workscope responsibilities are described; other contractor/RL-managed work is excluded. Please refer to other sections (BHI, PNNL) for other contractor information. Section A, Executive Summary, provides an executive-level summary of the cost, schedule, and technical performance described in this report. It summarizes performance for the period covered, highlights areas worthy of management attention, and provides a forward look to some of the upcoming key performance activities as extracted from the contractor baseline. The remaining sections provide detailed performance data relative to each individual subproject (e.g., Plutonium Finishing Plant, Spent Nuclear Fuels, etc.), in support of Section A of the report. All information is updated as of the end of June 2002 unless otherwise noted. "Stoplight" boxes are used to indicate at a glance the condition of a particular safety area. Green denotes either (1) data stable at a level representing acceptable performance, or (2) an improving trend. Yellow denotes data stable at a level from which improvement is needed. Red denotes a trend in a non-improving direction.

  7. PeRL: a circum-Arctic Permafrost Region Pond and Lake database

    Directory of Open Access Journals (Sweden)

    S. Muster

    2017-06-01

    Ponds and lakes are abundant in Arctic permafrost lowlands. They play an important role in Arctic wetland ecosystems by regulating carbon, water, and energy fluxes and providing freshwater habitats. However, ponds, i.e., waterbodies with surface areas smaller than 1.0 × 10⁴ m², have not been inventoried on global and regional scales. The Permafrost Region Pond and Lake (PeRL) database presents the results of a circum-Arctic effort to map ponds and lakes from modern (2002–2013) high-resolution aerial and satellite imagery with a resolution of 5 m or better. The database also includes historical imagery from 1948 to 1965 with a resolution of 6 m or better. PeRL includes 69 maps covering a wide range of environmental conditions from tundra to boreal regions and from continuous to discontinuous permafrost zones. Waterbody maps are linked to regional permafrost landscape maps which provide information on permafrost extent, ground ice volume, geology, and lithology. This paper describes waterbody classification and accuracy, and presents statistics of waterbody distribution for each site. Maps of permafrost landscapes in Alaska, Canada, and Russia are used to extrapolate waterbody statistics from the site level to regional landscape units. PeRL presents pond and lake estimates for a total area of 1.4 × 10⁶ km² across the Arctic, about 17 % of the Arctic lowland (< 300 m a.s.l.) land surface area. PeRL waterbodies with sizes from 1.0 × 10⁶ m² down to 1.0 × 10² m² contributed up to 21 % of the total water fraction. Waterbody density ranged from 1.0 × 10 to 9.4 × 10¹ km⁻². Ponds are the dominant waterbody type by number in all landscapes, representing 45–99 % of the total waterbody number. The implementation of PeRL size distributions in land surface models will greatly improve the investigation and projection of surface inundation and carbon fluxes in permafrost lowlands.

  8. Reading notes on the work of Lucie Tanguy: research as a social activity and the relationship between science and politics [Notas de leitura da obra de Lucie Tanguy: a pesquisa como atividade social e a relação entre ciência e política]

    Directory of Open Access Journals (Sweden)

    Liliana Rolfsen Petrilli Segnini

    2012-03-01

    This reading of Lucie Tanguy's latest book brings out two analytical dimensions that encourage us to reflect on our own society and on the Brazilian sociology of education and work, even allowing for the dissimilarities and differences of our historical trajectories: research as a social activity, and the relations between science and politics. Recovering the forms of pollution and domestication of sociology (Florestan Fernandes), especially the sociology of work and education, is one of the tasks that reading Lucie Tanguy's book imposes on us. For us, the history of sociology all too often remains a history of authors, themes, and general theories. Lucie Tanguy went further: she sought to understand, through archival sources, the power relations (economic and social) at work in the constitution of a discipline and in the academic trajectories of its researchers. She "lifted the veil" on the production of knowledge, seeking its significance in the relationship between education (training) and work (productivity).

  9. Memory effects and systematic errors in the RL signal from fiber coupled Al2O3:C for medical dosimetry

    DEFF Research Database (Denmark)

    Damkjær, Sidsel Marie Skov; Andersen, Claus Erik

    2010-01-01

    The radioluminescence (RL) signal from fiber-coupled Al2O3:C can be used for real-time in vivo dosimetry during radiotherapy. RL generally provides measurements with a reproducibility of 2% (one standard deviation). However, we have

  10. EGS Richardson AGU Chapman NVAG3 Conference: Nonlinear Variability in Geophysics: scaling and multifractal processes

    Directory of Open Access Journals (Sweden)

    D. Schertzer

    1994-01-01

    1. The conference The third conference on "Nonlinear VAriability in Geophysics: scaling and multifractal processes" (NVAG 3) was held in Cargese, Corsica, Sept. 10-17, 1993. NVAG3 was a joint American Geophysical Union Chapman and European Geophysical Society Richardson Memorial conference, the first specialist conference jointly sponsored by the two organizations. It followed NVAG1 (Montreal, Aug. 1986) and NVAG2 (Paris, June 1988; Schertzer and Lovejoy, 1991), five consecutive annual sessions at EGS general assemblies and two consecutive spring AGU meeting sessions. As with the other conferences and workshops mentioned above, the aim was to develop confrontation between theories and experiments on scaling/multifractal behaviour of geophysical fields. Subjects covered included climate, clouds, earthquakes, atmospheric and ocean dynamics, tectonics, precipitation, hydrology, the solar cycle and volcanoes. Areas of focus included new methods of data analysis (especially those used for the reliable estimation of multifractal and scaling exponents), as well as their application to rapidly growing data bases from in situ networks and remote sensing. The corresponding modelling, prediction and estimation techniques were also emphasized, as were the current debates about stochastic and deterministic dynamics, fractal geometry and multifractals, self-organized criticality and multifractal fields, each of which was the subject of a specific general discussion. The conference started with a one-day short course on multifractals featuring four lectures on a) fundamentals of multifractals: dimension, codimensions, codimension formalism; b) multifractal estimation techniques (PDMS, DTM); c) numerical simulations, Generalized Scale Invariance analysis; d) advanced multifractals, singular statistics, phase transitions, self-organized criticality and Lie cascades (given by D. Schertzer and S. Lovejoy; detailed course notes were sent to participants shortly after the

  11. EGS Richardson AGU Chapman NVAG3 Conference: Nonlinear Variability in Geophysics: scaling and multifractal processes

    Science.gov (United States)

    Schertzer, D.; Lovejoy, S.

    1. The conference The third conference on "Nonlinear VAriability in Geophysics: scaling and multifractal processes" (NVAG 3) was held in Cargese, Corsica, Sept. 10-17, 1993. NVAG3 was joint American Geophysical Union Chapman and European Geophysical Society Richardson Memorial conference, the first specialist conference jointly sponsored by the two organizations. It followed NVAG1 (Montreal, Aug. 1986), NVAG2 (Paris, June 1988; Schertzer and Lovejoy, 1991), five consecutive annual sessions at EGS general assemblies and two consecutive spring AGU meeting sessions. As with the other conferences and workshops mentioned above, the aim was to develop confrontation between theories and experiments on scaling/multifractal behaviour of geophysical fields. Subjects covered included climate, clouds, earthquakes, atmospheric and ocean dynamics, tectonics, precipitation, hydrology, the solar cycle and volcanoes. Areas of focus included new methods of data analysis (especially those used for the reliable estimation of multifractal and scaling exponents), as well as their application to rapidly growing data bases from in situ networks and remote sensing. The corresponding modelling, prediction and estimation techniques were also emphasized as were the current debates about stochastic and deterministic dynamics, fractal geometry and multifractals, self-organized criticality and multifractal fields, each of which was the subject of a specific general discussion. The conference started with a one day short course of multifractals featuring four lectures on a) Fundamentals of multifractals: dimension, codimensions, codimension formalism, b) Multifractal estimation techniques: (PDMS, DTM), c) Numerical simulations, Generalized Scale Invariance analysis, d) Advanced multifractals, singular statistics, phase transitions, self-organized criticality and Lie cascades (given by D. Schertzer and S. Lovejoy, detailed course notes were sent to participants shortly after the conference). This

  12. EGS Richardson AGU Chapman NVAG3 Conference: Nonlinear Variability in Geophysics: scaling and multifractal processes

    OpenAIRE

    D. Schertzer; S. Lovejoy

    1994-01-01

    1. The conference The third conference on "Nonlinear VAriability in Geophysics: scaling and multifractal processes" (NVAG 3) was held in Cargese, Corsica, Sept. 10-17, 1993. NVAG3 was joint American Geophysical Union Chapman and European Geophysical Society Richardson Memorial conference, the first specialist conference jointly sponsored by the two organizations. It followed NVAG1 (Montreal, Aug. 1986), NVAG2 (Paris, June 1988; Schertzer and Lovejoy, 1991), five consecutive annual ...

  13. EGS Richardson AGU Chapman NVAG3 Conference: Nonlinear Variability in Geophysics: scaling and multifractal processes

    OpenAIRE

    Schertzer, D.; Lovejoy, S.

    1994-01-01

    1. The conference The third conference on "Nonlinear VAriability in Geophysics: scaling and multifractal processes" (NVAG 3) was held in Cargese, Corsica, Sept. 10-17, 1993. NVAG3 was joint American Geophysical Union Chapman and European Geophysical Society Richardson Memorial conference, the first specialist conference jointly sponsored by the two organizations. It followed NVAG1 (Montreal, Aug. 1986), NVAG2 (Paris, June 1988; Schertzer and Lovejoy, 1991), five conse...

  14. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material for electromagnetic protection, and its wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing materials, this paper studied two interpolation methods, Lagrange interpolation and Hermite interpolation, applied to the measured electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and that the reflectance calculated with the interpolated electromagnetic parameters is, on the whole, consistent with that obtained through experiment. - Highlights: • An interpolation algorithm is used to calculate EM parameters from limited samples. • Interpolation can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • RL calculated from interpolation is consistent with RL from experiment.
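
    A minimal sketch of the two methods compared, using SciPy and hypothetical permittivity-versus-filler-fraction data (the numbers below are illustrative, not from the paper). Hermite interpolation additionally matches derivative estimates at the sample points, which is why it tends to oscillate less than a single global Lagrange polynomial:

        import numpy as np
        from scipy.interpolate import lagrange, CubicHermiteSpline

        # Hypothetical measurements: relative permittivity vs. carbonyl-iron fraction.
        x = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
        eps_r = np.array([2.1, 2.6, 3.4, 4.5, 6.0])
        deps_dx = np.gradient(eps_r, x)                 # finite-difference derivatives

        poly = lagrange(x, eps_r)                       # one global polynomial
        spline = CubicHermiteSpline(x, eps_r, deps_dx)  # piecewise, derivative-matched

        xq = 0.25                                       # query an unmeasured fraction
        print(poly(xq), spline(xq))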

  15. Lucy's flat feet: the relationship between the ankle and rearfoot arching in early hominins.

    Directory of Open Access Journals (Sweden)

    Jeremy M DeSilva

    BACKGROUND: In the Plio-Pleistocene, the hominin foot evolved from a grasping appendage to a stiff, propulsive lever. Central to this transition was the development of the longitudinal arch, a structure that helps store elastic energy and stiffen the foot during bipedal locomotion. Direct evidence for arch evolution, however, has been somewhat elusive given the failure of soft tissue to fossilize. Paleoanthropologists have relied on footprints and bony correlates of arch development, though little consensus has emerged as to when the arch evolved. METHODOLOGY/PRINCIPAL FINDINGS: Here, we present evidence from radiographs of modern humans (n = 261) that the set of the distal tibia in the sagittal plane, henceforth referred to as the tibial arch angle, is related to rearfoot arching. Non-human primates have a posteriorly directed tibial arch angle, while most humans have an anteriorly directed tibial arch angle. Those humans with a posteriorly directed tibial arch angle (8%) have significantly lower talocalcaneal and talar declination angles, both measures of an asymptomatic flatfoot. Application of these results to the hominin fossil record reveals that a well-developed rearfoot arch had evolved in Australopithecus afarensis. However, as in humans today, Australopithecus populations exhibited individual variation in foot morphology and arch development, and "Lucy" (A.L. 288-1), a 3.18 Myr-old female Australopithecus, likely possessed asymptomatic flat feet. Additional distal tibiae from the Plio-Pleistocene show variation in tibial arch angles, including two early Homo tibiae that also have slightly posteriorly directed tibial arch angles. CONCLUSIONS/SIGNIFICANCE: This study finds that the rearfoot arch was present in the genus Australopithecus. However, the female Australopithecus afarensis "Lucy" has an ankle morphology consistent with non-pathological flat-footedness. This study suggests that, as in humans today, there was variation in arch

  16. High-frequency signal and noise estimates of CSR GRACE RL04

    Science.gov (United States)

    Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.

    2012-12-01

    A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins looked at here are near 1.0. Comparisons with the GLDAS hydrological model and high frequency GRACE series developed at other centers confirm CSR GRACE RL04's poor ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, due to the low power of the large-scale hydrological signal typical at those frequencies compared to the GRACE errors.

  17. A comparative mathematical analysis of RL and RC electrical circuits via Atangana-Baleanu and Caputo-Fabrizio fractional derivatives

    Science.gov (United States)

    Abro, Kashif Ali; Memon, Anwar Ahmed; Uqaili, Muhammad Aslam

    2018-03-01

    This article presents a comparative study of RL and RC electrical circuits employing the newly introduced Atangana-Baleanu and Caputo-Fabrizio fractional derivatives. The governing ordinary differential equations of the RL and RC circuits have been fractionalized in terms of fractional operators in the ranges 0 ≤ ξ ≤ 1 and 0 ≤ η ≤ 1. The analytic solutions of the fractional differential equations for the RL and RC circuits have been obtained using the Laplace transform and its inversion. General solutions have been derived for periodic and exponential sources by implementing the Atangana-Baleanu and Caputo-Fabrizio fractional operators separately. The solutions are expressed in terms of simple elementary functions with convolution products. On the basis of the new fractional derivatives with and without singular kernel, the voltage and current exhibit interesting behavior, with several similarities and differences between the periodic and exponential sources.
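
    For concreteness, the fractionalization replaces the first-order time derivative in the classical circuit equations with a fractional operator; a sketch of the form involved, under the assumption that the authors use the standard definitions (their exact nondimensionalization may differ):

        L\,D_t^{\xi} i(t) + R\,i(t) = v(t)   (RL circuit, 0 < ξ ≤ 1),
        R\,D_t^{\eta} q(t) + q(t)/C = v(t)   (RC circuit, 0 < η ≤ 1),

    where, for example, the Caputo-Fabrizio operator with nonsingular exponential kernel is

        D_t^{\xi} f(t) = \frac{M(\xi)}{1-\xi} \int_0^t f'(s)\, e^{-\frac{\xi (t-s)}{1-\xi}}\, ds,

    with normalization M(ξ) satisfying M(0) = M(1) = 1; the Atangana-Baleanu operator replaces the exponential with a Mittag-Leffler kernel.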

  18. Distribution and migration of pesticide residues in mosquito control impoundments St. Lucie County, Florida, USA

    Science.gov (United States)

    Parkinson, R. W.; Wang, T. C.; White, J. R.; David, J. R.; Hoffman, M. E.

    1993-09-01

    This project was designed to: (1) document the distribution and migration of organochlorine pesticide residues within marsh substrates of 18 St. Lucie County mosquito control impoundments located along the Indian River Lagoon estuary, and (2) evaluate the impact of water management techniques on residue mobility. Our results indicate that detectable concentrations of organochlorine compounds, applied between the late 1940s and early 1950s, are present in 16 of the 18 St. Lucie County mosquito control impoundments. These compounds are primarily restricted to the surficial, organic-rich wetland sediment, which, based upon geotechnical analysis, was exposed to the atmosphere at a time when the impoundments were subjected to pesticide treatment. Contaminated sediments are present below the surficial, organic-rich layer, suggesting that some vertical migration of pesticides has occurred. It is unlikely that leaching associated with the downward percolation of impounded water was responsible for this migration, as pesticide residues were never detected within the in situ pore waters. An alternative explanation is that biological processes (e.g., rooting, burrowing) facilitated the downward flux of organochlorine compounds into sediment horizons not subjected to direct treatment. Eighty-eight surface water samples obtained from two impoundments subjected to contrasting water management techniques were analyzed for pesticide content. None of the surface water samples collected in association with these impoundments contained detectable concentrations of organochlorine compounds. These samples were unfiltered and contained as much as 25 mg/l of particulate organic matter. This suggests that the currently preferred management technique (RIM), which is designed to maintain water quality, limit mosquito production, and provide for ecological continuity, does not hydraulically mobilize pesticide residues into the Indian River Lagoon estuary.

  19. Calculation of the weighted price elasticity of tax: Turkey (1998-2013) [Verginin Ağırlıklandırılmış Fiyat Elastikiyetinin Hesaplanması: Türkiye (1998-2013)]

    Directory of Open Access Journals (Sweden)

    Engin YILMAZ

    2015-04-01

    This study re-evaluates, for Turkey over the period 1998-2013, the assumption put forward in the earliest studies of the effects of inflation on tax revenues that "the weighted price elasticity of tax is unity in developing countries". Turkish tax revenue and price index data are used to calculate the weighted price elasticity. The long-run weighted price elasticity of the tax system is estimated with the Dynamic Ordinary Least Squares (DOLS) method. The importance of this study is that it is the first to calculate the weighted price elasticity of tax for Turkey. In this sense, it offers a guide for reconsidering the assumption that the weighted price elasticity of tax in developing countries is unity.

  20. 'Home and Away': Reconstructing Identity in Jamaica Kincaid’s Lucy

    Directory of Open Access Journals (Sweden)

    Eleanor Anneh Dasi

    2014-11-01

    After the forceful displacement of people during the trans-Atlantic slave trade came another wave of migration, from the one-time colonies to the colonial metropolis. This later shift was the result of the political, social and economic instabilities witnessed during the clamour for independence of the colonies. Africans and West Indians were particularly affected by this phenomenon as they struggled for a better and more satisfying life. But the experience of migration has not been very fulfilling for the migrants, as they grapple with race, class and gender hostilities and the ensuing sense of alienation. The discussion that follows looks at how Jamaica Kincaid’s Lucy translates experiences of migration, and how these experiences work to reshape and reconstruct new identities based on the individual’s perceptions of life. It focuses on how the protagonist creates a delicate balance between native culture and colonial integration to build a new identity that transcends gender, race and class. Migration thus constructs spaces for the renegotiation of cultural polarities that permit the formation of transnational identities.

  1. New wheat-rye 5DS-4RS·4RL and 4RS-5DS·5DL translocation lines with powdery mildew resistance.

    Science.gov (United States)

    Fu, Shulan; Ren, Zhenglong; Chen, Xiaoming; Yan, Benju; Tan, Feiquan; Fu, Tihua; Tang, Zongxiang

    2014-11-01

    Powdery mildew is one of the serious diseases of wheat (Triticum aestivum L., 2n = 6x = 42, genomes AABBDD). Rye (Secale cereale L., 2n = 2x = 14, genome RR) offers a rich reservoir of powdery mildew resistance genes for wheat breeding programs. However, extensive use of these resistance genes may render them ineffective against new pathogen races because of the co-evolution of host and pathogen. Therefore, the continuous exploration of new powdery mildew resistance genes is important to wheat breeding programs. In the present study, we identified several wheat-rye addition lines from the progeny of T. aestivum L. Mianyang11 × S. cereale L. Kustro: monosomic addition lines of the rye chromosomes 4R and 6R; a disomic addition line of 6R; and monotelosomic or ditelosomic addition lines of the long arms of rye chromosomes 4R (4RL) and 6R (6RL). All these lines displayed immunity to powdery mildew. Thus, we concluded that both the 4RL and 6RL arms of Kustro contain powdery mildew resistance genes. This is the first report of a powdery mildew resistance gene on the 4RL arm. Additionally, wheat lines containing new wheat-rye translocation chromosomes were also obtained: these lines retained a short arm of wheat chromosome 5D (5DS) to which rye chromosome 4R was fused through its short arm 4RS (designated 5DS-4RS·4RL, where 4RL stands for the long arm of rye chromosome 4R); or they had an extra short arm of rye chromosome 4R (4RS) attached to the short arm of wheat chromosome 5D (5DS) (designated 4RS-5DS·5DL, where 5DL stands for the long arm of wheat chromosome 5D). These two translocation chromosomes were transmitted stably to the next generation, and the wheat lines containing the 5DS-4RS·4RL chromosome also displayed immunity to powdery mildew. The materials obtained in this study can be used in wheat powdery mildew resistance breeding programs.

  2. The Function of Native American Storytelling as Means of Education in Luci Tapahonso’s Selected Poems

    Directory of Open Access Journals (Sweden)

    Widad Allawi Saddam

    2015-12-01

    Native American storytelling has become a very vital issue in education. It preserves Native American history for the next generation and teaches important lessons about Native American culture. It also conveys the moral meanings, knowledge and social values of the Native American people to the world. More importantly, Native American storytelling teaches people not to be isolated. The key issues discussed in this paper are drawn from two selected poems by the Native American poet Luci Tapahonso: ‘The Holy Twins’ and ‘Remember the Things that you told.’   Keywords: folklore, narrating, Native American, oral tradition, storytelling

  3. Latency of bovine herpesvirus-1: the role of the latency-related (RL) transcripts [LATENCIA DEL HERPESVIRUS BOVINO-1: EL PAPEL DE LOS TRANSCRITOS RELACIONADOS CON LATENCIA (RL)]

    Directory of Open Access Journals (Sweden)

    JULIÁN, RUIZ

    2008-01-01

    Bovine herpesvirus-1 is a virus of worldwide distribution that causes serious economic losses, mainly through reduced efficiency and poorer health and productivity indicators in any infected cattle herd. After the initial infection of the animals' respiratory tract, the virus establishes latency in the sensory neurons of the trigeminal ganglion and in the germinal centers of the pharyngeal tonsils. Periodically, the virus is reactivated and shed in secretions through which it can infect other susceptible animals. During latency there is a dramatic decrease in viral gene expression, with only two transcripts expressed: the RNA encoded by the latency-related (RL) gene and the viral ORF-E. Multiple studies show how the RL gene and ORF-E are involved in regulating the complex cycle of latency and reactivation of the infection. This literature review focuses on describing and analyzing the studies that have elucidated the role played by the RL gene and ORF-E, their transcripts, and their protein products in the establishment, maintenance, and reactivation of BHV-1 latency.

  4. Valid measures of periodic leg movements (PLMs) during a suggested immobilization test using the PAM-RL leg activity monitors require adjusting detection parameters for noise and signal in each recording.

    Science.gov (United States)

    Yang, Myung Sung; Montplaisir, Jacques; Desautels, Alex; Winkelman, John W; Cramer Bornemann, Michel A; Earley, Christopher J; Allen, Richard P

    2014-01-01

    Individuals with restless legs syndrome (RLS; Willis-Ekbom disease [WED]) usually have periodic leg movements (PLMs). The suggested immobilization test (SIT) measures sensory and motor features of WED during wakefulness. Surface electromyogram (EMG) recordings of the anterior tibialis (AT) are used as the standard for counting PLMs. However, due to several limitations, leg activity meters such as the PAM-RL were advanced as a potential substitute. In our study, we assessed the validity of PAM-RL measurements of PLM during wakefulness (PLMW) in the SIT, using both default and custom detection threshold parameters, against AT EMG. Data were obtained from 39 participants who were diagnosed with primary WED and who were on stable medication as part of another study using the SIT to repeatedly evaluate WED symptoms over 6-12 months. EMG recordings and PAM-RL, when available, were used to detect PLMW for each SIT. Complete PAM-RL and polysomnography (PSG) EMG data were available for 253 SITs from that study. The default PAM-RL (dPAM-RL) detected leg movements based on the manufacturer's noise (resting) and signal (movement) amplitude criteria developed to accurately detect PLM during sleep (PLMS). The custom PAM-RL (cPAM-RL) similarly detected leg movements except that the noise and movement detection parameters were adjusted to match the PAM-RL data for each SIT. The distributions of the differences between either dPAM-RL or cPAM-RL and EMG PLMW were strongly leptokurtic (kurtosis > 2), with many small differences and a few unusually large differences. These distributions are better described by medians and quartile ranges than by means and standard deviations. Despite an adequate correlation (r = 0.66) between the dPAM-RL and EMG recordings, the dPAM-RL on average significantly underscored the number of PLMW (median: -13; quartiles: -51.2, 0.0) and on Bland-Altman plots had a significant magnitude bias, with greater underscoring for larger average PLMW/h. There also was an

  5. Digital algorithms for early short-circuit detection [Digitale Algorithmen zur fruehzeitigen Kurzschlusserkennung]

    Energy Technology Data Exchange (ETDEWEB)

    Lindmayer, M.; Stege, M. (Technische Univ. Braunschweig (Germany, F.R.). Inst. fuer Elektrische Energieanlagen)

    1991-07-01

    Algorithms for the early detection and prevention of short circuits are presented. Data on current level and steepness in the a.c. network to be protected are evaluated by microcomputers. In particular, a simplified low-voltage grid is considered whose load circuit is formed under normal conditions by a series R-L circuit. An optimal short-circuit detection algorithm is proposed for this network, which forecasts a current value from the current and steepness signals and compares this value with a limiting value. (orig.).
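
    A minimal sketch of the forecast-and-compare idea, assuming a simple linear extrapolation over a short horizon (the paper's R-L network model is more detailed, and all names and numbers here are illustrative):

        def short_circuit_alarm(i_now, di_dt, i_limit, horizon=0.5e-3):
            """Forecast the current a short horizon ahead from its level and
            steepness; trip if the forecast exceeds the limiting value."""
            i_forecast = i_now + di_dt * horizon   # linear extrapolation (assumed)
            return abs(i_forecast) > i_limit

        # Example: 2 kA flowing, rising at 5 MA/s, 4 kA limit -> forecast 4.5 kA, trips.
        print(short_circuit_alarm(2e3, 5e6, 4e3))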

  6. Aitken extrapolation and epsilon algorithm for an accelerated solution of weakly singular nonlinear Volterra integral equations

    International Nuclear Information System (INIS)

    Mesgarani, H; Parmour, P; Aghazadeh, N

    2010-01-01

    In this paper, we apply Aitken extrapolation and the epsilon algorithm as acceleration techniques for the solution of a weakly singular nonlinear Volterra integral equation of the second kind. Following Tao and Yong (2006 J. Math. Anal. Appl. 324 225-37), the integral equation is solved by Navot's quadrature formula. Tao and Yong (2006) were also the first to apply Richardson extrapolation to accelerate convergence for weakly singular nonlinear Volterra integral equations of the second kind. To our knowledge, this paper may be the first attempt to apply Aitken extrapolation and the epsilon algorithm to weakly singular nonlinear Volterra integral equations of the second kind.
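
    For orientation, Aitken's delta-squared transform accelerates a linearly convergent scalar sequence, and the epsilon algorithm generalizes it to higher orders. A minimal sketch on a generic slowly converging sequence (the Leibniz partial sums for pi/4, purely illustrative; the paper applies the transforms to quadrature iterates):

        import math

        def aitken(seq):
            """Aitken delta-squared acceleration of a convergent sequence."""
            s = list(seq)
            return [s[n] - (s[n+1] - s[n])**2 / (s[n+2] - 2*s[n+1] + s[n])
                    for n in range(len(s) - 2)]

        partial = [sum((-1)**k / (2*k + 1) for k in range(n + 1)) for n in range(10)]
        print(4 * partial[-1])          # ~3.0418, still far from pi
        print(4 * aitken(partial)[-1])  # ~3.1412, much closer after one pass
        print(math.pi)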

  7. Intrusion of soil covered uranium mill tailings by whitetail prairie dogs and Richardson's ground squirrels

    International Nuclear Information System (INIS)

    Shuman, R.

    1984-01-01

    The primary objective of the reclamation of uranium mill tailings is the long-term isolation of the material from the biosphere. Fossorial and semi-fossorial species represent a potentially disruptive influence as a result of their burrowing habits. The potential for intrusion was investigated with respect to two sciurids, the whitetail prairie dog (Cynomys leucurus) and Richardson's ground squirrel (Spermophilus richardsonii). Populations of prairie dogs were established on a control area, lacking a tailings layer, and two experimental areas, underlain by a waste layer, in southeastern Wyoming. Weekly measurements of prairie dog mound surface activities were conducted to demonstrate penetration, or lack thereof, of the tailings layer. Additionally, the impact of burrowing upon radon flux was determined. Limited penetration of the waste layer was noted, after which the frequency of inhabitance of the intruding burrow system declined. No significant changes in radon flux were detected. In another experiment, it was found that Richardson's ground squirrels burrowed to less extreme depths when confronted by mill tailings. Additional work at an inactive tailings pile in western Colorado revealed repeated intrusion through a shallow cover, and subsequent transport of radioactive material to the ground surface by prairie dogs. Radon flux from burrow entrances was significantly greater than that from undisturbed ground. The data suggested that textural and pH properties of tailings material may act to discourage repeated intrusion at some sites. 58 references

  8. Black carp (Mylopharyngodon piceus, Richardson). Thematic bibliography

    Directory of Open Access Journals (Sweden)

    I. Hrytsynyak

    2017-03-01

    Purpose. To create a thematic bibliographic list of publications in Ukrainian and Russian dedicated to the ecology, biology, selection and cultivation of the Far Eastern fish species black carp (Mylopharyngodon piceus Richardson) in fish farms of Ukraine and neighboring countries, as well as to the possibility of its introduction into water bodies for bioameliorative purposes. Methodology. Complete and selective methods were applied in the systematic search. The bibliographic core was formed from the literature in the fund of the scientific library of the Institute of Fisheries NAAS. Findings. A thematic list of 67 publications was compiled, characterizing black carp as a representative of the cyprinids and a species of great importance for aquaculture. The bibliography covers the period from 1949 to 2011. The literary sources are arranged in alphabetical order by author or title and described according to DSTU 7.1:2006 "System of standards on information, librarianship and publishing. Bibliographic entry. Bibliographic description. General requirements and rules", as well as in accordance with the requirements of APA style, the international standard for references. Practical value. The list may be useful for scientists, practitioners and students whose area of interest covers the breeding and study of the biological features of black carp.

  9. Aerial radiological survey of the area surrounding the St. Lucie Power Plant, Fort Pierce, Florida

    International Nuclear Information System (INIS)

    Feimster, E.L.

    1979-06-01

    An airborne radiological survey of an 1100 km² area surrounding the St. Lucie Power Plant was conducted 1 to 8 March 1977. Detected radioisotopes and their associated gamma-ray exposure rates were consistent with those expected from normal background emitters. Count rates observed at 150 m altitude were converted to equivalent exposure rates at 1 m above the ground and are presented in the form of an isopleth map. Ground exposure rates measured with small portable instruments and soil sample analysis agreed with the airborne data. Geological data are presented in an isopleth map of rock and soil types. Also included is a brief description of the vegetation and terrain surrounding the site

  10. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    International Nuclear Information System (INIS)

    Tehrani, Joubin Nasehi; O’Brien, Ricky T; Keall, Paul; Poulsen, Per Rugaard

    2013-01-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. The three-dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error for real-time calculation of tumor displacement improved from a mean of 0.97 mm with translation alone to a mean of 0.16 mm when real-time rotation and translation were estimated with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions, respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translations, with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and the rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown that the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation, which could create a pathway to investigational clinical treatment studies requiring
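
    With three labeled fiducial markers the correspondence step of ICP is trivial, so each update reduces to the closed-form least-squares rigid fit (the Kabsch/SVD solution). A minimal sketch of that fit, with all names illustrative; extracting the RL/AP/SI rotation angles from R is not shown:

        import numpy as np

        def rigid_fit(ref, cur):
            """Rotation R and translation t best mapping reference marker
            positions (N x 3) onto the current ones, in the least-squares sense."""
            ref_c = ref - ref.mean(axis=0)
            cur_c = cur - cur.mean(axis=0)
            U, _, Vt = np.linalg.svd(cur_c.T @ ref_c)
            D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # forbid reflections
            R = U @ D @ Vt
            t = cur.mean(axis=0) - R @ ref.mean(axis=0)
            return R, t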

  11. Richardson Instructional Management System (RIMS). How to Blend a Computerized Objectives-Referenced Testing System, Distributive Data Processing, and Systemwide Evaluation.

    Science.gov (United States)

    Riegel, N. Blyth

    Recent changes in the structure of curriculum and the instructional system in Texas have required a major reorganization of teaching, evaluating, budgeting, and planning activities in the local education agencies, which has created the need for a database. The history of Richardson Instructional Management System (RIMS), its data processing…

  12. Pharmacologically induced long QT type 2 can be rescued by activation of IKs with benzodiazepine R-L3 in isolated guinea pig cardiomyocytes

    DEFF Research Database (Denmark)

    Nissen, Jakob Dahl; Diness, Jonas Goldin; Diness, Thomas Goldin

    2009-01-01

    The aim of this study was to evaluate potential antiarrhythmic effects of compound-induced IKs activation using the benzodiazepine L-364,373 (R-L3). Ventricular myocytes from guinea pigs were isolated, and whole-cell current clamping was performed at 35 degrees C. It was found that 1 microM R-L3 significantly reduced

  13. Restoration of Thickness, Density, and Volume for Highly Blurred Thin Cortical Bones in Clinical CT Images.

    Science.gov (United States)

    Pakdel, Amirreza; Hardisty, Michael; Fialkov, Jeffrey; Whyne, Cari

    2016-11-01

    In clinical CT images containing thin osseous structures, accurate definition of geometry and density is limited by the scanner's resolution and radiation dose. This study presents and validates a practical methodology for restoring information about thin bone structure by volumetric deblurring of the images. The methodology involves two steps: a phantom-free, post-reconstruction estimation of the 3D point spread function (PSF) from the CT data sets, followed by iterative deconvolution using the PSF estimate. The performance of five iterative deconvolution algorithms (blind; Richardson-Lucy, in standard and Total Variation versions; modified residual norm steepest descent, MRNSD; and conjugate gradient least squares) was evaluated using CT scans of synthetic cortical bone phantoms. The MRNSD algorithm resulted in the highest relative deblurring performance, as assessed by cortical bone thickness error (0.18 mm) and intensity error (150 HU), and was subsequently applied to a CT image of a cadaveric skull. Performance was compared against micro-CT images of the excised thin cortical bone samples from the skull (average thickness 1.08 ± 0.77 mm). Error in quantitative measurements made from the deblurred images was reduced by 82% (p < 0.01) for cortical thickness and by 55% (p < 0.01) for bone mineral mass. These results demonstrate a significant restoration of the geometrical and radiological density information derived for thin osseous features.
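
    MRNSD has no stock implementation in the common Python imaging libraries, but the overall estimate-the-PSF-then-deconvolve pipeline can be sketched with Richardson-Lucy (one of the five algorithms the study tested) standing in as the solver. Everything below is illustrative: the Gaussian PSF is an assumed model whose width would, in the paper's method, come from the phantom-free post-reconstruction estimation step:

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage import restoration

        # Assumed PSF model: a small isotropic Gaussian kernel.
        size, sigma = 9, 1.2
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        psf /= psf.sum()

        # Synthetic stand-in for a CT slice: a 2-pixel "cortical" plate, blurred.
        truth = np.zeros((64, 64))
        truth[:, 30:32] = 1.0
        blurred = gaussian_filter(truth, sigma)

        deblurred = restoration.richardson_lucy(blurred, psf, num_iter=30)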

  14. Using Spatial Reinforcement Learning to Build Forest Wildfire Dynamics Models From Satellite Images

    Directory of Open Access Journals (Sweden)

    Sriram Ganapathi Subramanian

    2018-04-01

    Machine learning algorithms have increased tremendously in power in recent years but have yet to be fully utilized in many ecology and sustainable resource management domains such as wildlife reserve design, forest fire management, and invasive species spread. One thing these domains have in common is that they contain dynamics that can be characterized as a spatially spreading process (SSP), which requires many parameters to be set precisely to model the dynamics, spread rates, and directional biases of the elements which are spreading. We present related work in artificial intelligence and machine learning for SSP sustainability domains including forest wildfire prediction. We then introduce a novel approach for learning in SSP domains using reinforcement learning (RL), where fire is the agent at any cell in the landscape and the set of actions the fire can take from a location at any point in time includes spreading north, south, east, or west, or not spreading. This approach inverts the usual RL setup, since the dynamics of the corresponding Markov Decision Process (MDP) is a known function for immediate wildfire spread. Meanwhile, we learn an agent policy for a predictive model of the dynamics of a complex spatial process. Rewards are provided for correctly classifying which cells are on fire or not, compared with satellite and other related data. We examine the behavior of five RL algorithms on this problem: value iteration, policy iteration, Q-learning, Monte Carlo Tree Search, and Asynchronous Advantage Actor-Critic (A3C). We compare to a Gaussian process-based supervised learning approach and also discuss the relation of our approach to manually constructed, state-of-the-art methods from forest wildfire modeling. We validate our approach with satellite image data of two massive wildfire events in Northern Alberta, Canada: the Fort McMurray fire of 2016 and the Richardson fire of 2011. The results show that we can learn predictive, agent
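
    To make the inverted setup concrete, here is a minimal tabular Q-learning step for a single fire-cell agent. The state encoding, action set, and reward scheme (+1 when the predicted burn state of a cell matches the satellite label, -1 otherwise) are assumptions for illustration, not the paper's exact design:

        ACTIONS = ["north", "south", "east", "west", "stay"]
        ALPHA, GAMMA = 0.1, 0.95
        Q = {}  # (state, action) -> estimated value

        def q_update(state, action, reward, next_state):
            """One Q-learning backup for the fire-as-agent formulation."""
            best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

        # Example: the fire at cell (3, 4) spread east and matched the label.
        q_update((3, 4), "east", 1.0, (3, 5))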

  15. Draft audit report, human factors engineering control room design review: Saint Lucie Nuclear Power Plant, Unit No. 2

    International Nuclear Information System (INIS)

    Peterson, L.R.; Lappa, D.A.; Moore, J.W.

    1981-01-01

    A human factors engineering preliminary design review of the Saint Lucie Unit 2 control room was performed at the site on August 3 through August 7, 1981. This design review was carried out by a team from the Human Factors Engineering Branch, Division of Human Factors Safety. This report was prepared on the basis of the HFEB's review of the applicant's Preliminary Design Assessment and the human factors engineering design review/audit performed at the site. The review team included human factors consultants from BioTechnology, Inc., Falls Church, Virginia, and from Lawrence Livermore National Laboratory (University of California), Livermore, California

  16. CAD2RL: Real Single-Image Flight without a Single Real Image

    OpenAIRE

    Sadeghi, Fereshteh; Levine, Sergey

    2016-01-01

    Deep reinforcement learning has emerged as a promising and powerful technique for automatically acquiring control policies that can process raw sensory inputs, such as images, and perform complex behaviors. However, extending deep RL to real-world robotic tasks has proven challenging, particularly in safety-critical domains such as autonomous flight, where a trial-and-error learning process is often impractical. In this paper, we explore the following question: can we train vision-based navig...

  17. A Case-Control Study Indicates that no Association Exists Between Polymorphisms of IL-33 and IL-1RL1 and Preeclampsia

    Directory of Open Access Journals (Sweden)

    Xiaoyan Ren

    2016-03-01

    Background/Aims: Preeclampsia (PE) is a systemic inflammatory response syndrome involving a variety of cytokines, and previous studies have shown that IL-33 and its receptor IL-1RL1 play pivotal roles in its development. As PE is a polygenic hereditary disease, genetic analysis is necessary. The present study therefore sought to determine whether IL-33 rs3939286 and IL-1RL1 rs13015714 are associated with susceptibility to PE in Chinese Han women. Methods: 1,031 PE patients and 1,298 controls were enrolled, and genotyping for rs3939286 in IL-33 and rs13015714 in IL-1RL1 was performed by TaqMan allelic discrimination real-time PCR. Hardy-Weinberg equilibrium (HWE) was examined to ensure the representativeness of the groups, and Pearson's chi-square test was used to compare the differences in genetic distributions between the two groups. Results: No significant differences in genotypic and allelic frequencies of the two polymorphism loci were observed between cases and controls. There were also no significant differences in genetic distributions between mild/severe and early/late-onset PE and control groups. Conclusion: Although our data suggest that the polymorphisms IL-33 rs3939286 and IL-1RL1 rs13015714 may not be critical risk factors for PE in Chinese Han women, the results need to be validated in different populations.

  18. TomoTherapy MLC verification using exit detector data

    Energy Technology Data Exchange (ETDEWEB)

    Chen Quan; Westerly, David; Fang Zhenyu; Sheng, Ke; Chen Yu [TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States); Department of Radiation Oncology, University of Colorado School of Medicine, Aurora, Colorado 80045 (United States); Xinghua Cancer Hospital, Xinghua, Jiangsu 225700 (China); Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, California 90095 (United States); TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States)

    2012-01-15

    Purpose: Treatment delivery verification (DV) is important in the field of intensity modulated radiation therapy (IMRT). While IMRT and image-guided radiation therapy (IGRT) allow us to create more conformal plans and enable the use of tighter margins, an erroneously executed plan can have detrimental effects on the treatment outcome. The purpose of this study is to develop a DV technique to verify TomoTherapy's multileaf collimator (MLC) using the onboard mega-voltage CT detectors. Methods: The proposed DV method uses temporal changes in the MVCT detector signal to predict the actual leaf open times delivered on the treatment machine. Penumbra and scattered radiation effects may produce confounding results when determining leaf open times from the raw detector data. To reduce the impact of these effects, an iterative Richardson-Lucy (R-L) deconvolution algorithm is applied. Optical sensors installed on each MLC leaf are used to verify the accuracy of the DV technique. The robustness of the DV technique is examined by introducing different attenuation materials into the beam. Additionally, the DV technique has been used to investigate several clinical plans which failed to pass delivery quality assurance (DQA), and was successful in identifying MLC timing discrepancies as the root cause. Results: The leaf open times extracted from the exit detector showed good agreement with the optical sensors under a variety of conditions. Detector-measured leaf open times agreed with optical sensor data to within 0.2 ms, and 99% of the results agreed within 8.5 ms. These results changed little when attenuation was added in the beam. For the clinical plans failing DQA, the dose calculated from reconstructed leaf open times played an instrumental role in discovering the root cause of the problem. Throughout the retrospective study, it was found that the reconstructed dose always agreed with measured doses to within 1%. Conclusions: The exit detectors in the TomoTherapy treatment
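
    Here the R-L iteration runs on a one-dimensional time signal rather than an image: the detector trace is modeled as the leaf open/close time course convolved with a detector impulse response. A minimal 1D sketch, assuming the impulse response is known, with all names illustrative:

        import numpy as np

        def rl_deconvolve_1d(signal, kernel, n_iter=100, eps=1e-12):
            """1D Richardson-Lucy: estimate a leaf open/close time course
            from the detector signal, given a known impulse response."""
            est = np.full(signal.shape, float(signal.mean()))
            k_flip = kernel[::-1]
            for _ in range(n_iter):
                conv = np.convolve(est, kernel, mode='same')
                ratio = signal / np.maximum(conv, eps)
                est *= np.convolve(ratio, k_flip, mode='same')
            return est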

  19. TomoTherapy MLC verification using exit detector data

    International Nuclear Information System (INIS)

    Chen Quan; Westerly, David; Fang Zhenyu; Sheng, Ke; Chen Yu

    2012-01-01

    Purpose: Treatment delivery verification (DV) is important in the field of intensity modulated radiation therapy (IMRT). While IMRT and image-guided radiation therapy (IGRT) allow us to create more conformal plans and enable the use of tighter margins, an erroneously executed plan can have detrimental effects on the treatment outcome. The purpose of this study is to develop a DV technique to verify TomoTherapy's multileaf collimator (MLC) using the onboard mega-voltage CT detectors. Methods: The proposed DV method uses temporal changes in the MVCT detector signal to predict the actual leaf open times delivered on the treatment machine. Penumbra and scattered radiation effects may produce confounding results when determining leaf open times from the raw detector data. To reduce the impact of these effects, an iterative Richardson-Lucy (R-L) deconvolution algorithm is applied. Optical sensors installed on each MLC leaf are used to verify the accuracy of the DV technique. The robustness of the DV technique is examined by introducing different attenuation materials into the beam. Additionally, the DV technique has been used to investigate several clinical plans which failed to pass delivery quality assurance (DQA), and was successful in identifying MLC timing discrepancies as the root cause. Results: The leaf open times extracted from the exit detector showed good agreement with the optical sensors under a variety of conditions. Detector-measured leaf open times agreed with optical sensor data to within 0.2 ms, and 99% of the results agreed within 8.5 ms. These results changed little when attenuation was added in the beam. For the clinical plans failing DQA, the dose calculated from reconstructed leaf open times played an instrumental role in discovering the root cause of the problem. Throughout the retrospective study, it was found that the reconstructed dose always agreed with measured doses to within 1%. Conclusions: The exit detectors in the TomoTherapy treatment systems

  20. Mutation of a Broadly Conserved Operon (RL3499-RL3502) from Rhizobium leguminosarum Biovar viciae Causes Defects in Cell Morphology and Envelope Integrity

    Science.gov (United States)

    Vanderlinde, Elizabeth M.; Magnus, Samantha A.; Tambalo, Dinah D.; Koval, Susan F.; Yost, Christopher K.

    2011-01-01

    The bacterial cell envelope is of critical importance to the function and survival of the cell; it acts as a barrier against harmful toxins while allowing the flow of nutrients into the cell. It also serves as a point of physical contact between a bacterial cell and its host. Hence, the cell envelope of Rhizobium leguminosarum is critical to cell survival under both free-living and symbiotic conditions. Transposon mutagenesis of R. leguminosarum strain 3841 followed by a screen to isolate mutants with defective cell envelopes led to the identification of a novel conserved operon (RL3499-RL3502) consisting of a putative moxR-like AAA+ ATPase, a hypothetical protein with a domain of unknown function (designated domain of unknown function 58), and two hypothetical transmembrane proteins. Mutation of genes within this operon resulted in increased sensitivity to membrane-disruptive agents such as detergents, hydrophobic antibiotics, and alkaline pH. On minimal media, the mutants retain their rod shape but are roughly 3 times larger than the wild type. On media containing glycine or peptides such as yeast extract, the mutants form large, distorted spheres and are incapable of sustained growth under these culture conditions. Expression of the operon is maximal during the stationary phase of growth and is reduced in a chvG mutant, indicating a role for this sensor kinase in regulation of the operon. Our findings provide the first functional insight into these genes of unknown function, suggesting a possible role in cell envelope development in Rhizobium leguminosarum. Given the broad conservation of these genes among the Alphaproteobacteria, the results of this study may also provide insight into the physiological role of these genes in other Alphaproteobacteria, including the animal pathogen Brucella. PMID:21357485

  1. Evaluating the performance of the ORTEC® Detective™ for emergency urine bioassay

    International Nuclear Information System (INIS)

    Li, C.; Ko, R.; Moodie, G.; Kramer, G. H.

    2011-01-01

    The performance of the ORTEC® Detective™ as a field-deployable tool for emergency urine bioassay of 137Cs, 60Co, 192Ir, 169Yb and 75Se was evaluated against ANSI N13.30. The tested activity levels represent 10% RL (reference level) and 1% RL as defined by Li C., Vlahovich S., Dai X., Richardson R. B., Daka J. N. and Kramer G. H., Requirements for radiation emergency urine bioassay techniques for the public and first responders, Health Phys. 99(5), 702-707 (2010). The tests were conducted for both single radionuclides and mixed radionuclides at two geometries, one conventional geometry (CG) and one improved geometry (IG), the latter improving the MDAs (minimum detectable amounts) by a factor of 1.6-2.7. The most challenging radionuclide was 169Yb. The measurement of the mixed radionuclides for 169Yb at the CG did not satisfy the ANSI N13.30 requirements even at 10% RL. At 1% RL, 169Yb and 192Ir were not detectable at either geometry, while the measurement of 60Co in the mixed radionuclides satisfied the ANSI N13.30 requirements only at the IG. (authors)

  2. Peli, Luci, Bom... Sexual Transgression and Popular Culture

    Directory of Open Access Journals (Sweden)

    María Dolores Arroyo Fdez.

    2011-05-01

    Full Text Available This study is grounded in the narrative of Pedro Almodóvar's film Pepi, Luci, Bom y otras chicas del montón, released in 1980, in the middle of Spain's transition to democracy. The film is analyzed by considering its audiovisual components as the stage for those specific to sexual identity and behavior. It looks at the shooting, on the streets of Madrid, and at the construction of the dialogue and images, which draw directly from the sources of mass culture (photonovels, comic strips, advertising, music) and the language of the street, in contrast with the creations of high culture. Indeed, several visual artists and cartoonists then emerging took part in the film: Ceesepe and the Costus group. On the same transgressive level, the sexual behaviors and identities of its protagonists are presented in a rebellious, scatological manner. The film managed to channel a whole series of demands for sexual freedom and the removal of taboos, led by part of the youth of the period; yet it also explores feelings such as love and heartbreak, friendship and loneliness.

  3. Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.

    Science.gov (United States)

    Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q

    2010-10-01

    The fluidic lens camera system presents unique image processing challenges due to its novel fluid optics. Developed for surgical applications, the fluid lens offers advantages such as zooming with no moving parts and better miniaturization than traditional glass optics. Despite these abilities, the liquid lens reacts nonuniformly to different color wavelengths, creating sharp color planes alongside blurred ones and causing severe axial color aberrations. To deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. This multiband deblurring method uses information from sharp color planes to improve blurred color planes. A previous wavelet-based method produced significantly improved sharpness and reduced ghosting artifacts compared to traditional Lucy-Richardson and Wiener deconvolution algorithms. The proposed contourlet-based system uses directional filtering to adapt to the contours of the image, producing an image with a similar level of sharpness to the previous wavelet-based method but fewer ghosting artifacts. Conditions under which this algorithm reduces the mean squared error are analyzed. While the primary focus of this paper is improving the blue color plane using information from the green color plane, these methods could be adjusted to improve the red color plane. Many multiband systems, such as global mapping, infrared imaging, and computer-assisted surgery, are natural extensions of this work. This information-sharing algorithm benefits any image set with high edge correlation and can improve deblurring, noise reduction, and resolution enhancement.

  4. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    Directory of Open Access Journals (Sweden)

    Lijuan Zhang

    2014-01-01

    Full Text Available To improve the restoration of adaptive optics (AO) images, we propose a deconvolution algorithm that jointly processes multiple AO frames based on expectation-maximization (EM) theory. First, we build a mathematical model for the degraded multiframe AO images, deriving the time-varying point spread function from the phase error. The AO images are denoised using the image power spectral density and a support constraint. Second, the EM algorithm is improved by combining the AO imaging system parameters with a regularization technique. A cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for its parameter estimation is built. Finally, image restoration experiments on both simulated images and real AO images are performed to verify the recovery performance of our algorithm. The experimental results show that, compared with the Wiener-IBD or RL-IBD algorithms, our algorithm requires 14.3% fewer iterations and improves the estimation accuracy. The model identifies the PSF of the AO images and recovers the observed target images clearly.
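
    For orientation, the plain multiframe joint Richardson-Lucy update (itself an EM algorithm under a Poisson noise model) that regularized EM variants of this kind build on can be written compactly. The frames and PSFs are assumed inputs here; the paper's denoising and regularization steps are omitted.

      import numpy as np
      from scipy.signal import fftconvolve

      def joint_richardson_lucy(frames, psfs, n_iter=30):
          # Average the multiplicative R-L corrections contributed by each frame,
          # so every observation pulls the common estimate toward consistency.
          est = np.full_like(frames[0], frames[0].mean())
          for _ in range(n_iter):
              correction = np.zeros_like(est)
              for y, h in zip(frames, psfs):
                  blurred = fftconvolve(est, h, mode="same")
                  correction += fftconvolve(y / np.maximum(blurred, 1e-12),
                                            h[::-1, ::-1], mode="same")
              est = est * correction / len(frames)
          return est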

  5. Interim Report on the Investigation of the Fresh Properties of Synthetic Fiber-Reinforced Concrete for the Richardson Landing Casting Field

    Science.gov (United States)

    2017-04-01

    Interim report by Wendy R. Long, Kirk E. Walker, and Brian H. Green, Geotechnical and Structures Laboratory, U.S. Army Engineer Research and Development Center (ERDC), 2017. Approved for public release; distribution is unlimited.

  6. Accurate estimation of motion blur parameters in noisy remote sensing image

    Science.gov (United States)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously hampers image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated before image restoration, so identifying the motion blur direction and length accurately is crucial for building the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
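
    The direction-estimation step plus Lucy-Richardson restoration can be sketched as follows. This is an illustrative pipeline only: the GrabCut segmentation and the whole-column blur-length statistics from the paper are omitted, and the blur length is assumed known.

      import numpy as np
      from scipy.ndimage import rotate
      from skimage.transform import radon
      from skimage.restoration import richardson_lucy

      def blur_direction(img):
          # Linear motion blur leaves parallel stripes in the log-magnitude
          # spectrum. One common heuristic: the Radon projection taken along
          # the stripes has maximal variance, giving the stripe orientation
          # (the blur angle follows from it, up to the transform's 90-degree
          # convention).
          spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
          spec = (spec - spec.mean()) / spec.std()
          angles = np.arange(0.0, 180.0)
          sinogram = radon(spec, theta=angles, circle=False)
          return angles[np.argmax(sinogram.var(axis=0))]

      def motion_psf(length, angle_deg, size=33):
          # Line-shaped PSF of the given length, rotated to the blur angle.
          psf = np.zeros((size, size))
          psf[size // 2, size // 2 - length // 2: size // 2 + length // 2 + 1] = 1.0
          psf = rotate(psf, angle_deg, reshape=False, order=1)
          return psf / psf.sum()

      # Usage, with the blur length taken as known here:
      #   angle = blur_direction(blurred)   # blurred: float image in [0, 1]
      #   restored = richardson_lucy(blurred, motion_psf(9, angle), 30)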

  7. Comparative evaluation of the modified Scarff-Bloom-Richardson grading system on breast carcinoma aspirates and histopathology

    Directory of Open Access Journals (Sweden)

    Cherry Bansal

    2012-01-01

    Full Text Available Background: Fine needle aspiration (FNA) is a quick, minimally invasive procedure for the evaluation of breast tumors. The Scarff-Bloom-Richardson (SBR) grade on histological sections is a well-established tool to guide the selection of adjuvant systemic therapy. Grade evaluation is possible on cytology smears, to avoid and minimize the morbidity associated with overtreatment of lower grade tumors. Aim: The aim was to test the hypothesis of whether breast FNA from the peripheral portion of the lesion is representative of the Scarff-Bloom-Richardson grade on histopathology, as compared to FNA from the central portion. Materials and Methods: Fine-needle aspirates and subsequent tissue specimens from 45 women with ductal carcinoma (not otherwise specified) were studied. FNAs were performed under ultrasound guidance from the central as well as the peripheral third of the lesion for each case, avoiding areas of necrosis/calcification. The SBR grading was compared on alcohol-fixed aspirates and tissue sections for each case. Results: Comparative analysis of the SBR grade on aspirates from the peripheral portion and histopathology by the Pearson chi-square test (χ2 = 78.00) showed that it was statistically significant (P<0.001), with 93% concordance. A lower mitotic score on aspirates from the peripheral portion was observed in only 4 out of 45 (9%) cases. The results of the Pearson chi-square test (χ2 = 75.824) were also statistically significant (P=0.000). Conclusion: This prospective study shows that FNA smears from the peripheral portion of the lesion are representative of the grading performed on the corresponding histopathological sections. It is possible to score and grade by the SBR system on FNA smears.

  8. 216-A-29 Ditch supplemental information to the Hanford Facility Contingency Plan (DOE/RL-93-75)

    International Nuclear Information System (INIS)

    Ingle, S.J.

    1996-05-01

    This document is a unit-specific contingency plan for the 216-A-29 Ditch and is intended to be used as a supplement to DOE/RL-93-75, Hanford Facility Contingency Plan (DOE-RL 1993). This unit-specific plan is to be used to demonstrate compliance with the contingency plan requirements of the Washington Administrative Code, Chapter 173-303, for certain Resource Conservation and Recovery Act of 1976 waste management units. The 216-A-29 Ditch is a surface impoundment that received nonregulated process and cooling water and other dangerous wastes primarily from operations of the Plutonium/Uranium Extraction Plant. Active between 1955 and 1991, the ditch has been physically isolated and will be closed. Because it is no longer receiving discharges, waste management activities are no longer required at the unit. The ditch does not present a significant hazard to adjacent units, personnel, or the environment. It is unlikely that any incidents presenting hazards to public health or the environment would occur at the 216-A-29 Ditch.

  9. 216-U-12 Crib supplemental information to the Hanford Facility Contingency Plan (DOE/RL-93-75)

    International Nuclear Information System (INIS)

    Ingle, S.J.

    1996-05-01

    This document is a unit-specific contingency plan for the 216-U-12 Crib and is intended to be used as a supplement to DOE/RL-93-75, Hanford Facility Contingency Plan (DOE-RL 1993). This unit-specific plan is to be used to demonstrate compliance with the contingency plan requirements of the Washington Administrative Code, Chapter 173-303, for certain Resource Conservation and Recovery Act of 1976 waste management units. The 216-U-12 Crib is a landfill that received waste from the 291-U-1 Stack, 244-WR Vault, 244-U via tank C-5, and the UO3 Plant. The crib pipeline was cut and permanently capped in 1988, and the crib has been backfilled. The unit will be closed under final facility standards. Waste management activities are no longer required at the unit. The crib does not present a significant hazard to adjacent units, personnel, or the environment. It is unlikely that any incidents presenting hazards to public health or the environment would occur at the 216-U-12 Crib.

  10. Restoration of motion-blurred image based on border deformation detection: a traffic sign restoration model.

    Directory of Open Access Journals (Sweden)

    Yiliang Zeng

    Full Text Available Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation based on computer vision is unavoidable during the vehicle movement process. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and then the width of the border is measured in all directions. According to the measured width and the corresponding direction, both the motion direction and scale of the image can be confirmed, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to the traditional restoration approach based on the blind deconvolution method and the Lucy-Richardson method, our method can greatly restore motion-blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently.

  11. From Pierre de Sainte-Lucie to Benoit Rigaud: the Lyon Transformations of the «Chevalier de la Croix» (1534, 1581)

    Directory of Open Access Journals (Sweden)

    Anne Réach-Ngô

    2015-07-01

    Full Text Available Le Chevalier de la Croix is one of the Troyes Library (Oudot, 1612) chivalric romances imported from Spain through Lyon onto the French publishing scene in the 16th century (Saint-Lucie, 1534), before its publication in Paris the following year (Janot, 1535). Half a century later, Lyon was also the place where the book had a second youth (Rigaud, 1581), which was soon to be extended through a new Parisian publication (Bonfons, 1584). The examination of the book's editorial transformations throughout the century, along with the analysis of those two waves of publication, brings to light the evolution of the competitive relationship between Paris and Lyon in the editorial history of this work.

  12. An Interface Tracking Algorithm for the Porous Medium Equation.

    Science.gov (United States)

    1983-03-01

    Technical summary report by E. DiBenedetto et al., Mathematics Research Center, University of Wisconsin-Madison, March 1983 (unclassified; approved for public release).

  13. 7,12-Dimethylbenzanthracene induces apoptosis in RL95-2 human endometrial cancer cells: Ligand-selective activation of cytochrome P450 1B1

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Young [Department of Anatomy and Cell Biology, College of Medicine, Dong-A University, Busan 602-714 (Korea, Republic of); Medical Research Science Center, Dong-A University, Busan 602-714 (Korea, Republic of); Lee, Seung Gee [Department of Anatomy and Cell Biology, College of Medicine, Dong-A University, Busan 602-714 (Korea, Republic of); Mitochondria Hub Regulation Center, Dong-A University, Busan 602-714 (Korea, Republic of); Chung, Jin-Yong [Department of Anatomy and Cell Biology, College of Medicine, Dong-A University, Busan 602-714 (Korea, Republic of); Medical Research Science Center, Dong-A University, Busan 602-714 (Korea, Republic of); Kim, Yoon-Jae [Department of Anatomy and Cell Biology, College of Medicine, Dong-A University, Busan 602-714 (Korea, Republic of); Mitochondria Hub Regulation Center, Dong-A University, Busan 602-714 (Korea, Republic of); Park, Ji-Eun [Department of Anatomy and Cell Biology, College of Medicine, Dong-A University, Busan 602-714 (Korea, Republic of); Medical Research Science Center, Dong-A University, Busan 602-714 (Korea, Republic of); Oh, Seunghoon [Department of Physiology, College of Medicine, Dankook University, Cheonan 330-714 (Korea, Republic of); Lee, Se Yong [Department of Obstetrics and Gynecology, Busan Medical Center, Busan 611-072 (Korea, Republic of); Choi, Hong Jo [Department of General Surgery, College of Medicine, Dong-A University, Busan 602-714 (Korea, Republic of); Yoo, Young Hyun, E-mail: yhyoo@dau.ac.kr [Department of Anatomy and Cell Biology, College of Medicine, Dong-A University, Busan 602-714 (Korea, Republic of); Mitochondria Hub Regulation Center, Dong-A University, Busan 602-714 (Korea, Republic of); Medical Research Science Center, Dong-A University, Busan 602-714 (Korea, Republic of); and others

    2012-04-15

    7,12-Dimethylbenzanthracene (DMBA), a polycyclic aromatic hydrocarbon, exhibits mutagenic, carcinogenic, immunosuppressive, and apoptogenic properties in various cell types. To achieve these functions effectively, DMBA is modified to its active form by cytochrome P450 1 (CYP1). Exposure to DMBA causes cytotoxicity-mediated apoptosis in bone marrow B cells and ovarian cells. Although uterine endometrium constitutively expresses CYP1A1 and CYP1B1, their apoptotic role after exposure to DMBA remains to be elucidated. Therefore, we chose RL95-2 endometrial cancer cells as a model system for studying DMBA-induced cytotoxicity and cell death and hypothesized that exposure to DMBA causes apoptosis in this cell type following CYP1A1 and/or CYP1B1 activation. We showed that DMBA-induced apoptosis in RL95-2 cells is associated with activation of caspases. In addition, mitochondrial changes, including decrease in mitochondrial potential and release of mitochondrial cytochrome c into the cytosol, support the hypothesis that a mitochondrial pathway is involved in DMBA-induced apoptosis. Exposure to DMBA upregulated the expression of AhR, Arnt, CYP1A1, and CYP1B1 significantly; this may be necessary for the conversion of DMBA to DMBA-3,4-diol-1,2-epoxide (DMBA-DE). Although both CYP1A1 and CYP1B1 were significantly upregulated by DMBA, only CYP1B1 exhibited activity. Moreover, knockdown of CYP1B1 abolished DMBA-induced apoptosis in RL95-2 cells. Our data show that RL95-2 cells are susceptible to apoptosis by exposure to DMBA and that CYP1B1 plays a pivotal role in DMBA-induced apoptosis in this system. -- Highlights: ► Cytotoxicity-mediated apoptogenic action of DMBA in human endometrial cancer cells. ► Mitochondrial pathway in DMBA-induced apoptosis of RL95-2 endometrial cancer cells. ► Requirement of ligand-selective activation of CYP1B1 in DMBA-induced apoptosis.

  14. Reduction of characteristic RL time for fast, efficient magnetic levitation

    Directory of Open Access Journals (Sweden)

    Yuqing Li

    2017-09-01

    Full Text Available We demonstrate the reduction of the characteristic time of a resistor-inductor (RL) circuit for fast, efficient magnetic levitation, in accordance with Kirchhoff's circuit laws. The loading time is reduced by a factor of ∼4 when a high-power resistor is added in series with the coils. By using the controllable output voltage of the power supply and the voltage of a feedback circuit, the loading time is further reduced by a factor of ∼3. Overshoot loading in advance of the scheduled magnetic field gradient is equivalent to continuously adding a resistor, without the heating. The magnetic field gradient with the reduced loading time is used to form an upward magnetic force against the gravity of the cooled Cs atoms, and we obtain efficient levitated loading of the Cs atoms into a crossed optical dipole trap.
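
    The scaling behind the series-resistor trick follows from the RL time constant τ = L/R; a back-of-the-envelope sketch with illustrative component values (not the paper's actual circuit parameters):

      # RL circuit time constant: tau = L / R, so a series resistor shortens tau.
      L_coil = 0.5      # coil inductance, H (illustrative)
      R_coil = 2.0      # coil resistance, ohm (illustrative)
      R_series = 6.0    # added high-power series resistor, ohm (illustrative)

      tau_bare = L_coil / R_coil
      tau_fast = L_coil / (R_coil + R_series)
      print(f"tau: {tau_bare * 1e3:.0f} ms -> {tau_fast * 1e3:.0f} ms "
            f"({tau_bare / tau_fast:.0f}x faster)")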

  15. Optimal and Autonomous Control Using Reinforcement Learning: A Survey.

    Science.gov (United States)

    Kiumarsi, Bahare; Vamvoudakis, Kyriakos G; Modares, Hamidreza; Lewis, Frank L

    2018-06-01

    This paper reviews the current state of the art on reinforcement learning (RL)-based feedback control solutions to optimal regulation and tracking of single and multiagent systems. Existing RL solutions to both optimal H2 and H∞ control problems, as well as graphical games, will be reviewed. RL methods learn the solution to optimal control and game problems online and using measured data along the system trajectories. We discuss Q-learning and the integral RL algorithm as core algorithms for discrete-time (DT) and continuous-time (CT) systems, respectively. Moreover, we discuss a new direction of off-policy RL for both CT and DT systems. Finally, we review several applications.
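
    As a concrete anchor for the core algorithms named above, a minimal tabular Q-learning loop on a toy chain MDP (purely illustrative; the survey itself concerns continuous-state feedback control):

      import numpy as np

      # Tabular Q-learning on a five-state chain: actions move left/right and
      # reaching the right end pays reward 1.
      n_states, n_actions = 5, 2
      Q = np.zeros((n_states, n_actions))
      alpha, gamma, eps = 0.1, 0.95, 0.1
      rng = np.random.default_rng(0)

      def step(s, a):
          s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
          return s2, 1.0 if s2 == n_states - 1 else 0.0

      for _ in range(500):
          s = 0
          for _ in range(50):
              # Epsilon-greedy behavior policy.
              a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
              s2, r = step(s, a)
              # Off-policy update: bootstrap on the greedy successor value.
              Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
              s = s2
              if r > 0:
                  break

      # Greedy policy learns to move right along the chain
      # (the terminal state itself is never updated).
      print(np.argmax(Q, axis=1))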

  16. 216-A-36B Crib supplemental information to the Hanford Facility Contingency Plan (DOE/RL-93-75)

    International Nuclear Information System (INIS)

    Ingle, S.J.

    1996-05-01

    This document is a unit-specific contingency plan for the 216-A-36B Crib and is intended to be used as a supplement to DOE/RL-93-75, Hanford Facility Contingency Plan (DOE-RL 1993). This unit-specific plan is to be used to demonstrate compliance with the contingency plan requirements of the Washington Administrative Code, Chapter 173-303, for certain Resource Conservation and Recovery Act of 1976 waste management units. The 216-A-36B Crib is a landfill that received ammonia scrubber waste from the 202-A Building (Plutonium/Uranium Extraction Plant) between 1966 and 1972. In 1982, the unit was reactivated to receive additional waste from Plutonium/Uranium Extraction operations. Discharges ceased in 1987, and the crib will be closed under final facility standards. Because the crib is not receiving discharges, waste management activities are no longer required. The crib does not present a significant hazard to adjacent units, personnel, or the environment. There is little likelihood that any incidents presenting hazards to public health or the environment would occur at the 216-A-36B Crib.

  17. Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond.

    Science.gov (United States)

    Morita, Kenji; Jitsev, Jenia; Morrison, Abigail

    2016-09-15

    Value-based action selection has been suggested to be realized in the corticostriatal local circuits through competition among neural populations. In this article, we review theoretical and experimental studies that have constructed and verified this notion, and provide new perspectives on how the local-circuit selection mechanisms implement reinforcement learning (RL) algorithms and computations beyond them. The striatal neurons are mostly inhibitory, and lateral inhibition among them has been classically proposed to realize "Winner-Take-All (WTA)" selection of the maximum-valued action (i.e., 'max' operation). Although this view has been challenged by the revealed weakness, sparseness, and asymmetry of lateral inhibition, which suggest more complex dynamics, WTA-like competition could still occur on short time scales. Unlike the striatal circuit, the cortical circuit contains recurrent excitation, which may enable retention or temporal integration of information and probabilistic "soft-max" selection. The striatal "max" circuit and the cortical "soft-max" circuit might co-implement an RL algorithm called Q-learning; the cortical circuit might also similarly serve for other algorithms such as SARSA. In these implementations, the cortical circuit presumably sustains activity representing the executed action, which negatively impacts dopamine neurons so that they can calculate reward-prediction-error. Regarding the suggested more complex dynamics of striatal, as well as cortical, circuits on long time scales, which could be viewed as a sequence of short WTA fragments, computational roles remain open: such a sequence might represent (1) sequential state-action-state transitions, constituting replay or simulation of the internal model, (2) a single state/action by the whole trajectory, or (3) probabilistic sampling of state/action. Copyright © 2016. Published by Elsevier B.V.
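
    The two selection operations contrasted above can be written down in a few lines; this is a schematic sketch of the operations themselves, not a model of the biological circuits:

      import numpy as np

      def wta_select(q):
          # "max" operation: deterministic Winner-Take-All over action values.
          return int(np.argmax(q))

      def softmax_select(q, beta, rng):
          # "soft-max" operation: probabilistic choice with inverse temperature
          # beta; large beta approaches WTA, small beta approaches uniform.
          p = np.exp(beta * (q - np.max(q)))
          p /= p.sum()
          return int(rng.choice(len(q), p=p))

      rng = np.random.default_rng(1)
      q = np.array([0.2, 0.5, 0.45])
      print(wta_select(q))                                      # always 1
      print([softmax_select(q, beta=5.0, rng=rng) for _ in range(10)])  # mostly 1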

  18. Balanced calibration of resonant piezoelectric RL shunts with quasi-static background flexibility correction

    DEFF Research Database (Denmark)

    Høgsberg, Jan Becker; Krenk, Steen

    2015-01-01

    Resonant RL shunt circuits constitute a robust approach to piezoelectric damping, where the performance with respect to damping of flexible structures requires a precise calibration of the corresponding circuit components. The balanced calibration procedure of the present paper is based on equal ...... that the procedure leads to equal modal damping and effective response reduction, even for rather indirect placement of the transducer, provided that the correction for background flexibility is included in the calibration procedure....

  19. The validity of the PAM-RL device for evaluating periodic limb movements in sleep and an investigation on night-to-night variability of periodic limb movements during sleep in patients with restless legs syndrome or periodic limb movement disorder using this system.

    Science.gov (United States)

    Kobayashi, Mina; Namba, Kazuyoshi; Ito, Eiki; Nishida, Shingo; Nakamura, Masaki; Ueki, Yoichiro; Furudate, Naomichi; Kagimura, Tatsuo; Usui, Akira; Inoue, Yuichi

    2014-01-01

    The status of night-to-night variability for periodic limb movements in sleep (PLMS) has not been clarified. With this in mind, we investigated the validity of PLMS measurement by actigraphy with the PAM-RL device in Japanese patients with suspected restless legs syndrome (RLS) or periodic limb movement disorder (PLMD), and the night-to-night variability of PLMS among the subjects. Forty-one subjects (mean age, 52.1±16.1 years) underwent polysomnography (PSG) and PAM-RL measurement simultaneously. Thereafter, subjects used the PAM-RL at home on four more consecutive nights. The correlation between the PLMS index on PSG (PLMSI-PSG) and the PLM index on PAM-RL (PLMI-PAM) was 0.781 (P<0.001). PAM-RL is thought to be valuable for assessing PLMS even in Japanese subjects. Recording of PAM-RL for three or more consecutive nights may be required to ensure the screening reliability for a patient with suspected pathologically frequent PLMS. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Studies on mixing of the waters of different salinity gradients using Richardson's number and the suspended sediment distribution in the Beypore Estuary, south west coast of India

    Digital Repository Service at National Institute of Oceanography (India)

    AnilKumar, N.; Sankaranarayanan, V.N.; Josanto, V.

    of Richardson's number (log R_L) shows high variation at the river mouth section of the estuary (section 1) and at about 10 km upstream (section 2) during the postmonsoon period. During the premonsoon period there was no noticeable variation in log R_L...

  1. Equivariant quantum Schubert calculus

    OpenAIRE

    Mihalcea, Leonardo Constantin

    2006-01-01

    We study the T-equivariant quantum cohomology of the Grassmannian. We prove the vanishing of a certain class of equivariant quantum Littlewood-Richardson coefficients, which implies an equivariant quantum Pieri rule. As in the equivariant case, this implies an algorithm to compute the equivariant quantum Littlewood-Richardson coefficients.

  2. Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning

    Science.gov (United States)

    Fu, QiMing

    2016-01-01

    To improve the convergence rate and sample efficiency, two learning methods, AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization), are proposed by combining the actor-critic algorithm with hierarchical model learning and planning. The hierarchical models, consisting of a local and a global model, which are learned at the same time as the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of using both models is to improve sample efficiency and accelerate the convergence rate of the whole algorithm by fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704

  3. A Novel adaptative Discrete Cuckoo Search Algorithm for parameter optimization in computer vision

    Directory of Open Access Journals (Sweden)

    loubna benchikhi

    2017-10-01

    Full Text Available Computer vision applications require choosing operators and their parameters in order to provide the best outcomes. Often, users draw on expert knowledge and must experiment with many combinations to find the best one manually. As performance, time and accuracy are important, it is necessary to automate parameter optimization, at least for crucial operators. In this paper, a novel approach based on an adaptive discrete cuckoo search algorithm (ADCS) is proposed. It automates the process of algorithm setting and provides optimal parameters for vision applications. This work reconsiders a discretization problem to adapt the cuckoo search algorithm and presents the procedure of parameter optimization. Experiments on real examples and comparisons to other metaheuristic-based approaches, particle swarm optimization (PSO), reinforcement learning (RL) and ant colony optimization (ACO), show the efficiency of this novel method.
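
    For orientation, the continuous cuckoo-search baseline that the paper discretizes and adapts looks roughly as follows; the objective, bounds and constants here are placeholders, not the paper's setup:

      import numpy as np
      from math import gamma, sin, pi

      def levy(rng, beta=1.5, size=2):
          # Mantegna's algorithm for Levy-flight step lengths.
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

      def cuckoo_search(cost, lo, hi, n_nests=15, n_iter=200, pa=0.25, seed=0):
          rng = np.random.default_rng(seed)
          dim = len(lo)
          nests = rng.uniform(lo, hi, (n_nests, dim))
          f = np.array([cost(x) for x in nests])
          for _ in range(n_iter):
              best = nests[np.argmin(f)]
              for i in range(n_nests):
                  # Levy-flight move biased toward the current best nest.
                  trial = np.clip(nests[i] + 0.01 * levy(rng, size=dim) * (nests[i] - best),
                                  lo, hi)
                  if (ft := cost(trial)) < f[i]:
                      nests[i], f[i] = trial, ft
              # Abandon a fraction pa of the worst nests and re-seed them randomly.
              worst = np.argsort(f)[-max(1, int(pa * n_nests)):]
              nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
              f[worst] = [cost(x) for x in nests[worst]]
          return nests[np.argmin(f)]

      # Usage on a toy quadratic "operator-tuning" objective:
      best = cuckoo_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                           lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))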

  4. Cationic PLGA/Eudragit RL nanoparticles for increasing retention time in synovial cavity after intra-articular injection in knee joint

    Directory of Open Access Journals (Sweden)

    Kim SR

    2015-08-01

    Full Text Available Positively surface-charged poly(lactide-co-glycolide) (PLGA)/Eudragit RL nanoparticles (NPs) were designed to increase retention time and sustain the release profile in joints after intra-articular injection, by forming micrometer-sized electrostatic aggregates with hyaluronic acid, an endogenous anionic polysaccharide found in high amounts in synovial fluid. The cationic NPs, consisting of PLGA, Eudragit RL, and polyvinyl alcohol, were fabricated by a solvent evaporation technique. The NPs were 170.1 nm in size, with a zeta potential of 21.3 mV in phosphate-buffered saline. Hyperspectral imaging (CytoViva®) revealed the formation of micrometer-sized filamentous aggregates upon admixing, due to electrostatic interaction between the NPs and the polysaccharide. NPs loaded with a fluorescent probe (1,1'-dioctadecyl-3,3,3',3'-tetramethylindotricarbocyanine iodide, DiR) displayed a significantly improved retention time in the knee joint, with over 50% preservation of the fluorescent signal 28 days after injection. When DiR solution was injected intra-articularly, the fluorescence levels rapidly decreased to 30% of the initial concentration within 3 days in mice. From these findings, we suggest that PLGA-based cationic NPs could be a promising tool for prolonged delivery of therapeutic agents selectively in joints. Keywords: PLGA, Eudragit RL, hyaluronic acid, cationic nanoparticles, intra-articular injection, electrostatic interaction

  5. Optimal Control via Reinforcement Learning with Symbolic Policy Approximation

    NARCIS (Netherlands)

    Kubalìk, Jiřì; Alibekov, Eduard; Babuska, R.; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    Model-based reinforcement learning (RL) algorithms can be used to derive optimal control laws for nonlinear dynamic systems. With continuous-valued state and input variables, RL algorithms have to rely on function approximators to represent the value function and policy mappings. This paper

  6. Development of Human-level Decision Making Algorithm for NPPs through Deep Neural Networks : Conceptual Approach

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2017-01-01

    Development of operation support systems and automation systems is closely related to the machine learning field. However, since it is hard to achieve human-level delicacy and flexibility for complex tasks with conventional machine learning technologies, only operation support systems with simple purposes have been developed, and high-level automation studies have not been actively conducted. As part of the effort to reduce human error in NPPs and to advance toward automation, the ultimate goal of this research is to develop a human-level decision-making algorithm for NPPs during emergency situations. The concepts of SL, RL, policy networks, value networks, and MCTS, which have been applied to decision-making algorithms in other fields, are introduced and combined with nuclear field specifications. Since the research is currently at the conceptual stage, more research is warranted.

  7. US Department of Energy Secretary Bill Richardson (centre) at an LHC interaction region quadrupole test cryostat. part of the US contribution to LHC construction and built by the US-LHC collaboration (hence the Fermilab logo)

    CERN Multimedia

    Barbara Warmbein

    2000-01-01

    Photo 01 : September 2000 - Mr Bill Richardson, Secretary of Energy, United States of America (centre) at an LHC interaction region quadrupole test cryostat, part of the US contribution to LHC construction and built by the US-LHC collaboration (hence the Fermilab logo); with l. to r. Dr Mildred Dresselhaus, Dr Carlo Wyss, CERN Director General Professor Luciano Maiani, Professor Roger Cashmore, Ambassador George Moose, Dr Peter Rosen, Dr John Ellis. Photo 02 : Mr. Bill Richardson (right), Secretary of Energy, United States of America, with Prof. Luciano Maiani, leaning over one of the LHC magnets produced at Fermilab, during his visit to CERN on 16th September 2000.

  8. Reinforcement Learning in Distributed Domains: Beyond Team Games

    Science.gov (United States)

    Wolpert, David H.; Sill, Joseph; Tumer, Kagan

    2000-01-01

    Distributed search algorithms are crucial in dealing with large optimization problems, particularly when a centralized approach is not only impractical but infeasible. Many machine learning concepts have been applied to search algorithms in order to improve their effectiveness. In this article we present an algorithm that blends Reinforcement Learning (RL) and hill climbing directly, by using the RL signal to guide the exploration step of a hill climbing algorithm. We apply this algorithm to the domain of a constellation of communication satellites, where the goal is to minimize the loss of importance-weighted data. We introduce the concept of 'ghost' traffic, where correctly setting this traffic induces the satellites to act to optimize the world utility. Our results indicate that the bi-utility search introduced in this paper outperforms both traditional hill climbing algorithms and distributed RL approaches such as team games.

  9. On the Spatial Distribution of High Velocity Al-26 Near the Galactic Center

    Science.gov (United States)

    Sturner, Steven J.

    2000-01-01

    We present results of simulations of the distribution of 1809 keV radiation from the decay of Al-26 in the Galaxy. Recent observations of this emission line using the Gamma Ray Imaging Spectrometer (GRIS) have indicated that the bulk of the Al-26 must have a velocity of approx. 500 km/s. We have previously shown that a velocity this large could be maintained over the 10^6 year lifetime of the Al-26 if it is trapped in dust grains that are reaccelerated periodically in the ISM. Here we investigate whether a dust grain velocity of approx. 500 km/s will produce a distribution of 1809 keV emission in latitude that is consistent with the narrow distribution seen by COMPTEL. We find that dust grain velocities in the range 275-1000 km/s are able to reproduce the COMPTEL 1809 keV emission maps reconstructed using the Richardson-Lucy and Maximum Entropy image reconstruction methods, while the emission map reconstructed using the Multiresolution Regularized Expectation Maximization algorithm is not well fit by any of our models. The Al-26 production rate needed to reproduce the observed 1809 keV intensity yields a Galactic mass of Al-26 of approx. 1.5-2 solar masses, which is in good agreement with both other observations and theoretical production rates.

  10. Environmental Management Performance Report to DOE-RL September 2001

    International Nuclear Information System (INIS)

    EDER, D.M.

    2001-01-01

    The purpose of this report is to provide the Department of Energy Richland Operations Office (RL) a monthly summary of the Central Plateau Contractor's Environmental Management (EM) performance by Fluor Hanford (FH) and its subcontractors. Section A, Executive Summary, provides an executive-level summary of the cost, schedule, and technical performance described in this report. It summarizes performance for the period covered, highlights areas worthy of management attention, and provides a forward look to some of the upcoming key performance activities as extracted from the contractor baseline. The remaining sections provide detailed performance data relative to each individual project (e.g., Waste Management, Spent Nuclear Fuels, etc.), in support of Section A of the report. Unless otherwise noted, the Safety, Conduct of Operations, and Cost/Schedule data contained herein are as of July 31, 2001. All other information is updated as of August 22, 2001, unless otherwise noted. "Stoplight" boxes are used to indicate at a glance the condition of a particular area. Green boxes denote on schedule. Yellow denotes behind schedule but recoverable. Red denotes either missed or unrecoverable without agreement by the regulating party.

  11. A Leu to Ile but not Leu to Val change at HIV-1 reverse transcriptase codon 74 in the background of K65R mutation leads to an increased processivity of K65R+L74I enzyme and a replication competent virus

    Directory of Open Access Journals (Sweden)

    Crumpacker Clyde S

    2011-01-01

    Full Text Available Abstract Background: The major hurdle in the treatment of Human Immunodeficiency virus type 1 (HIV-1) is the development of drug resistance-associated mutations in the target regions of the virus. Since reverse transcriptase (RT) is essential for HIV-1 replication, several nucleoside analogues have been developed to target the RT of the virus. Clinical studies have shown that mutations at RT codons 65 and 74, which are located in the β3-β4 linkage group of the finger sub-domain of RT, are selected during treatment with several RT inhibitors, including didanosine, deoxycytidine, abacavir and tenofovir. Interestingly, the co-selection of K65R and L74V is rare in clinical settings. We have previously shown that K65R and L74V are incompatible and that an R→K reversion occurs at codon 65 during replication of the virus. Analysis of the HIV resistance database has revealed that, similar to K65R+L74V, the double mutant K65R+L74I is also rare. We sought to compare the impact of an L→V versus an L→I change at codon 74, in the background of the K65R mutation, on the replication of the doubly mutant viruses. Methods: Proviral clones containing K65R, L74V, L74I, K65R+L74V and K65R+L74I RT mutations were created in the pNL4-3 backbone and viruses were produced in 293T cells. Replication efficiencies of all the viruses were compared in peripheral blood mononuclear (PBM) cells in the absence of selection pressure. The replication capacity (RC) of the mutant viruses relative to wild type was calculated on the basis of antigen p24 production and RT activity, and paired analysis by Student's t-test was performed among the RCs of the doubly mutant viruses. Reversion at RT codons 65 and 74 was monitored during replication in PBM cells. The in vitro processivity of mutant RTs was measured to analyze the impact of amino acid changes at RT codon 74. Results: Replication kinetics plots showed that all of the mutant viruses were attenuated as compared to the wild type (WT) virus. Although attenuated in comparison to WT virus

  12. Socioeconomic impacts of nuclear generating stations: St. Lucie case study. Technical report 1 Oct 78-4 Jan 82

    International Nuclear Information System (INIS)

    Weisiger, M.L.; Pijawka, K.D.

    1982-07-01

    The report documents a case study of the socioeconomic impacts of the construction and operation of the St. Lucie nuclear power station. It is part of a major post-licensing study of the socioeconomic impacts at twelve nuclear power stations. The case study covers the period beginning with the announcement of plans to construct the reactor and ending in the period 1980-81. The case study deals with changes in the economy, population, settlement patterns and housing, local government and public services, social structure, and public response in the study area during the construction/operation of the reactor. A regional modeling approach is used to trace the impact of construction/operation on the local economy, labor market, and housing market. Emphasis in the study is on the attribution of socioeconomic impacts to the reactor or other causal factors. As part of the study of local public response to the construction/operation of the reactor, the effects of the Three Mile Island accident are examined.

  13. Preparation of the study of the quark-gluon plasma in ALICE: the V0 detector and the low masses resonances in the muon spectrometer

    International Nuclear Information System (INIS)

    Nendaz, F.

    2009-09-01

    The ALICE (A Large Ion Collider Experiment) experiment at the LHC will study from 2010 the quark-gluon plasma (QGP), a phase of matter in which quarks and gluons are deconfined. The work presented here was done within the ALICE collaboration, in preparation for the analysis of the incoming experimental data. Besides a theoretical approach to the QGP and to chiral symmetry, we develop three experimental aspects: the V0 sub-detector, the study of the low-mass mesons, and deconvolution. First, we detail the luminosity and multiplicity measurements that can be done with the V0. We then develop the study of the dimuons in the muon spectrometer, concentrating on the low-mass mesons: the rho, the omega and the phi. Finally, we present a method for improving the spectrometer data: the Richardson-Lucy deconvolution. (author)

  14. Environmental Management Performance Report to DOE-RL December 2001

    International Nuclear Information System (INIS)

    EDER, D.M.

    2001-01-01

    The purpose of this report is to provide the Department of Energy Richland Operations Office (RL) a monthly summary of the Central Plateau Contractor's Environmental Management (EM) performance by Fluor Hanford (FH) and its subcontractors. Only current FH workscope responsibilities are described. Please refer to other sections (BHI, PNNL) for other contractor information. Section A, Executive Summary, provides an executive-level summary of the cost, schedule, and technical performance described in this report. It summarizes performance for the period covered, highlights areas worthy of management attention, and provides a forward look to some of the upcoming key performance activities as extracted from the contractor baseline. The remaining sections provide detailed performance data relative to each individual subproject (e.g., Plutonium Finishing Plant, Spent Nuclear Fuels, etc.), in support of Section A of the report. All information is updated as of October 31, 2001, unless otherwise noted. "Stoplight" boxes are used to indicate at a glance the condition of a particular safety area. Green boxes denote either (1) that the data are stable at a level representing "acceptable" performance, or (2) that an improving trend exists. Yellow denotes that the data are stable at a level from which improvement is needed. Red denotes that a trend exists in a non-improving direction.

  15. Precision of RL/OSL medical dosimetry with fiber-coupled Al2O3:C: Influence of readout delay and temperature variations

    DEFF Research Database (Denmark)

    Andersen, Claus Erik; Morgenthaler Edmund, Jens; Damkjær, Sidsel Marie Skov

    2010-01-01

    Carbon-doped aluminum oxide (Al2O3:C) crystals attached to 15 m optical fiber cables can be used for online in vivo dosimetry during, for example, remotely afterloaded brachytherapy. Radioluminescence (RL) is generated spontaneously in Al2O3:C during irradiation, and this scintillator-like signal...

  16. Use of a dissolved-gas measurement system for reducing the dissolved oxygen at St. Lucie Unit 2

    International Nuclear Information System (INIS)

    Snyder, D.T.; Coit, R.L.

    1993-02-01

    When the dissolved oxygen in the condensate at St. Lucie Unit 2 could not be reduced below the administrative limit of 10 ppb, EPRI cooperated with Florida Power and Light to find the cause and develop remedies. Two problems were identified with the assistance of a dissolved gas measurement system (DGMS) that can detect leaks into condensate when used with argon blanketing. Drain piping from the air ejection system had flooded, which decreased its performance, and leaks were found at a strainer flange and a couple of expansion joints. Initially the dissolved oxygen content was reduced to about 9 ppb; however, the dissolved oxygen from Condenser A was consistently higher than that from Condenser B. Injection of about 0.4 cubic feet per minute (CFM) of argon above the hotwell considerably improved the ventilation of Condenser A, reducing the dissolved oxygen about 30% to about 6 ppb. The use of nitrogen was equally effective. While inert gas injection is helpful, it may be better to have separate air ejectors for each condenser. Several recommendations for improving oxygen removal are given.

  17. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of its algorithm determines the quality and resolution of the reconstructed image. Although several algorithms are in use, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for overcoming artifacts in the reconstructed image. Since the simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise, an improved wavelet denoising combined with a parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, the reconstruction results were compared between the improved wavelet denoising and other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were each tested. The experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation standards, the mean squared error (MSE) and the peak signal-to-noise ratio (PSNR), it was found that the improved FBP based on db2 and a Hanning filter at decomposition scale 2 was best: its MSE value was lower and its PSNR value higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
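
    A minimal sketch of such a filter comparison, using skimage's parallel-beam radon/iradon pair ('ramp' is skimage's name for the Ram-Lak filter); the improved wavelet denoising step itself is omitted, and the noise level is arbitrary:

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon

      image = shepp_logan_phantom()
      theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
      sinogram = radon(image, theta=theta)
      sinogram += np.random.default_rng(0).normal(0, 0.5, sinogram.shape)  # noisy projections

      def psnr(ref, rec):
          # Peak signal-to-noise ratio in dB.
          mse = np.mean((ref - rec) ** 2)
          return 10 * np.log10(ref.max() ** 2 / mse)

      for name in ["ramp", "shepp-logan", "hann"]:
          rec = iradon(sinogram, theta=theta, filter_name=name)
          print(f"{name:12s} PSNR = {psnr(image, rec):.2f} dB")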

  18. The Lipkin-Meshkov-Glick model from the perspective of the SU(1,1) Richardson-Gaudin models

    International Nuclear Information System (INIS)

    Lerma-H, Sergio; Dukelsky, Jorge

    2014-01-01

    Originally introduced in nuclear physics as a numerical laboratory to test different many-body approximation methods, the Lipkin-Meshkov-Glick (LMG) model has received much attention as a simple but non-trivial model with many interesting features for areas of physics beyond the nuclear one. In this contribution we look at the LMG model as a particular example of an SU(1,1) Richardson-Gaudin model. The characteristics of the model are analyzed in terms of the behavior of the spectral parameters, or pairons, which determine both the eigenvalues and the eigenfunctions of the model Hamiltonian. The problem of finding these pairons is mathematically equivalent to obtaining the equilibrium positions of a set of electric charges moving in a two-dimensional space. The electrostatic problems for the different regions of the model parameter space are discussed and linked to the different energy densities of states already identified in the LMG spectrum.
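
    Schematically, and up to sign and normalization conventions that vary across the literature, the pairons E_α of a rational Richardson-Gaudin model solve a set of coupled algebraic equations of the form

      \[
        \frac{1}{g} + \sum_{j} \frac{d_j}{\eta_j - E_\alpha}
                    - \sum_{\beta \neq \alpha} \frac{2}{E_\beta - E_\alpha} = 0,
        \qquad \alpha = 1, \dots, M .
      \]

    Each equation can be read as the vanishing of the net two-dimensional "Coulomb force" on a free unit charge at E_α: the pairons repel one another, interact with fixed charges d_j pinned at the single-particle energies η_j, and feel a uniform field set by the coupling g.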

  19. Tree exploration for Bayesian RL exploration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Mohammadian, M.

    2008-01-01

    Research in reinforcement learning has produced algorithms for optimal decision making under uncertainty that fall within two main types. The first employs a Bayesian framework, where optimality improves with increased computational time. This is because the resulting planning task takes the form of

  20. Public conference | Past, present future: LHC and future possibilities | Michelangelo Mangano, Lucie Linssen and Günther Dissertori | 20 November

    CERN Multimedia

    2014-01-01

    Public conference “Past, present future: LHC and future possibilities” by Michelangelo Mangano, Lucie Linssen and Günther Dissertori.   Thursday, 20 November, 7.30 p.m. in the Globe of Science and Innovation Talk in English with simultaneous interpreting into French. Entrance free. Limited number of seats. Reservation essential : +41 22 767 76 76 or cern.reception@cern.ch Webcast at www.cern.ch/webcast “Open problems in particle physics after the Higgs discovery”, by Michelangelo Mangano Michelangelo Mangano. Abstract The discovery of the Higgs boson is the most significant outcome so far of the LHC experiments. This discovery addresses issues in our understanding of nature that have been on the table for almost 50 years. It also provides us with a more solid basis from which to continue our exploration of the other open problems in particle physics, such as: what is the nature of dark matter? What is the origin of matter? Do all forces o...

  1. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson's arms race model.

    Science.gov (United States)

    Riaz, Faisal; Niazi, Muaz A

    2017-01-01

    This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) which interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: firstly, it has been simulated using NetLogo, a standard agent-based modeling tool, and also, at a practical level, using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random walk based collision avoidance strategy in congested flock-like topologies, while the practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, as compared with the IEEE 802.11n-based existing state-of-the-art mirroring neuron-based collision avoidance scheme.
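
    Richardson's original two-nation model, which the paper tailors for AV interaction, is a pair of linear ODEs; a minimal sketch with illustrative coefficients (the mentalizing/mirroring extensions of the paper are not reproduced here):

      import numpy as np
      from scipy.integrate import solve_ivp

      # Classic two-party Richardson arms-race model with illustrative
      # coefficients: reaction (k, l), fatigue (a, b) and grievance (g, h).
      k, l, a, b, g, h = 0.9, 0.8, 1.0, 1.2, 0.2, 0.1

      def richardson(t, z):
          x, y = z
          return [k * y - a * x + g, l * x - b * y + h]

      sol = solve_ivp(richardson, (0.0, 40.0), [0.0, 0.0])
      # A stable equilibrium exists when a*b > k*l:
      x_eq = (k * h + b * g) / (a * b - k * l)
      y_eq = (l * g + a * h) / (a * b - k * l)
      print(sol.y[:, -1].round(3), (round(x_eq, 3), round(y_eq, 3)))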

  2. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson's arms race model.

    Directory of Open Access Journals (Sweden)

    Faisal Riaz

    Full Text Available This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) which interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical model of Richardson's arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: firstly, it has been simulated using NetLogo, a standard agent-based modeling tool, and also, at a practical level, using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random walk based collision avoidance strategy in congested flock-like topologies, while the practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, as compared with the IEEE 802.11n-based existing state-of-the-art mirroring neuron-based collision avoidance scheme.

  3. Development of Reinforcement Learning Algorithm for Automation of Slide Gate Check Structure in Canals

    Directory of Open Access Journals (Sweden)

    K. Shahverdi

    2016-02-01

    Introduction: Nowadays, considering water shortage and weak management in the agricultural water sector, the performance of irrigation networks needs to be improved for optimal use of water. Recently, intelligent management of water conveyance and delivery, and better control technologies, have been considered for improving the performance of irrigation networks and their operation. To this end, a mathematical model of the automatic control system and related structures, connected with hydrodynamic models, is necessary. The main objective of this research is the development of a mathematical model of an RL upstream control algorithm inside the ICSS hydrodynamic model as a subroutine. Materials and Methods: In learning systems, a set of state-action rules called classifiers compete to control the system based on the system's receipt from the environment. Five main elements of RL can be identified: an agent, an environment, a policy, a reward function, and a simulator. The learner (decision-maker) is called the agent. The thing it interacts with, comprising everything outside the agent, is called the environment. The agent selects an action based on the existing state in the environment. When the agent takes an action and performs it on the environment, the environment transitions to a new state and a reward is assigned accordingly. The agent and the environment continually interact to maximize the reward. The policy is a set of state-action pairs with higher rewards. It defines the agent's behavior and says which action must be taken in which state. The reward function defines the goal in an RL problem: it defines what the good and bad events are for the agent, and the higher the reward, the better the action. The simulator provides environment information. In irrigation canals, the agent is the check structure; the action and state are the check-structure adjustment and the water depth, respectively. The environment comprises the hydraulic
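
    For readers unfamiliar with the five RL elements listed above, the sketch below wires them together in a minimal tabular Q-learning loop. The toy canal "environment", depth bins, and reward are illustrative assumptions only; they are not the ICSS hydrodynamic model or the paper's algorithm.

```python
import random

N_STATES, ACTIONS = 11, (-1, 0, +1)   # depth bins; lower/hold/raise the gate
TARGET = 5                            # desired depth bin
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: a gate action shifts depth, plus small random inflow."""
    nxt = min(N_STATES - 1, max(0, state + action + random.choice((-1, 0, 1))))
    reward = -abs(nxt - TARGET)       # closer to the target depth = higher reward
    return nxt, reward

state = random.randrange(N_STATES)
for _ in range(5000):
    # epsilon-greedy action selection from the current policy
    a = (random.choice(ACTIONS) if random.random() < eps
         else max(ACTIONS, key=lambda u: Q[(state, u)]))
    nxt, r = step(state, a)
    best_next = max(Q[(nxt, u)] for u in ACTIONS)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda u: Q[(s, u)]) for s in range(N_STATES)}
print(policy)  # the learned state -> action mapping (the "policy" above)
```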

  4. Limitation of Freedom of Expression According to the Decisions of the Constitutional Court and the European Court of Human Rights

    OpenAIRE

    ÇAMAK, Sultan

    2016-01-01

    Ensuring freedom of expression is essential for building a healthy society and state. It is generally accepted in legal systems that even freedom of expression may be limited in order to strike a balance in the conflicts of interest that arise between individuals as a consequence of social life. The European Convention on Human Rights, which may be considered the most important international instrument for the protection of fundamental rights and freedoms, also sets out the cases in which such limitation is deemed legitimate. For this rea...

  5. Improved real-time dosimetry using the radioluminescence signal from Al2O3:C

    International Nuclear Information System (INIS)

    Damkjaer, S.M.S.; Andersen, C.E.; Aznar, M.C.

    2008-01-01

    Carbon-doped aluminum oxide (Al2O3:C) is a highly sensitive luminescence material for ionizing radiation dosimetry, and it is well established that the optically stimulated luminescence (OSL) signal from Al2O3:C can be used for absorbed-dose measurements. During irradiation, Al2O3:C also emits prompt radioluminescence (RL), which allows for real-time dose verification. The RL signal is not linear in the absorbed dose due to sensitivity changes and the presence of shallow traps. Despite this, the signal can be processed to obtain a reliable dose-rate signal in real time. A simple algorithm for correcting the RL signal has been published previously, and here we report two improvements: a better and more stable calibration method that is independent of a reference dose rate, and a correction for the effect of the shallow traps. Good agreement was found between reference doses and doses derived from the RL signal using the new algorithm (the standard deviation of the residuals was ~2%, including phantom positioning errors). The RL algorithm was found to greatly reduce the influence of shallow traps in the range from 0 to 3 Gy, and the RL dose-rate measurements with a time resolution of 0.1 s closely matched dose-rate changes monitored with an ionization chamber

  6. Nucleon and isobar properties in a relativistic Hartree-Fock calculation with vector Richardson potential and various radial forms for scalar mass terms

    International Nuclear Information System (INIS)

    Dey, J.; Dey, M.; Mukhopadhyay, G.; Samanta, B.C.

    1989-01-01

    Mean field models of the nucleon and the delta are established with the two-quark vector Richardson potential along with various prescriptions for a running quark mass. This is taken to be a one-particle operator in the Dirac-Hartree-Fock formalism. An effective density-dependent one-body potential U(ρ) for quarks at a given density ρ inside the nucleon is derived. It shows an interesting structure. Asymptotic freedom and confinement properties are built in at high and low densities in U(ρ), and the model dependence is restricted to the intermediate densities. (author)
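
    For concreteness, Richardson's momentum-space quark-antiquark potential has the well-known one-parameter form used in such quark-model calculations; a minimal sketch evaluating it, assuming a scale Λ of 0.4 GeV and three flavors (illustrative values, not the paper's), is below.

```python
import numpy as np

def richardson_potential_q(q, Lam=0.4, nf=3):
    """Richardson's momentum-space potential (GeV units),
    V(q^2) = -(12*pi/(33 - 2*nf)) / (q^2 * ln(1 + q^2/Lam^2)),
    which interpolates between linear confinement as q -> 0 and
    asymptotic freedom as q -> infinity.
    """
    q2 = q ** 2
    return -(12 * np.pi / (33 - 2 * nf)) / (q2 * np.log1p(q2 / Lam ** 2))

q = np.logspace(-2, 1, 5)   # momentum transfer, GeV
print(richardson_potential_q(q))
```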

  7. Fourier analysis of parallel block-Jacobi splitting with transport synthetic acceleration in two-dimensional geometry

    International Nuclear Information System (INIS)

    Rosa, M.; Warsa, J. S.; Chang, J. H.

    2007-01-01

    A Fourier analysis is conducted in two-dimensional (2D) Cartesian geometry for the discrete-ordinates (SN) approximation of the neutron transport problem solved with Richardson iteration (Source Iteration) and Richardson iteration preconditioned with Transport Synthetic Acceleration (TSA), using the Parallel Block-Jacobi (PBJ) algorithm. The results for the un-accelerated algorithm show that convergence of PBJ can degrade, leading in particular to stagnation of GMRES(m) in problems containing optically thin sub-domains. The results for the accelerated algorithm indicate that TSA can be used to efficiently precondition an iterative method in the optically thin case when implemented in the 'modified' version MTSA, in which only the scattering in the low order equations is reduced by some non-negative factor β<1. (authors)
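
    As background, Richardson iteration (the Source Iteration analog for a generic linear system Ax = b) is the simple fixed-point sweep sketched below; the transport-specific operators and the TSA preconditioner are not modeled here, and the example matrix is an assumption.

```python
import numpy as np

def richardson(A, b, omega, x0=None, tol=1e-10, maxit=1000):
    """Stationary Richardson iteration x_{k+1} = x_k + omega*(b - A x_k).
    Converges for SPD A whenever 0 < omega < 2/lambda_max(A); Source
    Iteration in transport has the same fixed-point structure.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x += omega * r
    return x, k

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
omega = 2.0 / np.linalg.eigvalsh(A).sum()   # 2/(lmin+lmax): the optimal choice
x, k = richardson(A, b, omega)
print(x, k)
```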

  8. Fourier analysis of parallel inexact Block-Jacobi splitting with transport synthetic acceleration in slab geometry

    International Nuclear Information System (INIS)

    Rosa, M.; Warsa, J. S.; Chang, J. H.

    2006-01-01

    A Fourier analysis is conducted for the discrete-ordinates (SN) approximation of the neutron transport problem solved with Richardson iteration (Source Iteration) and Richardson iteration preconditioned with Transport Synthetic Acceleration (TSA), using the Parallel Block-Jacobi (PBJ) algorithm. Both 'traditional' TSA (TTSA) and a 'modified' TSA (MTSA), in which only the scattering in the low order equations is reduced by some non-negative factor β < 1, are considered. The results for the un-accelerated algorithm show that convergence of the PBJ algorithm can degrade. The PBJ algorithm with TTSA can be effective provided the β parameter is properly tuned for a given scattering ratio c, but is potentially unstable. Compared to TTSA, MTSA is less sensitive to the choice of β, more effective for the same computational effort (c'), and it is unconditionally stable. (authors)

  9. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging

    International Nuclear Information System (INIS)

    Prato, M; Camera, A La; Bertero, M; Bonettini, S

    2013-01-01

    In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback–Leibler (KL) divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. Therefore, the method is similar to other proposed methods based on the Richardson–Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), which is the ratio between the peak value of an aberrated versus a perfect wavefront. Therefore, a typical application, but not a unique one, is to the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe in detail the algorithm and we recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets
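
    Since SGP here replaces the classical RL inner step, it may help to recall what a plain Richardson-Lucy update looks like. Below is a minimal 2-D sketch, assuming a known, centered PSF and using SciPy for the convolutions; it is the textbook multiplicative update, not the paper's blind SGP scheme.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Classical Richardson-Lucy update for Poisson-noise deconvolution:
        f <- f * H^T( g / (H f) ),   with H = convolution by the PSF.
    Non-negativity is preserved automatically by the multiplicative form.
    `image` is a float array; `psf` is a small, normalized, centered kernel.
    """
    f = np.full(image.shape, image.mean(), dtype=float)  # flat initial guess
    psf_flip = psf[::-1, ::-1]                 # adjoint of the convolution
    for _ in range(n_iter):
        est = fftconvolve(f, psf, mode="same")
        ratio = image / np.maximum(est, 1e-12) # guard against division by zero
        f *= fftconvolve(ratio, psf_flip, mode="same")
    return f
```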

  10. Wetted angle and towpath spacing for traveling gun systems with the PLONA-RL400 gun sprinkler model

    Directory of Open Access Journals (Sweden)

    Giuliani do Prado

    2007-08-01

    Values of flow rate, radius of throw and radial precipitation profile obtained in the laboratory with the PLONA-RL400 gun sprinkler were used in digital simulations of the water application uniformity provided by traveling gun machines operating with this sprinkler, under no-wind conditions, with different combinations of wetted angle and towpath spacing. Simulated uniformity values were arranged in three different clusters, each one corresponding to a different set of sprinkler operational conditions (nozzle versus service pressure) that results in the same geometrical shape (I, II or III) of the PLONA-RL400 radial precipitation profile. For the three profile shapes, it was observed that towpath spacings shorter than 50% of the sprinkler wetted diameter, or in the range between 80 and 90% of the wetted diameter, provide the highest uniformity values. For all profile shapes, the best water application uniformity was obtained with wetted angles between 180 and 210°.

  11. Studies on some environmental factors affecting birth and weaning weights in Akkeçi (White Goat) kids

    OpenAIRE

    KAHRAMAN, Züleyha

    1991-01-01

    In this study, the effects of dam age, sex, birth type and dam body weight on the birth and weaning weights of Akkeçi kids were investigated, together with the effect of the kids' birth weight on their weaning weight. The material of the study consisted of Akkeçi goats of various ages raised at the Department of Animal Science, Faculty of Agriculture, Ankara University, and the kids obtained from them. The significance tests showed that the kids' birth w...

  12. 105-DR Large Sodium Fire Facility Supplemental Information to the Hanford Facility Contingency Plan (DOE/RL-93-75)

    International Nuclear Information System (INIS)

    Edens, V.G.

    1998-05-01

    This document is a unit-specific contingency plan for the 105-DR Large Sodium Fire Facility and is intended to be used as a supplement to DOE/RL-93-75, Hanford Facility Contingency Plan (DOE-RL 1993). This unit-specific plan is to be used to demonstrate compliance with the contingency plan requirements of Washington Administrative Code (WAC) 173-303 for certain Resource Conservation and Recovery Act of 1976 (RCRA) waste management units. The LSFF occupied the former ventilation supply fan room and was established to provide a means of investigating fire and safety aspects associated with large sodium or other metal alkali fires. The unit was used to conduct experiments for studying the behavior of molten alkali metals and alkali metal fires. This unit had also been used for the storage and treatment of alkali metal dangerous waste. Additionally, the Fusion Safety Support Studies programs sponsored intermediate-size safety reaction tests in the LSFF with lithium and lithium-lead compounds. The LSFF, which is a RCRA site, was partially clean closed in 1995, as documented in 'Transfer of the 105-DR Large Sodium Fire Facility to Bechtel Hanford, Inc.' (BHI 1998). In summary, the 105-DR supply fan room (1720-DR) has been demolished, and a majority of the surrounding soils were clean-closed. The 117-DR Filter Building, 116-DR Exhaust Stack, 119-DR Sampling Building, and associated ducting/tunnels were not covered under this closure

  13. Diffusible gradients are out - an interview with Lewis Wolpert. Interviewed by Richardson, Michael K.

    Science.gov (United States)

    Wolpert, Lewis

    2009-01-01

    In 1969, Lewis Wolpert published a paper outlining his new concepts of "pattern formation" and "positional information". He had already published research on the mechanics of cell membranes in amoebae, and a series of classic studies of sea urchin gastrulation with Trygve Gustavson. Wolpert had presented his 1969 paper a year earlier at a Woods Hole conference, where it received a very hostile reception: "I wasn't asked back to America for many years!". But with Francis Crick lining up in support of diffusible morphogen gradients, positional information eventually became established as a guiding principle for research into biological pattern formation. It is now clear that pattern formation is much more complex than could possibly have been imagined in 1969. But Wolpert still believes in positional information, and regards intercalation during regeneration as its best supporting evidence. However, he and others doubt that diffusible morphogen gradients are a plausible mechanism: "Diffusible gradients are too messy", he says. Since his retirement, Lewis Wolpert has remained active as a theoretical biologist and continues to publish in leading journals. He has also campaigned for a greater public understanding of the stigma of depression. He was interviewed at home in London on July 26th, 2007 by Michael Richardson.

  14. Mudslide and/or animal attack are more plausible causes and circumstances of death for AL 288 ('Lucy'): A forensic anthropology analysis.

    Science.gov (United States)

    Charlier, Philippe; Coppens, Yves; Augias, Anaïs; Deo, Saudamini; Froesch, Philippe; Huynh-Charlier, Isabelle

    2018-01-01

    Following a global morphological and micro-CT scan examination of the original and cast of the skeleton of Australopithecus afarensis AL 288 ('Lucy'), Kappelman et al. have recently proposed a diagnosis of a fall from a significant height (a tree) as a cause of her death. According to topographical data from the discovery site, complete re-examination of a high-quality resin cast of the whole skeleton and forensic experience, we propose that the physical process of a vertical deceleration cannot be the only cause for her observed injuries. Two different factors were involved: rolling and multiple impacts in the context of a mudslide and an animal attack with bite marks, multi-focal fractures and violent movement of the body. It is important to consider a differential diagnosis of the observed fossil lesions because environmental factors should not be excluded in this ancient archaeological context as with any modern forensic anthropological case.

  15. Length-weight relationship of the freshwater blenny, Salaria fluviatilis (Asso, 1801), in 7 river basins of Turkey

    Directory of Open Access Journals (Sweden)

    Ali İlhan

    2015-12-01

    This study aimed to determine the length-weight relationship of the freshwater blenny (Salaria fluviatilis) collected from 7 river basins of Turkey. A total of 652 individuals collected from rivers in the Marmara, Küçük Menderes, Western Black Sea, Antalya, Eastern Mediterranean, Seyhan and Ceyhan basins were examined. Considering all individuals, the total length of the species in Turkish inland waters ranged from 2.0 to 12.9 cm, total weight ranged from 0.10 to 33.82 g, and the length-weight relationship parameters were calculated as a = 0.0135, b = 3.004, r2 = 0.986. Furthermore, the growth type was determined to be isometric in 5 of the basins, positive allometric in 1 basin and negative allometric in 1 basin
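
    As a worked check of the W = a·L^b relationship reported above, the sketch below recovers the parameters by a log-log least-squares fit on synthetic data generated around the published values; the noise model and sample are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.uniform(2.0, 12.9, 200)                          # total length, cm
W = 0.0135 * L ** 3.004 * rng.lognormal(0, 0.05, L.size) # total weight, g

# fit log W = log a + b log L by ordinary least squares
b, log_a = np.polyfit(np.log(L), np.log(W), 1)
print(f"a = {np.exp(log_a):.4f}, b = {b:.3f}")  # close to 0.0135 and 3.004
# b ~ 3 indicates isometric growth; b > 3 positive, b < 3 negative allometry
```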

  16. Effect of artificial hyperglycemia and dissection of the primary focus on metastatic spreading of carcinoma RL-67

    International Nuclear Information System (INIS)

    Istomin, Yu.P.; Furmanchuk, A.V.

    1988-01-01

    In experiments on C57Bl mice, the authors studied the effect of artificial hyperglycemia (AH), amputation of the extremities with tumors, and combinations of these treatments on the intensity of metastatic spreading of carcinoma RL-67 to the lungs. AH did not prove to intensify metastatic spreading when it was conducted on the 1st, 3rd, 5th, 7th, 9th or 11th day; the average number of metastases did not differ from that in the control group. AH conducted one day before amputation of the extremity with the tumor caused a more significant inhibition of metastatic spreading than a surgical intervention

  17. Fine-mapping of qRL6.1, a major QTL for root length of rice seedlings grown under a wide range of NH4+ concentrations in hydroponic conditions

    Science.gov (United States)

    Tamura, Wataru; Ebitani, Takeshi; Yano, Masahiro; Sato, Tadashi; Yamaya, Tomoyuki

    2010-01-01

    Root system development is an important target for improving yield in cereal crops. Active root systems that can take up nutrients more efficiently are essential for enhancing grain yield. In this study, we attempted to identify quantitative trait loci (QTL) involved in root system development by measuring root length of rice seedlings grown in hydroponic culture. Reliable growth conditions for estimating the root length were first established to renew nutrient solutions daily and supply NH4+ as a single nitrogen source. Thirty-eight chromosome segment substitution lines derived from a cross between ‘Koshihikari’, a japonica variety, and ‘Kasalath’, an indica variety, were used to detect QTL for seminal root length of seedlings grown in 5 or 500 μM NH4+. Eight chromosomal regions were found to be involved in root elongation. Among them, the most effective QTL was detected on a ‘Kasalath’ segment of SL-218, which was localized to the long-arm of chromosome 6. The ‘Kasalath’ allele at this QTL, qRL6.1, greatly promoted root elongation under all NH4+ concentrations tested. The genetic effect of this QTL was confirmed by analysis of the near-isogenic line (NIL) qRL6.1. The seminal root length of the NIL was 13.5–21.1% longer than that of ‘Koshihikari’ under different NH4+ concentrations. Toward our goal of applying qRL6.1 in a molecular breeding program to enhance rice yield, a candidate genomic region of qRL6.1 was delimited within a 337 kb region in the ‘Nipponbare’ genome by means of progeny testing of F2 plants/F3 lines derived from a cross between SL-218 and ‘Koshihikari’. Electronic supplementary material The online version of this article (doi:10.1007/s00122-010-1328-3) contains supplementary material, which is available to authorized users. PMID:20390245

  18. Fruit fly optimization based least square support vector regression for blind image restoration

    Science.gov (United States)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration requires explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for much real image processing, where the recovery needs to be a blind image restoration scenario. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Due to differences in the PSF and noise energy, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA, with the fitness function of FOA calculated from the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method can obtain a satisfactory restoration effect. Compared with BP neural network regression, the SVR method and the Lucy-Richardson algorithm, it speeds up the restoration rate and
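
    The abstract does not spell out FOA's update rule, but the canonical version searches via a "smell concentration" S = 1/distance around a moving swarm location. The sketch below applies that generic scheme to a stand-in two-parameter error surface; the parameter encoding and the fitness function are assumptions, not the paper's implementation.

```python
import numpy as np

def foa_minimize(error_fn, n_flies=20, n_iter=100, seed=0):
    """Generic fruit fly optimization sketch for two positive parameters
    (e.g. an LSSVR regularization constant and kernel width). Each
    parameter gets its own 2-D swarm; a fly at (x, y) encodes the
    candidate value S = 1/sqrt(x^2 + y^2) ("smell concentration"), and
    each swarm jumps to the fly with the lowest error every round.
    `error_fn` stands in for the paper's restoration-error fitness.
    """
    rng = np.random.default_rng(seed)
    loc = rng.uniform(0, 5, (2, 2))            # one swarm location per parameter
    best = (np.inf, None)
    for _ in range(n_iter):
        flies = loc[:, None, :] + rng.uniform(-1, 1, (2, n_flies, 2))
        S = 1.0 / (np.linalg.norm(flies, axis=2) + 1e-12)  # shape (2, n_flies)
        for i in range(n_flies):
            params = S[:, i]                    # candidate parameter pair
            err = error_fn(params)
            if err < best[0]:
                best = (err, params)
                loc = flies[:, i, :]            # the swarms follow the best fly
    return best

# toy error surface with its minimum near (2.0, 0.5)
err, params = foa_minimize(lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2)
print(err, params)
```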

  19. Leapfrog variants of iterative methods for linear algebra equations

    Science.gov (United States)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
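
    A concrete instance of the general second-order method considered here is the stationary iteration sketched below, with the classical optimal parameters for an SPD matrix; a leapfrog variant in the paper's sense would retain only the even iterates by composing two steps at a time. The example matrix is an assumption for illustration.

```python
import numpy as np

def second_order_richardson(A, b, lam_min, lam_max, x0=None, n_iter=200):
    """Stationary second-order Richardson iteration for SPD A:
        x_{k+1} = x_k + alpha*(b - A x_k) + beta*(x_k - x_{k-1}),
    with the classical optimal parameters
        alpha = 4/(sqrt(lam_max)+sqrt(lam_min))^2,
        beta  = ((sqrt(lam_max)-sqrt(lam_min))/(sqrt(lam_max)+sqrt(lam_min)))^2.
    """
    s, t = np.sqrt(lam_max), np.sqrt(lam_min)
    alpha = 4.0 / (s + t) ** 2
    beta = ((s - t) / (s + t)) ** 2
    x_prev = np.zeros_like(b) if x0 is None else x0.copy()
    x = x_prev + alpha * (b - A @ x_prev)        # plain first-order first step
    for _ in range(n_iter):
        x, x_prev = x + alpha * (b - A @ x) + beta * (x - x_prev), x
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
lmin, lmax = np.linalg.eigvalsh(A)
print(second_order_richardson(A, b, lmin, lmax))
```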

  20. Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.

    Science.gov (United States)

    Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C

    2012-01-01

    Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model; rather, it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance when mapping the monkey's neural states to robot actions (94%), and only needed to experience a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
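
    The incremental actor-critic updates described above reduce, in their simplest tabular form, to nudging both a value estimate and the action preferences by the same TD error. The sketch below shows that skeleton on a toy two-action task; the states, transitions, and reward rule are illustrative assumptions, and the neural decoding front end is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
V = np.zeros(n_states)                       # critic: state values
H = np.zeros((n_states, n_actions))          # actor: action preferences
alpha_c, alpha_a, gamma = 0.1, 0.05, 0.95

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

s = 0
for t in range(2000):
    p = softmax(H[s])
    a = rng.choice(n_actions, p=p)
    s_next = rng.integers(n_states)          # toy random transition
    r = 1.0 if a == (s % n_actions) else 0.0 # toy reward rule per state
    delta = r + gamma * V[s_next] - V[s]     # TD error: the basic feedback signal
    V[s] += alpha_c * delta                  # critic update
    H[s, a] += alpha_a * delta * (1 - p[a])  # simplified policy-gradient step
    s = s_next

print(H.argmax(axis=1))  # greedy action per state after learning
```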

  1. The problem of human “emancipation” in the cinema of Ken Loach, the brave agent of “struggle and resistance”: An analysis of the film “Land and Freedom” with the psychoanalytic method

    OpenAIRE

    Kaplan, Neşe; Kaplan, Ali Barış

    2011-01-01

    Our aim in this study is to set out the general characteristics of Ken Loach's cinema and, specifically, to analyze the film “Land and Freedom” with the psychoanalytic method. Ken Loach, who debates the problems of “class struggle” and “individual freedom” in his films, essentially criticizes modern society in the process of globalization with “Land and Freedom”. Telling a story of the recent past, the film is not nostalgic; it offers a dynamic narrative that also carries a message for the present....

  2. Estimates of gradient Richardson numbers from vertically smoothed data in the Gulf Stream region

    Directory of Open Access Journals (Sweden)

    Paul van Gastel

    2004-12-01

    We use several hydrographic and velocity sections crossing the Gulf Stream to examine how the gradient Richardson number, Ri, is modified due to both vertical smoothing of the hydrographic and/or velocity fields and the assumption of parallel or geostrophic flow. Vertical smoothing of the original (25 m interval) velocity field leads to a substantial increase in the mean Ri value, of the same order as the smoothing factor, while its standard deviation remains approximately constant. This contrasts with very minor changes in the distribution of Ri values due to vertical smoothing of the density field over similar lengths. Mean geostrophic Ri values always remain above the actual unsmoothed Ri values, commonly one to two orders of magnitude larger, but the standard deviation is typically a factor of five larger in geostrophic than in actual Ri values. At high vertical wavenumbers (length scales below 3 m) the geostrophic shear only leads to near-critical conditions in already rather mixed regions. At these scales, hence, the major contributor to shear mixing is likely to come from the interaction of the background flow with internal waves. At low vertical wavenumbers (scales above 25 m) the ageostrophic motions provide the main source of shear, with cross-stream movements having a minor but non-negligible contribution. These large-scale motions may be associated with local accelerations taking place during frontogenetic phases of meanders.
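
    For reference, the gradient Richardson number used throughout is Ri = N²/S². The sketch below computes it on a 25 m grid from density and velocity profiles; the profiles are toy assumptions, and standard values of g and a reference density are used.

```python
import numpy as np

def gradient_richardson(z, rho, u, v, rho0=1025.0, g=9.81):
    """Gradient Richardson number Ri = N^2 / S^2 on mid-levels, with
    N^2 = -(g/rho0) * d(rho)/dz   (z positive upward) and
    S^2 = (du/dz)^2 + (dv/dz)^2.
    Ri < 1/4 flags possible shear instability; smoothing the profiles
    before differencing raises Ri, as the paper discusses.
    """
    dz = np.diff(z)
    N2 = -(g / rho0) * np.diff(rho) / dz
    S2 = (np.diff(u) / dz) ** 2 + (np.diff(v) / dz) ** 2
    return N2 / np.maximum(S2, 1e-12)

z = np.arange(0.0, -500.0, -25.0)      # depth grid, m (25 m spacing)
rho = 1025 + 0.002 * -z                # toy stable stratification
u = 0.5 * np.exp(z / 200.0)            # toy sheared along-stream flow
v = 0.0 * z
print(gradient_richardson(z, rho, u, v)[:5])
```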

  3. 'Creature' についての共時的・通時的考察 -Richardson の書簡体小説を中心として-

    OpenAIRE

    脇本, 恭子

    2008-01-01

    The present paper aims at examining the word 'creature' from both synchronic and diachronic perspectives. The first section begins by providing several definitions of 'creature,' along with its etymology, with reference to such major dictionaries as the Oxford English Dictionary and Johnson's A Dictionary of the English Language (1755). In the subsequent three sections, the frequency and the collocation of this word are investigated through Richardson's Pamela and Clarissa as our main linguisti...

  4. Tunnel Ventilation Control Using Reinforcement Learning Methodology

    Science.gov (United States)

    Chu, Baeksuk; Kim, Dongnam; Hong, Daehie; Park, Jooyoung; Chung, Jin Taek; Kim, Tae-Hyung

    The main purpose of a tunnel ventilation system is to maintain the CO pollutant concentration and VI (visibility index) at an adequate level to provide drivers with a comfortable and safe driving environment. Moreover, it is necessary to minimize the power consumed to operate the ventilation system. To achieve these objectives, the control algorithm used in this research is the reinforcement learning (RL) method. RL is goal-directed learning of a mapping from situations to actions without relying on exemplary supervision or complete models of the environment. The goal of RL is to maximize a reward, which is an evaluative feedback from the environment. In constructing the reward of the tunnel ventilation system, the two objectives listed above are included: maintaining an adequate level of pollutants and minimizing power consumption. An RL algorithm based on the actor-critic architecture and a gradient-following algorithm is adapted to the tunnel ventilation system. Simulation results performed with real data collected from an existing tunnel ventilation system, together with real experimental verification, are provided in this paper. It is confirmed that with the suggested controller, the pollutant level inside the tunnel was well maintained under the allowable limit, and energy consumption was improved compared to the conventional control scheme.
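
    A reward of the kind described, trading off pollutant violations against fan power, might look like the hedged sketch below. The thresholds, weights, and linear form are illustrative assumptions, not the paper's values.

```python
def ventilation_reward(co_ppm, vi, power_kw,
                       co_limit=25.0, vi_limit=0.5,
                       w_co=1.0, w_vi=1.0, w_power=0.01):
    """Scalar feedback for one control interval: penalize CO above its
    limit and VI below its limit (only violations cost anything), plus a
    small proportional penalty on fan power consumption.
    """
    co_penalty = w_co * max(0.0, co_ppm - co_limit)
    vi_penalty = w_vi * max(0.0, vi_limit - vi)
    return -(co_penalty + vi_penalty + w_power * power_kw)

print(ventilation_reward(co_ppm=20.0, vi=0.6, power_kw=300.0))  # -3.0
```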

  5. Utilizing state-of-art NeuroES and GPGPU to optimize Mario AI

    OpenAIRE

    Lövgren, Hans

    2014-01-01

    Context. Reinforcement Learning (RL) is a time-consuming effort that requires a lot of computational power as well. There are mainly two approaches to improving RL efficiency: the theoretical mathematics and algorithmic approach, or the practical implementation approach. In this study, the approaches are combined in an attempt to reduce time consumption.

  6. Safety evaluation report related to the operation of St. Lucie Plant, Unit No. 2, Docket No. 50-389. Florida Power and Light Company, Orlando Utilities Commission of the City of Orlando, Florida

    International Nuclear Information System (INIS)

    1982-09-01

    On October 9, 1981, the Nuclear Regulatory Commission (NRC) staff issued a safety evaluation report (SER) related to the operation of St. Lucie Plant Unit 2. Supplement No. 1 (SSER 1) to the SER was issued in December 1981. In the SER and SSER 1, the staff identified certain issues where either further information or additional staff effort was necessary to complete the review. The purpose of this supplement is to update the SER by providing (1) an evaluation of additional information submitted by the applicant since SSER 1 was issued and (2) an evaluation of the matters the staff had under review when SSER 1 was issued

  7. Limb Bone Structural Proportions and Locomotor Behavior in A.L. 288-1 ("Lucy").

    Directory of Open Access Journals (Sweden)

    Christopher B Ruff

    While there is broad agreement that early hominins practiced some form of terrestrial bipedality, there is also evidence that arboreal behavior remained a part of the locomotor repertoire in some taxa, and that bipedal locomotion may not have been identical to that of modern humans. It has been difficult to evaluate such evidence, however, because of the possibility that early hominins retained primitive traits (such as relatively long upper limbs) of little contemporaneous adaptive significance. Here we examine bone structural properties of the femur and humerus in Australopithecus afarensis A.L. 288-1 ("Lucy", 3.2 Myr) that are known to be developmentally plastic, and compare them with other early hominins, modern humans, and modern chimpanzees. Cross-sectional images were obtained from micro-CT scans of the original specimens and used to derive section properties of the diaphyses, as well as superior and inferior cortical thicknesses of the femoral neck. A.L. 288-1 shows femoral/humeral diaphyseal strength proportions that are intermediate between those of modern humans and chimpanzees, indicating more mechanical loading of the forelimb than in modern humans, and by implication, a significant arboreal locomotor component. Several features of the proximal femur in A.L. 288-1 and other australopiths, including relative femoral head size, distribution of cortical bone in the femoral neck, and cross-sectional shape of the proximal shaft, support the inference of a bipedal gait pattern that differed slightly from that of modern humans, involving more lateral deviation of the body center of mass over the support limb, which would have entailed increased cost of terrestrial locomotion. There is also evidence consistent with increased muscular strength among australopiths in both the forelimb and hind limb, possibly reflecting metabolic trade-offs between muscle and brain development during hominin evolution. Together these findings imply

  8. Path-finding in real and simulated rats

    DEFF Research Database (Denmark)

    Tamosiunaite, Minija; Ainge, James; Kulvicius, Tomas

    2008-01-01

    A large body of experimental evidence suggests that the hippocampal place field system is involved in reward-based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space in an RL algorithm to the place-field map in a rat... Without affecting the path characteristic, two additional mechanisms are implemented: a gradual drop of the learned weights (weight decay) and a path-length limitation, which prevents learning if the reward is not found after some expected time. Both mechanisms limit the memory of the system and thereby... Since convergence of RL algorithms is also influenced by the state-space characteristics, different PF sizes and densities, leading to different degrees of overlap, were also investigated. The model rat learns to find a reward opposite to its starting point. We observed that the combination of biased straight...

  9. Cycloheximide and 4-OH-TEMPO suppress chloramphenicol-induced apoptosis in RL-34 cells via the suppression of the formation of megamitochondria.

    Science.gov (United States)

    Karbowski, M; Kurono, C; Wozniak, M; Ostrowski, M; Teranishi, M; Soji, T; Wakabayashi, T

    1999-02-04

    Toxic effects of chloramphenicol, an antibiotic inhibitor of mitochondrial protein synthesis, on the rat-liver-derived RL-34 cell line were completely blocked by combined treatment with substances endowed with direct or indirect antioxidant properties. A stable nitroxide free-radical scavenger, 4-hydroxy-2,2,6,6-tetramethylpiperidine-1-oxyl, and a protein synthesis inhibitor, cycloheximide, suppressed in a similar manner the following manifestations of chloramphenicol cytotoxicity: (1) the oxidative stress state, as evidenced by FACS analysis of cells loaded with carboxy-dichlorodihydrofluorescein diacetate and MitoTracker CMTH2MRos; (2) megamitochondria (MG) formation, detected by staining of mitochondria with MitoTracker CMXRos under laser confocal microscopy and electron microscopy; (3) apoptotic changes of the cell, detected by phase-contrast microscopy, DNA laddering analysis and cell-cycle analysis. Since increases in ROS generation in chloramphenicol-treated cells were the first sign of chloramphenicol toxicity, we assume that the oxidative stress state is a mediator of the above-described alterations of RL-34 cells, including MG formation. Pretreatment of cells with cycloheximide or 4-hydroxy-2,2,6,6-tetramethylpiperidine-1-oxyl, which is known to localize in mitochondria, inhibited the megamitochondria formation and the succeeding apoptotic changes of the cell. The protective effects of cycloheximide, which enhances the expression of Bcl-2 protein, may further confirm our hypothesis that megamitochondria formation is a cellular response to increased ROS generation, and raise the possibility that the antiapoptotic action of the drug is exerted via protection of mitochondrial functions.

  10. Final environmental statement related to the operation of St. Lucie Plant, Unit No. 2. Docket No. 50-389, Florida Power and Light Company, Orlando Utilities Commission of the City of Orlando, Florida

    International Nuclear Information System (INIS)

    1982-04-01

    This final environmental statement was prepared by the US Nuclear Regulatory Commission (NRC), Office of Nuclear Reactor Regulation (the staff) in accordance with the Commission's Regulations, set forth in 10 CFR Part 51, which implement the requirements of the National Environmental Policy Act of 1969 (NEPA). Sections related to the aquatic environment were prepared in cooperation with the US Environmental Protection Agency, Region IV. This statement reviews the impact of operation of the St. Lucie Plant, Unit 2. Assessments that are found in this statement supplement those described in the Final Environmental Statement (FES-CP) that was issued in May 1974 in support of issuance of a construction permit for the unit

  11. Human Rights and Freedoms in Albania under the Enver Hoxha Period (1945-1985)

    Directory of Open Access Journals (Sweden)

    Ali ÖZKAN

    2012-09-01

    Human rights are basic rights that people have possessed since the first days of humanity. According to Jean-Jacques Rousseau, giving up freedom means giving up all the values of human beings; such a renunciation of human rights and freedom is almost impossible to conceive. The Universal Declaration of Human Rights and the European Convention on Human Rights both comprise rights and freedoms in general: the prohibition of torture, slavery and forced labor, liberty, the right to a fair trial, the legality of penalties, private and family life, and freedom of opinion, speech, religion and conscience. In this essay, Enver Hoxha and his period are evaluated across ten different areas of human rights and freedoms. Comparing human rights and freedoms under Enver Hoxha to those in the other Balkan and Eastern European states of the time, little difference is seen. Enver Hoxha rescued his country from the invaders and ruled it for forty years without interruption. What makes him different is that he never accepted human rights and freedoms, although he was among the most educated and enlightened of the world's dictators at that time. The main reason for this is that Enver Hoxha wanted to limit human rights and freedoms or to control all these rights. Additionally, it is well known that Marxist-Leninist ideology and Stalinist thought form a structure in which freedom and human rights cannot be found.

  12. A novel data processing technique for image reconstruction of penumbral imaging

    Science.gov (United States)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    The CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with other traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing method was used for the first time independently of the point spread function of the image diagnostic system. In this way, the technical obstacles encountered in traditional coded-pinhole image processing, caused by the uncertainty of the point spread function of the image diagnostic system, were overcome. Based on this theoretical study, a simulation of penumbral imaging and image reconstruction was carried out, providing fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction, providing a fairly good reconstruction result.
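
    Of the traditional baselines named above, the Wiener filter is the easiest to state: in the frequency domain it is F = H*G/(|H|² + K). Below is a minimal sketch, assuming a known, centered PSF of the same shape as the image and a constant noise-to-signal ratio K; it is the generic filter, not the paper's CT-style method.

```python
import numpy as np

def wiener_deconvolve(g, psf, K=0.01):
    """Frequency-domain Wiener filter: F = conj(H)*G / (|H|^2 + K).
    `psf` must be the same shape as the blurred image `g` and centered,
    so ifftshift moves its peak to the array origin before the FFT.
    Unlike the paper's approach, the PSF must be known a priori.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(g)
    F = np.conj(H) * G / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(F))
```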

  13. Nonradioactive Dangerous Waste Landfill supplemental information to the Hanford Facility Contingency Plan (DOE/RL-93-75)

    International Nuclear Information System (INIS)

    Ingle, S.J.

    1996-05-01

    This document is a unit-specific contingency plan for the Nonradioactive Dangerous Waste Landfill and is intended to be used as a supplement to DOE/RL-93-75, 'Hanford Facility Contingency Plan.' This unit-specific plan is to be used to demonstrate compliance with the contingency plan requirements of the Washington Administrative Code, Chapter 173-303, for certain Resource Conservation and Recovery Act of 1976 waste management units. The Nonradioactive Dangerous Waste Landfill (located approximately 3.5 miles southeast of the 200 East Area at the Hanford Site) was used for disposal of nonradioactive dangerous waste from January 1975 to May 1985. Currently, no dangerous waste streams are disposed of in the Nonradioactive Dangerous Waste Landfill, and dangerous waste management activities are no longer required at the landfill. The landfill does not present a significant hazard to adjacent units, personnel, or the environment. It is unlikely that incidents presenting hazards to public health or the environment would occur at the Nonradioactive Dangerous Waste Landfill

  14. Off-Policy Reinforcement Learning for Synchronization in Multiagent Graphical Games.

    Science.gov (United States)

    Li, Jinna; Modares, Hamidreza; Chai, Tianyou; Lewis, Frank L; Xie, Lihua

    2017-10-01

    This paper develops an off-policy reinforcement learning (RL) algorithm to solve the optimal synchronization of multiagent systems. This is accomplished using the framework of graphical games. In contrast to traditional control protocols, which require complete knowledge of agent dynamics, the proposed off-policy RL algorithm is a model-free approach, in that it solves the optimal synchronization problem without any knowledge of the agent dynamics. A prescribed control policy, called the behavior policy, is applied to each agent to generate and collect data for learning. An off-policy Bellman equation is derived for each agent to learn the value function for the policy under evaluation, called the target policy, and simultaneously find an improved policy. Actor and critic neural networks, along with a least-squares approach, are employed to approximate target control policies and value functions using the data generated by applying the prescribed behavior policies. Finally, an off-policy RL algorithm is presented that is implemented in real time and gives the approximate optimal control policy for each agent using only measured data. It is shown that the optimal distributed policies found by the proposed algorithm satisfy the global Nash equilibrium and synchronize all agents to the leader. Simulation results illustrate the effectiveness of the proposed method.
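
    The defining feature here is off-policy learning: data come from a behavior policy while a different target policy is evaluated and improved. As a generic illustration only, the sketch below uses tabular Q-learning on a toy chain (a uniform-random behavior policy, a greedy target policy); it is not the paper's actor-critic graphical-game scheme, and the environment is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2          # chain: action 1 moves right, action 0 left
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

s = 0
for _ in range(10000):
    a = rng.integers(n_actions)                      # behavior policy: random
    s_next = min(n_states - 1, s + 1) if a else max(0, s - 1)
    r = 1.0 if s_next == n_states - 1 else 0.0       # reward at the right end
    # target policy is greedy: max over next actions, whatever behavior did
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = 0 if s_next == n_states - 1 else s_next      # restart at the goal

print(Q.argmax(axis=1))  # learned target policy: move right in non-goal states
```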

  15. Evaluation of Release-05 GRACE time-variable gravity coefficients over the ocean

    Directory of Open Access Journals (Sweden)

    D. P. Chambers

    2012-10-01

    The latest release of GRACE (Gravity Recovery and Climate Experiment) gravity field coefficients (Release-05, or RL05) are evaluated for ocean applications. Data have been processed using the current methodology for Release-04 (RL04) coefficients, and have been compared to output from two different ocean models. Results indicate that RL05 data from the three Science Data Centers – the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) – are more consistent among themselves than the previous RL04 data. Moreover, the variance of residuals with the output of an ocean model is 50–60% lower for RL05 data than for RL04 data. A more optimized destriping algorithm is also tested, which improves the results slightly. By comparing the GRACE maps with two different ocean models, we can better estimate the uncertainty in the RL05 maps. We find the standard error to be about 1 cm (equivalent water thickness) in the low- and mid-latitudes, and between 1.5 and 2 cm in the polar and subpolar oceans, which is comparable to the estimated uncertainty for the output from the ocean models.

  16. Estimating Planetary Boundary Layer Heights from NOAA Profiler Network Wind Profiler Data

    Science.gov (United States)

    Molod, Andrea M.; Salmun, H.; Dempsey, M

    2015-01-01

    An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States. Unlike previous studies, the present algorithm has been applied to a long record of publicly available wind profiler signal backscatter data. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky reanalysis based estimates show that the wind profiler PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, and are generally higher than both the Richardson number based and reanalysis PBL heights, resulting in a smaller clear-cloudy condition difference. The algorithm presented here was shown to provide a reliable summertime climatology of daytime hourly PBL heights throughout the central United States.
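
    The Richardson-number-based estimates used above for comparison typically take the PBL height as the lowest level where a bulk Richardson number exceeds a critical value near 0.25. The sketch below implements that standard recipe; the profiles, the formula's surface-based form, and the threshold are conventional assumptions, not the paper's algorithm.

```python
import numpy as np

def pbl_height_bulk_ri(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """PBL height as the lowest level where the bulk Richardson number
        Ri_b(z) = (g/theta_v[0]) * (theta_v - theta_v[0]) * z / (u^2 + v^2)
    first exceeds ri_crit. Inputs are profiles from the surface upward.
    """
    wind2 = np.maximum(u ** 2 + v ** 2, 1e-6)
    ri_b = (g / theta_v[0]) * (theta_v - theta_v[0]) * z / wind2
    above = np.nonzero(ri_b > ri_crit)[0]
    return z[above[0]] if above.size else z[-1]

z = np.arange(10.0, 3000.0, 50.0)                      # heights, m
theta_v = 300.0 + 0.004 * np.maximum(z - 800.0, 0.0)   # mixed layer to ~800 m
u = 5.0 + 0.002 * z                                    # toy wind profile
v = np.zeros_like(z)
print(pbl_height_bulk_ri(z, theta_v, u, v))            # ~900 m for these profiles
```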

  17. Comparison of regional brain atrophy and cognitive impairment between pure akinesia with gait freezing and Richardson's syndrome

    Science.gov (United States)

    Hong, Jin Yong; Yun, Hyuk Jin; Sunwoo, Mun Kyung; Ham, Jee Hyun; Lee, Jong-Min; Sohn, Young H.; Lee, Phil Hyu

    2015-01-01

    Pure akinesia with gait freezing (PAGF) is considered a clinical phenotype of progressive supranuclear palsy. Brain atrophy and cognitive deficits in PAGF are expected to be less prominent than in classical Richardson's syndrome (RS), but this hypothesis has not been explored yet. We reviewed the medical records of 28 patients with probable RS, 19 with PAGF, and 29 healthy controls, and compared cortical thickness, subcortical gray matter volume, and neuropsychological performance among the three groups. Patients with PAGF had thinner cortices in frontal, inferior parietal, and temporal areas compared with controls; however, areas of cortical thinning in PAGF patients were less extensive than those in RS patients. In PAGF patients, hippocampal and thalamic volumes were also smaller than in controls, whereas subcortical gray matter volumes in PAGF and RS patients were comparable. In a comparison of neuropsychological tests, PAGF patients had better cognitive performance in executive function, visual memory, and visuospatial function than RS patients. These results demonstrate that cognitive impairment, cortical thinning, and subcortical gray matter atrophy in PAGF patients resemble those in RS patients, though the severity of cortical thinning and cognitive dysfunction is milder. Our results suggest that PAGF and RS may share the same pathology, but that it appears to affect a smaller proportion of the cortex in PAGF. PMID:26483680

  18. Elevator Group Supervisory Control System Using Genetic Network Programming with Macro Nodes and Reinforcement Learning

    Science.gov (United States)

    Zhou, Jin; Yu, Lu; Mabu, Shingo; Hirasawa, Kotaro; Hu, Jinglu; Markon, Sandor

    Elevator Group Supervisory Control System (EGSCS) is a very large scale stochastic dynamic optimization problem. Due to its vast state space, significant uncertainty, and numerous resource constraints such as finite car capacities and registered hall/car calls, it is hard to manage EGSCS using conventional control methods. Recently, many solutions for EGSCS using Artificial Intelligence (AI) technologies have been reported. Genetic Network Programming (GNP), proposed as a new evolutionary computation method several years ago, has also proved to be efficient when applied to the EGSCS problem. In this paper, we propose an extended algorithm for EGSCS by introducing Reinforcement Learning (RL) into the GNP framework, and an improvement of EGSCS performance is expected, since the efficiency of GNP with RL has been clarified in other studies such as the tile-world problem. Simulation tests using traffic flows in a typical office building have been made, and the results show an actual improvement of EGSCS performance compared to the algorithms using original GNP and conventional control methods. Furthermore, as a further study, an importance-weight optimization algorithm is employed based on GNP with RL, and its efficiency is also verified by the better performance obtained.

  19. Bio-robots automatic navigation with graded electric reward stimulation based on Reinforcement Learning.

    Science.gov (United States)

    Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Bio-robots based on brain-computer interfaces (BCI) suffer from a failure to consider the characteristics of the animals in navigation. This paper proposes a new method for bio-robots' automatic navigation, combining a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of animals. Given a graded electrical reward, the animal, e.g. the rat, intends to seek the maximum reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge on an optimal route through co-learning. This work provides significant inspiration for the practical development of bio-robots' navigation with hybrid intelligence.

  20. Richardson constant and electrostatics in transfer-free CVD grown few-layer MoS2/graphene barristor with Schottky barrier modulation > 0.6 eV

    Science.gov (United States)

    Jahangir, Ifat; Uddin, M. Ahsan; Singh, Amol K.; Koley, Goutam; Chandrashekhar, M. V. S.

    2017-10-01

    We demonstrate a large area MoS2/graphene barristor, using a transfer-free method for producing 3-5 monolayer (ML) thick MoS2. The gate-controlled diodes show good rectification, with an ON/OFF ratio of ~10^3. A temperature-dependent back-gated study reveals Richardson's coefficient to be 80.3 ± 18.4 A/cm2/K2 and a mean electron effective mass of (0.66 ± 0.15)m0. Capacitance- and current-based measurements show the effective barrier height to vary over a large range of 0.24-0.91 eV due to incomplete field screening through the thin MoS2. Finally, we show that this barristor has a significant visible photoresponse, scaling with the Schottky barrier height. A response time of ~10 s suggests that photoconductive gain is present in this device, resulting in high external quantum efficiency.
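
    The analysis implied by a temperature-dependent Richardson coefficient is the standard Richardson plot: for thermionic emission J = A*·T²·exp(-qΦ_B/kT), fitting ln(J/T²) against 1/T gives the barrier height from the slope and A* from the intercept. The sketch below runs that fit on synthetic data; the values are assumptions, not the paper's measurements.

```python
import numpy as np

k_B = 8.617e-5                    # Boltzmann constant, eV/K

T = np.linspace(250.0, 400.0, 8)                        # temperatures, K
A_true, phi_true = 80.3, 0.45                           # A/cm^2/K^2, eV (assumed)
J = A_true * T ** 2 * np.exp(-phi_true / (k_B * T))     # saturation current density

# Richardson plot: ln(J/T^2) = ln(A*) - phi_B/(k_B * T)
slope, intercept = np.polyfit(1.0 / T, np.log(J / T ** 2), 1)
print(f"phi_B = {-slope * k_B:.3f} eV, A* = {np.exp(intercept):.1f} A/cm2/K2")
```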

  1. The history of NATO TNF policy: The role of studies, analysis and exercises conference proceedings. Volume 3: Papers by Gen. Robert C. Richardson III (Ret.)

    Energy Technology Data Exchange (ETDEWEB)

    Rinne, R.L. [ed.

    1994-02-01

    This conference was organized to study and analyze the role of simulation, analysis, modeling, and exercises in the history of NATO policy. The premise was not that the results of past studies will apply to future policy, but rather that understanding what influenced the decision process-and how-would be of value. The structure of the conference was built around discussion panels. The panels were augmented by a series of papers and presentations focusing on particular TNF events, issues, studies, or exercises. The conference proceedings consist of three volumes. Volume 1 contains the conference introduction, agenda, biographical sketches of principal participants, and analytical summary of the presentations and discussion panels. Volume 2 contains a short introduction and the papers and presentations from the conference. This volume contains selected papers by Brig. Gen. Robert C. Richardson III (Ret.).

  2. Fine structure of the retinal pigment epithelium and cones of the Antarctic fish Notothenia coriiceps Richardson in light and dark conditions

    Directory of Open Access Journals (Sweden)

    Lucélia Donatti

    2007-03-01

    The Antarctic fish Notothenia coriiceps Richardson, 1844 lives in an environment of daily and annual photic variation, and retina cells have to adjust morphologically to environmental luminosity. After seven-day dark or seven-day light acclimation of two groups of fish, retinas were extracted and processed for light and transmission electron microscopy. In the seven-day dark-adapted retina, pigment epithelium melanin granules were aggregated at the basal region of the cells, and macrophages were seen adjacent to the apical microvilli, between the photoreceptors. In the seven-day light-adapted epithelium, melanin granules were inside the apical microvilli of the epithelial cells and macrophages were absent. The supranuclear region of cones adapted to seven days of light had less electron-dense cytoplasm and an endoplasmic reticulum with broad tubules. The mitochondria in the internal segment of cones adapted to seven days of light were larger and less electron dense. The differences in the morphology of cones and pigment epithelial cells indicate that N. coriiceps has retinal structural adjustments presumably optimizing vision in different light conditions.

  3. A Survey of Collective Intelligence

    Science.gov (United States)

    Wolpert, David H.; Tumer, Kagan

    1999-01-01

    This chapter presents the science of "COllective INtelligence" (COIN). A COIN is a large multi-agent system where: i) the agents each run reinforcement learning (RL) algorithms; ii) there is little to no centralized communication or control; iii) there is a provided world utility function that rates the possible histories of the full system. The conventional approach to designing large distributed systems to optimize a world utility does not use agents running RL algorithms. Rather, that approach begins with explicit modeling of the overall system's dynamics, followed by detailed hand-tuning of the interactions between the components to ensure that they "cooperate" as far as the world utility is concerned. This approach is labor-intensive, often results in highly non-robust systems, and usually results in design techniques that have limited applicability. In contrast, with COINs we wish to solve the system design problems implicitly, via the 'adaptive' character of the RL algorithms of each of the agents. This COIN approach introduces an entirely new, profound design problem: assuming the RL algorithms are able to achieve high rewards, what reward functions for the individual agents will, when pursued by those agents, result in high world utility? In other words, what reward functions will best ensure that we do not have phenomena like the tragedy of the commons, or Braess's paradox? Although still very young, the science of COINs has already resulted in successes in artificial domains, in particular in packet routing, the leader-follower problem, and in variants of Arthur's "El Farol bar problem". It is expected that as it matures, not only will COIN science greatly expand the range of tasks addressable by human engineers, but it will also provide much insight into already established scientific fields, such as economics, game theory, or population biology.

  4. Distinct prediction errors in mesostriatal circuits of the human brain mediate learning about the values of both states and actions: evidence from high-resolution fMRI.

    Science.gov (United States)

    Colas, Jaron T; Pauli, Wolfgang M; Larsen, Tobias; Tyszka, J Michael; O'Doherty, John P

    2017-10-01

    Prediction-error signals consistent with formal models of "reinforcement learning" (RL) have repeatedly been found within dopaminergic nuclei of the midbrain and dopaminoceptive areas of the striatum. However, the precise form of the RL algorithms implemented in the human brain is not yet well determined. Here, we created a novel paradigm optimized to dissociate the subtypes of reward-prediction errors that function as the key computational signatures of two distinct classes of RL models-namely, "actor/critic" models and action-value-learning models (e.g., the Q-learning model). The state-value-prediction error (SVPE), which is independent of actions, is a hallmark of the actor/critic architecture, whereas the action-value-prediction error (AVPE) is the distinguishing feature of action-value-learning algorithms. To test for the presence of these prediction-error signals in the brain, we scanned human participants with a high-resolution functional magnetic-resonance imaging (fMRI) protocol optimized to enable measurement of neural activity in the dopaminergic midbrain as well as the striatal areas to which it projects. In keeping with the actor/critic model, the SVPE signal was detected in the substantia nigra. The SVPE was also clearly present in both the ventral striatum and the dorsal striatum. However, alongside these purely state-value-based computations we also found evidence for AVPE signals throughout the striatum. These high-resolution fMRI findings suggest that model-free aspects of reward learning in humans can be explained algorithmically with RL in terms of an actor/critic mechanism operating in parallel with a system for more direct action-value learning.
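
    The computational distinction at stake can be written down compactly: the critic's state-value prediction error depends only on state values, while the action-value prediction error depends on the value of the chosen action. A minimal sketch, using generic TD(0) and Q-learning forms with made-up values rather than the authors' fitted model:

    ```python
    # State-value prediction error (actor/critic style): the critic evaluates states.
    #   SVPE = r + gamma * V[s_next] - V[s]
    # Action-value prediction error (Q-learning style): values attach to (s, a) pairs.
    #   AVPE = r + gamma * max(Q[s_next]) - Q[s][a]

    gamma = 0.9
    V = {"s0": 0.2, "s1": 0.5}
    Q = {"s0": {"left": 0.1, "right": 0.3}, "s1": {"left": 0.4, "right": 0.6}}

    s, a, r, s_next = "s0", "right", 1.0, "s1"

    svpe = r + gamma * V[s_next] - V[s]                  # independent of the action
    avpe = r + gamma * max(Q[s_next].values()) - Q[s][a]

    print(svpe, avpe)
    ```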

  5. The production of arabitol by a novel plant yeast isolate Candida parapsilosis 27RL-4

    Directory of Open Access Journals (Sweden)

    Kordowska-Wiater Monika

    2017-10-01

    Full Text Available Polyalcohol arabitol can be used in the food and pharmaceutical industries as a natural sweetener, a dental caries reducer, and a texturing agent. Environmental samples were screened to isolate effective yeast producers of arabitol. The most promising isolate, 27RL-4, obtained from raspberry leaves, was identified genetically and biochemically as Candida parapsilosis. It secreted 10.42-10.72 g l-1 of product from 20 g l-1 of L-arabinose with a yield of 0.51-0.53 g g-1 at 28°C and a rotational speed of 150 rpm. Batch cultures showed that the optimal pH value for arabitol production was 5.5. High yields and productivities of arabitol were obtained during incubation of the yeast at 200 rpm, or at 32°C, but the concentrations of the polyol did not exceed 10 g l-1. In a modified medium, with reduced amounts of nitrogen compounds and pH 5.5-6.5, a lower yeast biomass produced a similar concentration of arabitol, suggesting a higher efficiency of the yeast cells. This strain also produced arabitol from glucose, with much lower yields. The search for new strains able to successfully produce arabitol is important for allowing the utilization of sugars abundant in plant biomass.

  6. Immunologic basis of resistance to RL male 1 induced by immunoselected Thy-1.2 negative variants

    International Nuclear Information System (INIS)

    Buxbaum, J.N.; Basch, R.S.

    1980-01-01

    Variant cell lines that have lost the Thy-1 antigen have a reduced capacity to induce tumors in syngeneic recipients when compared to Thy-1 positive clones. The negative variants are cloned, cultured cells obtained from the Thy-1.2(theta) positive BALB/c lymphoma RL male 1 in a single-step immunoselection procedure. The reduction appears to be related to an alteration in the host response to the tumor, since both the variant and parental cells induce tumors equally well in irradiated mice. Males are much more susceptible to the inoculated tumor cells, suggesting that the relevant response is restricted to females. A majority of female animals that survive challenge with the variant do not allow growth of the parental tumor when they are injected with a quantity of cells that is uniformly fatal in untreated recipients. Most of the surviving females have an antibody in their sera that is cytotoxic for the variant, its parent, and normal thymocytes. None of the few surviving males had significant titers of the antibody. Cell-mediated immunity directed toward the positive and negative tumor cells was demonstrable in half the surviving animals of both sexes

  7. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    Science.gov (United States)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code rate of 0.937 is constructed by this scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4 288, 4 020) code has lower encoding/decoding complexity compared with the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code. The proposed QC-LDPC(4 288, 4 020) code is thus well suited to the increasing development requirements of high-speed optical transmission systems.
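
    For orientation, the quasi-cyclic structure underlying such codes can be illustrated by expanding a small base matrix of cyclic-shift values into circulant permutation blocks. The base matrix, shift values and block size below are arbitrary toy choices, not the construction proposed in the paper:

    ```python
    import numpy as np

    def circulant(shift, p):
        """p x p identity cyclically shifted by `shift`; -1 denotes the zero block."""
        if shift < 0:
            return np.zeros((p, p), dtype=int)
        return np.roll(np.eye(p, dtype=int), shift, axis=1)

    def qc_ldpc_H(base, p):
        """Expand a base matrix of shift values into a full parity-check matrix."""
        return np.block([[circulant(s, p) for s in row] for row in base])

    base = [[0, 1, -1, 2],     # arbitrary shift values for illustration
            [2, -1, 0, 1]]
    H = qc_ldpc_H(base, p=5)   # a 10 x 20 binary parity-check matrix
    print(H.shape, int(H.sum()))
    ```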

  8. "Self-critical perfectionism, daily stress, and disclosure of daily emotional events": Correction to Richardson and Rice (2015).

    Science.gov (United States)

    2016-01-01

    Reports an error in "Self-critical perfectionism, daily stress, and disclosure of daily emotional events" by Clarissa M. E. Richardson and Kenneth G. Rice (Journal of Counseling Psychology, 2015[Oct], Vol 62[4], 694-702). In the article, the labels of the two lines in Figure 1 were inadvertently transposed. The dotted line should be labeled High SCP and the solid line should be labeled Low SCP. The correct version is present in the erratum. (The following abstract of the original article appeared in record 2015-30890-001.) Although disclosure of stressful events can alleviate distress, self-critical perfectionism may pose an especially strong impediment to disclosure during stress, likely contributing to poorer psychological well-being. In the current study, after completing a measure of self-critical perfectionism (the Discrepancy subscale of the Almost Perfect Scale-Revised; Slaney, Rice, Mobley, Trippi, & Ashby, 2001), 396 undergraduates completed measures of stress and disclosure at the end of each day for 1 week. Consistent with hypotheses and previous research, multilevel modeling results indicated significant intraindividual coupling of daily stress and daily disclosure where disclosure was more likely when experiencing high stress than low stress. As hypothesized, Discrepancy moderated the relationship between daily stress and daily disclosure. Individuals higher in self-critical perfectionism (Discrepancy) were less likely to engage in disclosure under high stress, when disclosure is often most beneficial, than those with lower Discrepancy scores. These results have implications for understanding the role of stress and coping in the daily lives of self-critical perfectionists. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Integral reinforcement learning for continuous-time input-affine nonlinear systems with simultaneous invariant explorations.

    Science.gov (United States)

    Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2015-05-01

    This paper focuses on a class of reinforcement learning (RL) algorithms, named integral RL (I-RL), that solve continuous-time (CT) nonlinear optimal control problems with input-affine system dynamics. First, we extend the concepts of exploration, integral temporal difference, and invariant admissibility to the target CT nonlinear system, which is governed by a control policy plus a probing signal called an exploration. Then, we show input-to-state stability (ISS) and invariant admissibility of the closed-loop systems with the policies generated by integral policy iteration (I-PI) or the invariantly admissible PI (IA-PI) method. Based on these, three online I-RL algorithms, named explorized I-PI and integral Q-learning I and II, are proposed, all of which generate the same convergent sequences as I-PI and IA-PI under the required excitation condition on the exploration. All the proposed methods are partially or completely model-free, and can simultaneously explore the state space in a stable manner during the online learning processes. ISS, invariant admissibility, and convergence properties of the proposed methods are also investigated, and related to these, we show the design principles of the exploration for safe learning. Neural-network-based implementation methods for the proposed schemes are also presented in this paper. Finally, several numerical simulations are carried out to verify the effectiveness of the proposed methods.

  10. Reinforcement Learning for Ramp Control: An Analysis of Learning Parameters

    Directory of Open Access Journals (Sweden)

    Chao Lu

    2016-08-01

    Full Text Available Reinforcement Learning (RL) has been proposed to deal with ramp control problems under dynamic traffic conditions; however, there is a lack of sufficient research on the behaviour and impacts of different learning parameters. This paper describes a ramp control agent based on the RL mechanism and thoroughly analyzes the influence of three learning parameters, namely the learning rate, the discount rate and the action selection parameter, on the algorithm performance. Two indices for learning speed and convergence stability were used to measure the algorithm performance, based on which a series of simulation-based experiments were designed and conducted using a macroscopic traffic flow model. Simulation results showed that, compared with the discount rate, the learning rate and the action selection parameter made more remarkable impacts on the algorithm performance. Based on the analysis, some suggestions about how to select suitable parameter values that can achieve a superior performance were provided.
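
    The three parameters analyzed correspond directly to alpha, gamma and epsilon in a standard epsilon-greedy Q-learning update. The generic sketch below (not the paper's ramp-control agent) marks where each one enters:

    ```python
    import random
    from collections import defaultdict

    def q_learning_step(Q, s, actions, env_step, alpha=0.1, gamma=0.9, epsilon=0.1):
        """One Q-learning step; alpha (learning rate), gamma (discount rate) and
        epsilon (action selection parameter) are the three parameters analyzed."""
        # Action selection parameter: explore with probability epsilon.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(s, x)])
        r, s_next = env_step(s, a)                        # environment transition
        best_next = max(Q[(s_next, x)] for x in actions)
        # Learning rate alpha scales the update; gamma discounts future value.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        return s_next

    Q = defaultdict(float)   # tabular action values, lazily initialized to zero
    ```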

  11. Library of sophisticated functions for analysis of nuclear spectra

    Science.gov (United States)

    Morháč, Miroslav; Matoušek, Vladislav

    2009-10-01

    Markov chains. The peak searching algorithms use smoothed second differences and can search for peaks of general form. The deconvolution (decomposition - unfolding) functions use the Gold iterative algorithm, its improved high-resolution version, and the Richardson-Lucy algorithm. In the peak fitting algorithms we have implemented two approaches. The first is based on the algorithm without matrix inversion (AWMI algorithm), which allows fitting of large blocks of data and large numbers of parameters. The other is based on solving the system of linear equations using the Stiefel-Hestens method; it converges faster than AWMI but is not suitable for fitting a large number of parameters. Restrictions: Dimensionality of the analyzed data is limited to two. Unusual features: Dynamically loadable library (DLL) of processing functions that users can call from their own programs. Running time: Most processing routines execute interactively or in a few seconds. Computationally intensive routines (deconvolution, fitting) execute longer, depending on the number of iterations specified and the volume of the processed data.
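
    For reference, the Richardson-Lucy iteration implemented by such deconvolution routines fits in a few lines for a 1D spectrum. This sketch assumes a known, shift-invariant, normalized response function and uses plain NumPy convolution rather than the library's own code:

    ```python
    import numpy as np

    def richardson_lucy_1d(measured, response, iterations=50):
        """Richardson-Lucy deconvolution of a 1D spectrum given a normalized,
        shift-invariant instrument response."""
        measured = np.asarray(measured, dtype=float)
        estimate = np.full_like(measured, measured.mean())
        mirrored = response[::-1]                      # adjoint of the convolution
        for _ in range(iterations):
            blurred = np.convolve(estimate, response, mode="same")
            ratio = measured / np.maximum(blurred, 1e-12)
            estimate *= np.convolve(ratio, mirrored, mode="same")
        return estimate
    ```

    Each pass re-blurs the current estimate, compares it with the measurement, and multiplicatively corrects the estimate, which keeps it non-negative throughout.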

  12. An Improved Reinforcement Learning System Using Affective Factors

    Directory of Open Access Journals (Sweden)

    Takashi Kuremoto

    2013-07-01

    Full Text Available As a powerful and intelligent machine learning method, reinforcement learning (RL) has been widely used in many fields such as game theory, adaptive control, multi-agent systems, nonlinear forecasting, and so on. The main contribution of this technique is its exploration and exploitation approaches to find the optimal or semi-optimal solution of goal-directed problems. However, when RL is applied to multi-agent systems (MASs), problems such as the "curse of dimension", the "perceptual aliasing problem", and uncertainty of the environment constitute high hurdles to RL. Meanwhile, although RL is inspired by behavioral psychology and uses reward/punishment from the environment, higher mental factors such as affects, emotions, and motivations are rarely adopted in the learning procedure of RL. In this paper, to address the challenges of agent learning in MASs, we propose a computational motivation function, which adopts the two principal affective factors "Arousal" and "Pleasure" of Russell's circumplex model of affects, to improve the learning performance of a conventional RL algorithm named Q-learning (QL). Compared with the conventional QL, computer simulations of pursuit problems with static and dynamic preys were carried out, and the results showed that the proposed method gives agents a faster and more stable learning performance.
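
    The abstract does not give the exact form of the motivation function, so the sketch below is only one plausible reading of it: the two affective factors enter as a shaping term added to the environmental reward before the conventional QL update. The linear form and the weights are assumptions:

    ```python
    def motivated_reward(r_env, arousal, pleasure, w_a=0.5, w_p=0.5):
        """Shape the environmental reward with the two affective factors of
        Russell's circumplex model; the linear form and weights are assumed."""
        return r_env + w_a * arousal + w_p * pleasure

    # The shaped reward then drives the usual Q-learning update:
    #   Q[s][a] += alpha * (motivated_reward(r, ar, pl)
    #                       + gamma * max(Q[s_next].values()) - Q[s][a])
    ```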

  13. Verification of the appropriateness of surgical admissions using the RL-PVACE instrument: economic and organizational evaluation in the Azienda Ospedaliera G. Salvini

    Directory of Open Access Journals (Sweden)

    R. Barni

    2003-05-01

    Full Text Available

    Objectives: to analyze, with the RL-PVACE instrument, the appropriateness of ordinary admissions resulting in surgical DRGs at high risk of inappropriateness; and to identify and economically evaluate the organizational changes needed to reach the appropriateness level set by regional circular 39/SAN.

    Methods: the study design is longitudinal and retrospective. Only the day of admission was analyzed for all ordinary cases discharged in the first half of 2001, excluding 0-1 day stays, that resulted in surgical DRGs whose share of ordinary admissions assessable with RL-PVA exceeded the admissibility threshold defined in ddg no. 20180 (2002). The DRGs thus selected were, in decreasing order of frequency: 119, 162, 55, 158, 222, 160, 267, 232, 270, 262, 40. Using cost-accounting data and hypothesizing the shift of ordinary admissions to one-day stays or to day hospital, the break-even point was calculated by break-even analysis.

    Results: the data presented relate to 845 clinical records corresponding to the DRGs selected at the Garbagnate and Bollate hospitals. An appropriate level of care was found in 38.3% of cases, while only 3.8% of admissions proved timely (overall appropriateness 40.9%). The criterion most often used to attribute the appropriateness of the level of care was K3 (58.3%), followed by L4 (20%) and K1 (7%); the criteria J6 (3.4%), L3 (3.4%), J6 (3.4%) and J4 (2.8%) were less frequent. Overrides were activated in a minority of cases (2.1%). From an economic standpoint, recourse to the "one day surgery" modality proved relatively advantageous.

    Conclusions: the use of the RL-PVACE protocol proved straightforward (low recourse to overrides). The level of appropriateness achieved is in

  14. Evaluation of the communicable diseases workforce in provincial health directorates

    Directory of Open Access Journals (Sweden)

    Raika Durusoy

    2011-09-01

    Full Text Available Objective: To evaluate the professions, experience, staff turnover and in-service training of the personnel and managers working in the communicable diseases branches of provincial health directorates in Turkey. Methods: In this cross-sectional study, a questionnaire was administered in 2007 to health personnel working on communicable diseases in provincial health directorates in Turkey (deputy health directors and communicable diseases branch staff). The questionnaire was delivered to the branches by official letter, and the questionnaires completed by the branches were evaluated. Responses were received from 78 of the 81 provinces (response rate 96.3%). The questionnaire asked about the sociodemographic characteristics, professions, length of service, educational status and in-service training of the personnel working on communicable diseases. The number and characteristics of those who had left the communicable diseases branch within the last two years were also investigated. Results: For the 78 responding health directorates, the Ministry of Health workforce for communicable disease control at the provincial level consisted of 77 provincial deputy health directors, 71 branch directors and 518 branch staff. Of the deputy directors 97.4% were physicians, as were 80.3% of the branch directors. Regarding the deputy directors' experience in the communicable diseases branch directorate, 73.3% had not previously served as branch director, and 97.3% had not worked in the branch in any other position. Of the branch directors, 22.5% had previous experience working in the branch. Forty-one branch directors (57.7%) and 22 deputy directors (28.6%) had received training-of-trainers instruction on communicable diseases. The professions of the personnel working in the communicable diseases branch directorates other than the branch director

  15. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
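
    The core idea, truncating the Taylor series of the exact solution at order N, can be sketched for a scalar ODE whose derivatives are available in closed form. The example below uses dx/dt = -x, for which the k-th derivative is (-1)^k x; it illustrates the truncation only and is not the authors' algorithm:

    ```python
    from math import factorial, exp

    def taylor_step(x, h, N):
        """Advance dx/dt = -x by one step h using the order-N Taylor expansion.
        For this ODE the k-th derivative at x is simply (-1)**k * x."""
        return sum((-1) ** k * x * h ** k / factorial(k) for k in range(N + 1))

    x, h = 1.0, 0.1
    for _ in range(10):
        x = taylor_step(x, h, N=6)
    print(x, exp(-1.0))   # compare with the exact solution e^{-t} at t = 1
    ```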

  17. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete statement of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
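
    The flavor of searching the operator space alongside the solution space can be conveyed by a generic adaptive-operator GA, in which mutation operators are credited and selected according to their past success. This is a sketch of the general idea only, not the algorithm proposed in the paper:

    ```python
    import random

    def flip_one(bits):
        """Mutation operator 1: flip a single random bit."""
        child = bits[:]
        child[random.randrange(len(child))] ^= 1
        return child

    def flip_two(bits):
        """Mutation operator 2: flip two random bits."""
        return flip_one(flip_one(bits))

    def fitness(bits):
        """OneMax toy benchmark: maximize the number of ones."""
        return sum(bits)

    operators = [flip_one, flip_two]
    credit = [1.0, 1.0]                # running success score per operator

    best = [random.randint(0, 1) for _ in range(20)]
    for _ in range(500):
        # Choose an operator in proportion to its past success.
        k = random.choices(range(len(operators)), weights=credit)[0]
        child = operators[k](best)
        if fitness(child) > fitness(best):
            credit[k] += 1.0           # reward the operator that made progress
            best = child

    print(fitness(best), [round(c, 1) for c in credit])
    ```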

  18. Efficient collective swimming by harnessing vortices through deep reinforcement learning.

    Science.gov (United States)

    Verma, Siddhartha; Novati, Guido; Koumoutsakos, Petros

    2018-06-05

    Fish in schooling formations navigate complex flow fields replete with mechanical energy in the vortex wakes of their companions. Their schooling behavior has been associated with evolutionary advantages including energy savings, yet the underlying physical mechanisms remain unknown. We show that fish can improve their sustained propulsive efficiency by placing themselves in appropriate locations in the wake of other swimmers and intercepting judiciously their shed vortices. This swimming strategy leads to collective energy savings and is revealed through a combination of high-fidelity flow simulations with a deep reinforcement learning (RL) algorithm. The RL algorithm relies on a policy defined by deep, recurrent neural nets, with long-short-term memory cells, that are essential for capturing the unsteadiness of the two-way interactions between the fish and the vortical flow field. Surprisingly, we find that swimming in-line with a leader is not associated with energetic benefits for the follower. Instead, "smart swimmer(s)" place themselves at off-center positions, with respect to the axis of the leader(s) and deform their body to synchronize with the momentum of the oncoming vortices, thus enhancing their swimming efficiency at no cost to the leader(s). The results confirm that fish may harvest energy deposited in vortices and support the conjecture that swimming in formation is energetically advantageous. Moreover, this study demonstrates that deep RL can produce navigation algorithms for complex unsteady and vortical flow fields, with promising implications for energy savings in autonomous robotic swarms.

  19. Adaptive, Distributed Control of Constrained Multi-Agent Systems

    Science.gov (United States)

    Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MASs), i.e., for distributed stochastic optimization using MASs. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded-rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution on the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N Queens problem and the bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.
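
    In its simplest mean-field form, the bounded-rational equilibrium described here is a set of per-agent Boltzmann distributions iterated to a fixed point of the Lagrangian. The two-agent payoff matrix and the temperature below are invented for illustration:

    ```python
    import numpy as np

    # World utility G(x1, x2) to be minimized; rows = agent 1 moves, cols = agent 2.
    G = np.array([[3.0, 1.0],
                  [0.0, 2.0]])
    T = 0.5                                  # temperature: bounded rationality

    q1 = np.array([0.5, 0.5])                # agent distributions over their moves
    q2 = np.array([0.5, 0.5])

    for _ in range(100):
        # Each agent plays a Boltzmann distribution against the other's mixture.
        e1 = G @ q2                          # expected G for each move of agent 1
        q1 = np.exp(-e1 / T); q1 /= q1.sum()
        e2 = G.T @ q1                        # expected G for each move of agent 2
        q2 = np.exp(-e2 / T); q2 /= q2.sum()

    print(q1, q2)                            # approximate Lagrangian minimizer
    ```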

  20. Model-Based Reasoning in Humans Becomes Automatic with Training.

    Directory of Open Access Journals (Sweden)

    Marcos Economides

    2015-09-01

    Full Text Available Model-based and model-free reinforcement learning (RL have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.

  1. The pattern of verbal, visuospatial and procedural learning in Richardson variant of progressive supranuclear palsy in comparison to Parkinson's disease.

    Science.gov (United States)

    Sitek, Emilia J; Wieczorek, Dariusz; Konkel, Agnieszka; Dąbrowska, Magda; Sławek, Jarosław

    2017-08-29

    Progressive supranuclear palsy (PSP) is regarded either within the spectrum of atypical parkinsonian syndromes or frontotemporal lobar degeneration. We compared the verbal, visuospatial and procedural learning profiles in patients with PSP and Parkinson's disease (PD). Furthermore, the relationship between executive factors (initiation and inhibition) and learning outcomes was analyzed. Thirty-three patients with a clinical diagnosis of PSP-Richardson's syndrome (PSP-RS), 39 patients with PD and 29 age- and education-matched controls were administered the Mini-Mental State Examination (MMSE), phonemic and semantic fluency tasks, the Auditory Verbal Learning Test (AVLT), the Visual Learning and Memory Test for Neuropsychological Assessment by Lamberti and Weidlich (Diagnosticum für Cerebralschädigung, DCS), the Tower of Toronto (ToT) and two motor sequencing tasks. Patients with PSP-RS and PD were matched in terms of MMSE scores and mood. Performance on the DCS was lower in PSP-RS than in PD. AVLT delayed recall was better in PSP-RS than PD. The motor sequencing tasks did not differentiate between patients. Scores on the AVLT correlated positively with phonemic fluency scores in both PSP-RS and PD. ToT rule violation scores were negatively associated with DCS performance in PSP-RS and PD, as well as with AVLT performance in PD. Global memory performance is relatively similar in PSP-RS and PD. Executive factors (initiation and inhibition) are closely related to memory performance in PSP-RS and PD. Visuospatial learning impairment in PSP-RS is possibly linked to impulsivity and failure to inhibit automatic responses.

  2. Real-time in vivo luminescence dosimetry in radiotherapy and mammography using Al2O3:C

    Energy Technology Data Exchange (ETDEWEB)

    Aznar, M.C.

    2005-06-15

    New treatment and clinical imaging techniques have created a need for accurate and practical in vivo dosimeters in radiation medicine. This work describes the development of a new optical-fiber radiation dosimeter system, based on radioluminescence (RL) and optically stimulated luminescence (OSL) from carbon-doped aluminium oxide (Al2O3:C), for applications in radiotherapy and mammography. This system offers several features, such as a small detector, high sensitivity, real-time read-out, and the ability to measure both dose rate and absorbed dose. Measurement protocols and algorithms for the correction of responses were developed to enable a reliable absorbed dose assessment from the RL and OSL signals. At radiotherapy energies, the variation of the signal with beam parameters was smaller than 1% (1 SD). Treatment-like experiments in phantoms, and in vivo measurements during complex patient treatments (such as intensity-modulated radiation therapy), indicate that the RL/OSL dosimetry system can reliably measure the absorbed dose within 2%. The real-time RL signal also enables an individual dose assessment from each field. The RL/OSL dosimetry system was also used during mammography examinations. In such conditions, the reproducibility of the measurements was shown to be around 3%. In vivo measurements on three patients showed that the presence of the RL/OSL probes did not degrade the diagnostic quality of the radiograph and that the system could be used to measure exit doses (i.e., absorbed doses on the inferior surface of the breast). A Monte Carlo study proved that the energy dependence of the RL/OSL system at these low energies could be reduced by optimizing the design of the probes. It is concluded that the new RL/OSL dosimetry system shows considerable potential for applications in both radiotherapy and mammography. (au)

  4. Uncapacitated facility location problem with self-serving demands

    African Journals Online (AJOL)

    facility locations together with servers in J P . Define decision variables xij and yj as in ... The Erlenkotter algorithm is a heuristic weak-dual based method for UFLP. ..... 119–171 in Mirchandani PB & Francis RL (Eds), Discrete location theory, ...

  5. Applying reinforcement learning to the weapon assignment problem in air defence

    CSIR Research Space (South Africa)

    Mouton, H

    2011-12-01

    Full Text Available The techniques investigated in this article were two methods from the machine-learning subfield of reinforcement learning (RL), namely a Monte Carlo (MC) control algorithm with exploring starts (MCES), and an off-policy temporal-difference (TD) learning...

  6. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  7. Improving the Solubility of Ketoprofen Using EDA-Cored Amine-, TRIS- and Carboxyl-Terminated PAMAM Dendrimers

    Directory of Open Access Journals (Sweden)

    Ali Serol ERTÜRK

    2017-10-01

    Full Text Available Non-steroidal anti-inflammatory (NSAID) drugs are widely used for their analgesic, antipyretic and anti-inflammatory effects. In recent years it has been discovered that, in addition to their well-known classical effects, NSAIDs have many different therapeutic effects (in cancer, Alzheimer's disease and Parkinson's disease). The results show that the aqueous solubility of ketoprofen (KETO) in the presence of poly(amidoamine) (PAMAM) dendrimers was significantly improved with increasing generation size (E2-E4) and dendrimer concentration (0-2 mM). The role of the PAMAM dendrimers in increasing the solubility of KETO (0.22 ± 0.003 mg/mL) followed the order E4.TRIS (52.77 ± 2.06 mg/mL) > E4.COOH (36.42 ± 0.54 mg/mL) > E3.TRIS (13.70 ± 0.17 mg/mL) > E3.COOH (11.97 ± 0.14 mg/mL) > E4.NH2 (6.53 ± 0.19 mg/mL) > E2.COOH (5.95 ± 0.10 mg/mL) > E2.TRIS (5.72 ± 0.10 mg/mL) > E3.NH2 (4.21 ± 0.04 mg/mL) > E2.NH2 (2.35 ± 0.04 mg/mL), ranging from 11- to 240-fold in the presence of 0.002 M dendrimer.

  8. Ricky and Lucy: gender stereotyping among young Black men who have sex with men in the US Deep South and the implications for HIV risk in a severely affected population.

    Science.gov (United States)

    Lichtenstein, Bronwen; Kay, Emma Sophia; Klinger, Ian; Mutchler, Matt G

    2018-03-01

    HIV disproportionately affects young Black men who have sex with men in the USA, with especially high rates in the Deep South. In this Alabama study, we interviewed 24 pairs of young Black men who have sex with men aged 19-24 and their close friends (n = 48) about sexual scripts, dating men and condom use. Three main themes emerged from the study: the power dynamics of 'top' and 'bottom' sexual positions for condom use; gender stereotyping in the iconic style of the 'I Love Lucy' show of the 1950s; and the sexual dominance of 'trade' men. Gender stereotyping was attributed to the cultural mores of Black families in the South, to the preferences of 'trade' men who exerted sexual and financial control and to internalised stigma relating to being Black, gay and marginalised. The findings suggest that HIV prevention education for young Black men who have sex with men is misguided if gendered power dynamics are ignored, and that funded access to self-protective strategies such as pre-exposure prophylaxis and post-exposure prophylaxis could reduce HIV risk for this severely affected population.

  9. Occurrence and Distribution of Pesticides in the St. Lucie River Watershed, South-Central Florida, 2000-01, Based on Enzyme-Linked Immunosorbent Assay (ELISA) Screening

    Science.gov (United States)

    Lietz, A.C.

    2003-01-01

    The St. Lucie River watershed is a valuable estuarine ecosystem and resource in south-central Florida. The watershed has undergone extensive changes over the last century because of anthropogenic activities. These activities have resulted in a complex urban and agricultural drainage network that facilitates the transport of contaminants, including pesticides, to the primary canals and then to the estuary. Historical data indicate that aquatic life criteria for selected pesticides have been exceeded. To address this concern, a reconnaissance was conducted to assess the occurrence and distribution of selected pesticides within the St. Lucie River watershed. Numerous water samples were collected from 37 sites among various land-use categories (urban/built-up, citrus, cropland/pastureland, and integrated). Samples were collected at inflow points to primary canals (C-23, C-24, and C-44) and at control structures along these canals from October 2000 to September 2001. Samples were screened for four pesticide classes (triazines, chloroacetanilides, chlorophenoxy compounds, and organophosphates) by using Enzyme-Linked Immunosorbent Assay (ELISA) screening. A temporal distribution of pesticides within the watershed was made based on samples collected at the integrated sites during different rainfall events between October 2000 and September 2001. Triazines were detected in 32 percent of the samples collected at the integrated sites. Chloroacetanilides were detected in 60 percent of the samples collected at the integrated sites, with most detections occurring at one site. Chlorophenoxy compounds were detected in 17 percent of the samples collected at the integrated sites. Organophosphates were detected in only one sample. A spatial distribution and range of concentration of pesticides at the 37 sampling sites in the watershed were determined among land-use categories. Triazine concentrations ranged from highest to lowest in the citrus, urban/built-up, and integrated areas

  10. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  11. Towards Behavior Control for Evolutionary Robot Based on RL with ENN

    Directory of Open Access Journals (Sweden)

    Jingan Yang

    2012-03-01

    Full Text Available This paper proposes a behavior-switching control strategy for an evolutionary robot based on Artificial Neural Networks (ANN) and Genetic Algorithms (GA). This method is able not only to construct the reinforcement learning models for autonomous robots and the evolutionary robot modules that control behaviors and reinforcement learning environments, but also to perform behavior-switching control and obstacle avoidance of an evolutionary robot (ER) in time-varying environments with static and moving obstacles by combining ANN and GA. The experimental results on basic behaviors and behavior-switching control have demonstrated that our method can perform the decision-making strategy and parameter-set optimization of FNN and GA by learning, can escape successfully from the trap of a local minimum and avoid the "motion deadlock" status of humanoid soccer robotics agents, and can reduce the oscillation of the planned trajectory between multiple obstacles by crossover and mutation. Some results of the proposed algorithm have been successfully applied to our simulated humanoid robotics soccer team CIT3D, which won the 1st prize of the RoboCup Championship and ChinaOpen2010 (July 2010) and 2nd place at the official RoboCup World Championship on 5-11 July, 2011 in Istanbul, Turkey. As compared with the conventional behavior network and the adaptive behavior method, the genetic encoding complexity of our algorithm is simplified, and the network performance and the convergence rate ρ have been greatly improved.

  12. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the Selfish Gene Algorithm (SFGA), the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the Selfish Gene Algorithm (SFGA) as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  13. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  14. Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints.

    Science.gov (United States)

    Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai

    2015-07-01

    The design of stabilizing controller for uncertain nonlinear systems with control constraints is a challenging problem. The constrained-input coupled with the inability to identify accurately the uncertainties motivates the design of stabilizing controller based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to the constrained optimal control problem with appropriately selecting value functions for the nominal system. Distinct from typical action-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee the uncertain nonlinear system to be stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.

  15. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  16. Perceptions of Minority Students about History Lessons in the Context of Multiculturalism and Patriotism

    Directory of Open Access Journals (Sweden)

    Fatih YAZICI

    2015-10-01

    Full Text Available The purpose of this study is to evaluate the perceptions of students attending minority schools in Turkey with respect to history lessons, in the context of multiculturalism and patriotism. The study group consists of 199 intentionally chosen students attending different minority schools in Istanbul. The survey method was used. The Multiculturalism Attitude Scale, the Patriotism Scale and a questionnaire examining cultural diversity in history lessons, all developed by the researchers, were administered. In analyzing the data, the t-test, arithmetic mean and standard deviation were used. The majority of participating students stated that their cultural identities were not sufficiently represented in history lessons, and that an othering discourse was used in the sections that did mention those identities. In this form, history lessons fall far short of fostering a sense of patriotism in students with cultural differences. Keywords: History Teaching, Multiculturalism, Patriotism, Multicultural Education, Minority Schools

  17. A detailed study of the supernova remnant RCW 86 in TeV γ-rays

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, Sebastian

    2012-03-29

    A detailed study of the supernova remnant RCW 86 is presented. RCW 86 exhibits a shell-like structure in radio, X-rays and the optical, whereas in the discovery paper of RCW 86 in the very high energy regime the structure could not be confirmed. In this thesis the shell was resolved for the first time in very high energy gamma rays. The shell width was determined to be 0.125 ± 0.014, the radius to be 0.194 ± 0.016, and the center to be -62.433 ± 0.014 in declination and 220.734 ± 0.016 in right ascension. The spectral analysis was performed separately for the whole SNR and for the south-east part, which is more pronounced in X-rays, but the results were comparable within errors. A power law with an exponential cut-off described the spectra best, with the following parameters: a spectral index of 1.50 ± 0.28, a cut-off energy of 2.69 ± 0.99 TeV, and an integral flux above 1 TeV of (6.51 ± 2.69) × 10^-12 cm^-2 s^-1. The study of the correlation between the X-ray and VHE gamma-ray data of RCW 86 was hampered by the poor angular resolution of the VHE data. Therefore detailed studies of the Richardson-Lucy deconvolution algorithm have been performed. The outcome is that deconvolution techniques are applicable to strong VHE gamma-ray sources and that fine structure well below the angular resolution can be studied. The application to RX J1713-3946, the brightest SNR in the VHE regime, has shown that the correlation coefficient between the X-ray data and the VHE data is stable down to 0.01 and has a value of 0.85. On the other hand, the significance of the data set is not sufficient in the case of RCW 86 to apply the deconvolution technique.

  18. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures

  19. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  20. Proposal for financial analysis through the application of financial indicators for managerial decision-making at Cooperativa Coopeacosta R.L.

    OpenAIRE

    Fallas Calderón, Fernando de Jesús

    2014-01-01

    Master's thesis -- Universidad de Costa Rica. Postgraduate Program in Business Administration and Management. Professional Master's in Business Administration and Management with an emphasis in Finance, 2014. Sound management of financial resources is a fundamental concern for COOPEACOSTA R.L., since its operation revolves around the financial management of its members' resources. Hence the interest in having a well-designed process aimed at...

  1. The best interior design blogs of 2015 / Kaili Kannel

    Index Scriptorium Estoniae

    Kannel, Kaili

    2015-01-01

    The best blogs announced at the Amara interior design blog awards: Copperline, The Lovely Drawer, English Buildings, Lucy Loves Ya, Mad About The House, Lucy Gough Stylist Blog, Don't Cramp my Style, Flat 15, Anna E. Lee Interior Design

  2. Denni Algorithm: An Enhancement of the SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources are invested in the search for better sorting algorithms. For this purpose many existing sorting algorithms were examined in terms of the efficiency of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications, algorithms often use sorting as a key subroutine, many essential algorithm-design techniques are represented in the body of sorting algorithms, and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the best known algorithms that makes the process of sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is considered an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm was compared with the SMS algorithm and the results were promising.

  3. Reinforcement learning: Solving two case studies

    Science.gov (United States)

    Duarte, Ana Filipa; Silva, Pedro; dos Santos, Cristina Peixoto

    2012-09-01

    Reinforcement Learning algorithms offer interesting features for the control of autonomous systems, such as the ability to learn from direct interaction with the environment, and the use of a simple reward signal as opposed to the input-output pairs used in classic supervised learning. The reward signal indicates the success or failure of the actions executed by the agent in the environment. In this work, RL algorithms applied to two case studies are described: the Crawler robot and the widely known inverted pendulum. We explore RL capabilities to autonomously learn a basic locomotion pattern in the Crawler, and approach the balancing problem of biped locomotion using the inverted pendulum.

  4. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  5. Integration of supercapacitive storage in renewable energy system to compare the response of two level and five level inverter with RL type load

    Science.gov (United States)

    Jana, Suman; Biswas, Pabitra Kumar; Das, Upama

    2018-04-01

    The analytical and simulation-based study presented in this paper offers a comparison between a two-level inverter and a five-level inverter with the integration of supercapacitive storage in a renewable energy system. Time-dependent numerical models are used to measure the voltage and current response of the two-level and five-level inverters in a MATLAB Simulink based environment. In this study supercapacitive sources, fed by solar cells, are used as input sources to examine the response of a multilevel inverter with the integration of a supercapacitor as a storage device of the renewable energy system. An RL load is used to compute the time response in the MATLAB Simulink based environment. From the simulation results, a comparative study of the two inverter levels has been made. Two basic types of inverter are discussed in the study with reference to their electrical behavior. It is also shown in simulation that a multilevel inverter can convert the energy stored in the supercapacitor, which is extracted from the renewable energy system.

  6. Nebular excitation in z ∼ 2 star-forming galaxies from the SINS and LUCI surveys: The influence of shocks and active galactic nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Sarah F.; Genzel, Reinhard [Department of Astronomy, Campbell Hall, University of California, Berkeley, CA 94720 (United States); Buschkamp, Peter; Förster Schreiber, Natascha M.; Kurk, Jaron; Rosario, David; Davies, Ric; Eisenhauer, Frank; Lutz, Dieter [Max-Planck-Institut für extraterrestrische Physik (MPE), Giessenbachstr. 1, D-85748 Garching (Germany); Sternberg, Amiel [School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978 (Israel); Gnat, Orly [Racah Institute of Physics, The Hebrew University, Jerusalem 91904 (Israel); Mancini, Chiara; Renzini, Alvio [Osservatorio Astronomico di Padova, Vicolo dell' Osservatorio 5, I-35122 Padova (Italy); Lilly, Simon J.; Carollo, C. Marcella [Institute of Astronomy, Department of Physics, Eidgenössische Technische Hochschule, ETH, CH-8093 Zürich (Switzerland); Burkert, Andreas [Universitäts-Sternwarte Ludwig-Maximilians-Universität (USM), Scheinerstr. 1, D-81679 München (Germany); Cresci, Giovanni [Istituto Nazionale di Astrofisica Osservatorio di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Genel, Shy [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Shapiro Griffin, Kristen [Space Sciences Research Group, Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278 (United States); Hicks, Erin K. S., E-mail: sfnewman@berkeley.edu [Department of Astronomy, University of Washington, Box 351580, U.W., Seattle, WA 98195-1580 (United States); and others

    2014-01-20

    Based on high-resolution, spatially resolved data of 10 z ∼ 2 star-forming galaxies from the SINS/zC-SINF survey and LUCI data for 12 additional galaxies, we probe the excitation properties of high-z galaxies and the impact of active galactic nuclei (AGNs), shocks, and photoionization. We explore how these spatially resolved line ratios can inform our interpretation of integrated emission line ratios obtained at high redshift. Many of our galaxies fall in the 'composite' region of the z ∼ 0 [N II]/Hα versus [O III]/Hβ diagnostic (BPT) diagram, between star-forming galaxies and those with AGNs. Based on our resolved measurements, we find that some of these galaxies likely host an AGN, while others appear to be affected by the presence of shocks possibly caused by an outflow or from an enhanced ionization parameter as compared with H II regions in normal, local star-forming galaxies. We find that the Mass-Excitation (MEx) diagnostic, which separates purely star-forming and AGN hosting local galaxies in the [O III]/Hβ versus stellar mass plane, does not properly separate z ∼ 2 galaxies classified according to the BPT diagram. However, if we shift the galaxies based on the offset between the local and z ∼ 2 mass-metallicity relation (i.e., to the mass they would have at z ∼ 0 with the same metallicity), we find better agreement between the MEx and BPT diagnostics. Finally, we find that metallicity calibrations based on [N II]/Hα are more biased by shocks and AGNs at high z than the ([O III]/Hβ)/([N II]/Hα) calibration.

  7. On a multigrid method for the coupled Stokes and porous media flow problem

    Science.gov (United States)

    Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.

    2017-07-01

    The multigrid solution of coupled porous media and Stokes flow problems is considered. The Darcy equation as the saturated porous medium model is coupled to the Stokes equations by means of appropriate interface conditions. We focus on an efficient multigrid solution technique for the coupled problem, which is discretized by finite volumes on staggered grids, giving rise to a saddle point linear system. Special treatment is required regarding the discretization at the interface. An Uzawa smoother is employed in multigrid, which is a decoupled procedure based on symmetric Gauss-Seidel smoothing for the velocity components and a simple Richardson iteration for the pressure field. Since a relaxation parameter is part of the Richardson iteration, Local Fourier Analysis (LFA) is applied to determine its optimal value. Highly satisfactory multigrid convergence is reported; moreover, the algorithm performs well for small values of the hydraulic conductivity and fluid viscosity, which are relevant for applications.
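
    The pressure update inside the Uzawa smoother is a damped Richardson iteration. The following standalone Python sketch shows the bare iteration on a toy SPD system, with the relaxation parameter chosen from the spectrum by hand (the paper instead derives the optimal parameter via Local Fourier Analysis):

```python
import numpy as np

def richardson(A, b, omega, iters=200):
    """Damped Richardson iteration: x <- x + omega * (b - A x).
    Converges for SPD A when 0 < omega < 2 / lambda_max(A)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + omega * (b - A @ x)
    return x

# Toy SPD system; omega is the classical optimum 2 / (lambda_min + lambda_max).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
lam = np.linalg.eigvalsh(A)
omega = 2.0 / (lam.min() + lam.max())
print(richardson(A, b, omega), "vs exact", np.linalg.solve(A, b))
```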

  8. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One of those methods is cryptography. Cryptography secures a file by writing it as hidden code that conceals the original content; anyone not privy to the secret cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, when the file is encrypted with the TEA algorithm, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes as the plaintext length is increased by eight characters.
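
    For reference, the symmetric half of the scheme is standard TEA. A minimal Python sketch of TEA block encryption follows (64-bit block, 128-bit key, 32 rounds); the LUC key-encryption step is omitted, and the key shown is an arbitrary example:

```python
def tea_encrypt(block64, key128, rounds=32):
    """Standard TEA: encrypt a 64-bit block (two 32-bit words) with a 128-bit key."""
    v0, v1 = block64
    k0, k1, k2, k3 = key128
    delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(rounds):
        s = (s + delta) & mask
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
    return v0, v1

# Eight plaintext characters -> one 64-bit block, matching the abstract's
# observation that the ciphertext grows in 8-byte (16 hex digit) steps.
pt = b"ABCDEFGH"
block = (int.from_bytes(pt[:4], "big"), int.from_bytes(pt[4:], "big"))
key = (0x11111111, 0x22222222, 0x33333333, 0x44444444)   # example key only
print("cipher block: %08x %08x" % tea_encrypt(block, key))
```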

  9. Dynamic Resource Allocation with Integrated Reinforcement Learning for a D2D-Enabled LTE-A Network with Access to Unlicensed Band

    Directory of Open Access Journals (Sweden)

    Alia Asheralieva

    2016-01-01

    Full Text Available We propose a dynamic resource allocation algorithm for device-to-device (D2D) communication underlying a Long Term Evolution Advanced (LTE-A) network, with reinforcement learning (RL) applied to unlicensed channel allocation. In the considered system, the inband and outband resources are assigned by the LTE evolved NodeB (eNB) to different device pairs to maximize the network utility subject to target signal-to-interference-and-noise ratio (SINR) constraints. Because of the absence of an established control link between the unlicensed and cellular radio interfaces, the eNB cannot acquire any information about the quality and availability of unlicensed channels. As a result, the considered problem becomes a stochastic optimization problem that can be dealt with by deploying learning theory (to estimate the random unlicensed channel environment). Consequently, we formulate outband D2D access as a dynamic single-player game in which the player (the eNB) estimates its possible strategy and expected utility for all of its actions, based only on its own local observations, using a joint utility and strategy estimation based reinforcement learning (JUSTE-RL) with regret algorithm. The proposed approach to resource allocation demonstrates near-optimal performance after a small number of RL iterations and surpasses the other comparable methods in terms of energy efficiency and throughput maximization.

  10. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  11. Instance-based Policy Learning by Real-coded Genetic Algorithms and Its Application to Control of Nonholonomic Systems

    Science.gov (United States)

    Miyamae, Atsushi; Sakuma, Jun; Ono, Isao; Kobayashi, Shigenobu

    The stabilization control of nonholonomic systems has been extensively studied because it is essential for nonholonomic robot control problems. The difficulty in this problem is that a theoretical derivation of the control policy is not necessarily guaranteed to be achievable. In this paper, we present a reinforcement learning (RL) method with instance-based policy (IBP) representation, in which control policies for this class are optimized with respect to user-defined cost functions. Direct policy search (DPS) is an approach to RL: the policy is represented by parametric models and the model parameters are directly searched by optimization techniques including genetic algorithms (GAs). In IBP representation, an instance consists of a state and an action pair, and a policy consists of a set of instances. Several DPSs with IBP have been previously proposed, but these methods sometimes fail to obtain optimal control policies when the state-action variables are continuous. In this paper, we present a real-coded GA for DPSs with IBP, specifically designed for continuous domains. Optimization of IBP has three difficulties: high dimensionality, epistasis, and multi-modality. Our solution is designed to overcome these difficulties. Policy search with IBP representation appears to be a high-dimensional optimization; however, the instances which can improve the fitness are often limited to active instances (instances used for the evaluation), and in fact the number of active instances is small. Therefore, we treat the search as a low-dimensional problem by restricting the search variables to active instances. It is commonly known that functions with epistasis can be efficiently optimized with crossovers which satisfy the inheritance of statistics. For efficient search of IBP, we propose an extended crossover-like mutation (extended XLM) which generates a new instance around an existing instance while satisfying the inheritance of statistics. For overcoming multi-modality, we

  12. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
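
    A minimal generational genetic algorithm, sketched below in Python under illustrative parameter choices (tournament selection, one-point crossover, bit-flip mutation, one-max fitness), shows the basic concepts the record introduces:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, gens=100,
                      p_cross=0.9, p_mut=0.02):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation. Returns the best individual found."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(random.sample(pop, 3), key=fitness)   # tournament of 3
            p2 = max(random.sample(pop, 3), key=fitness)
            if random.random() < p_cross:                  # one-point crossover
                cut = random.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            nxt.append([b ^ (random.random() < p_mut) for b in p1])  # mutation
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

# One-max toy problem: fitness is simply the number of 1-bits.
print(sum(genetic_algorithm(sum)))
```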

  13. Eğitim Fakültelerinde Görev Yapmakta Olan Öğretim Elemanlarının Çokkültürlü Yeterlik Algılarının İncelenmesi Examining Multicultural Competence Perceptions of Education Faculty

    Directory of Open Access Journals (Sweden)

    Alper BAŞBAY

    2013-03-01

    Full Text Available Multicultural education is a learning and teaching approach that is based on democratic beliefs and values. Its main assumption is to construct the learning and teaching process in a way that encourages cultural pluralism and respects different cultures in a common sense. It is inevitable that teachers who work in primary and secondary schools have to provide multicultural environments for students who have different cultural backgrounds. In this manner, it is necessary to identify the perceptions about multiculturalism of the education faculty who train teacher candidates. In the present study, education faculties' perceptions regarding multicultural awareness, multicultural knowledge and multicultural skill levels were examined, along with whether these competency levels show significant differences with respect to various variables. 347 faculty members (176 female, 171 male) from 63 universities participated in the study. The data were collected with the "Perceptions of Multicultural Competence Scale" developed by the researchers, and multivariate ANOVA (MANOVA) was used for the analysis. According to the results, education faculties' perceived multicultural competencies were high. Although no significant differences were found in terms of academic position, geographical region, job experience and the place lived for most of their lives, two demographic variables, namely gender and international experience, were statistically significant: female faculty members' cultural awareness and skill levels were higher than male faculty members', and the multicultural competence perceptions of faculty with international experience were higher than those of faculty without.

  14. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  15. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  16. Richardson WCAP 39

    African Journals Online (AJOL)


    Peer-reviewed paper: 10th World Conference on Animal Production. Results and Discussion. The model was evaluated by comparing model output with the results of an experiment in South Western Zimbabwe that examined the performance of individual cows subjected to widely different stocking rates, 0.123 (LSR) ...

  17. 15 CFR Appendix B to Subpart G of... - Marine Reserve Boundaries

    Science.gov (United States)

    2010-01-01

    B.1. Richardson Rock (San Miguel Island) Marine Reserve. The Richardson Rock Marine Reserve (Richardson Rock) boundary is defined by the 3 nmi State boundary, the coordinates provided in Table B-1, and the following textual description. The Richardson Rock boundary extends from Point 1 to Point 2 along...

  18. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it's not the ocean, it's the internet we're talking about, and it's not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we...

  19. Effects of spray drying conditions on the physicochemical properties of the Tramadol-Hcl microparticles containing Eudragit® RS and RL

    Directory of Open Access Journals (Sweden)

    A S Patel

    2012-01-01

    Full Text Available The preparation of Tramadol-HCl spray-dried microspheres can be affected by the long drug recrystallization time. Polymer type and drug-polymer ratio, as well as manufacturing parameters, affect the preparation. The purpose of this work was to evaluate the possibility of obtaining tramadol spray-dried microspheres using Eudragit® RS and RL; the influence of the spray-drying parameters on the morphology, dimensions and physical stability of the microspheres was studied. The effects of matrix composition on microparticle properties were characterized by laser light scattering, differential scanning calorimetry (DSC), X-ray diffraction, FT-infrared and UV-visible spectroscopy. The spray-dried microparticles were evaluated in terms of shape (SEM), size distribution (laser light scattering), production yield, drug content, initial drug loading and encapsulation efficiency. The results of X-ray diffraction and thermal analysis reveal the conversion of the crystalline drug to an amorphous form. FTIR analysis confirmed the absence of any drug-polymer interaction. The results indicated that the entrapment efficiency (EE) and product yield depended on the polymeric composition and polymeric ratios of the microspheres prepared. Tramadol microspheres based on an Eudragit® blend can be prepared by spray-drying, and the nebulization parameters do not significantly influence the particle properties.

  20. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  1. A Retroperitoneal Leiomyosarcoma Presenting as an Adrenal Incidentaloma in a Subject on Warfarin

    Directory of Open Access Journals (Sweden)

    Ishrat N. Khan

    2015-01-01

    Full Text Available Adrenal incidentalomas (AIs) are mostly benign and nonsecretory. Management algorithms lack sensitivity when assessing malignant potential, although functional status is easier to assess. We present a subject whose AI was a retroperitoneal leiomyosarcoma (RL). Case Presentation. A woman on warfarin with SLE and the antiphospholipid syndrome, presented with left loin pain. She was normotensive and clinically normal. Ultrasound scans demonstrated left kidney scarring, but CT scans revealed an AI. MRI scans later confirmed the AI without significant fat and no interval growth. Cortisol after 1 mg dexamethasone, urinary free cortisol and catecholamines, plasma aldosterone renin ratio, and 17-hydroxyprogesterone were within the reference range. Initially, adrenal haemorrhage was diagnosed because of warfarin therapy and the acute presentation. However, she underwent adrenalectomy because of interval growth of the AI. Histology confirmed an RL. The patient received adjuvant radiotherapy. Discussion. Our subject presented with an NSAI. However, we highlight the following: (a) the diagnosis of adrenal haemorrhage in this anticoagulated woman was revised because of interval growth; (b) the tumour, an RL, was relatively small at diagnosis; (c) this subject has survived well over 60 months despite an RL perhaps because of her acute presentation and early diagnosis of a small localised tumour.

  2. A Retroperitoneal Leiomyosarcoma Presenting as an Adrenal Incidentaloma in a Subject on Warfarin.

    Science.gov (United States)

    Khan, Ishrat N; Adlan, Mohamed A; Stechman, Michael J; Premawardhana, Lakdasa D

    2015-01-01

    Adrenal incidentalomas (AIs) are mostly benign and nonsecretory. Management algorithms lack sensitivity when assessing malignant potential, although functional status is easier to assess. We present a subject whose AI was a retroperitoneal leiomyosarcoma (RL). Case Presentation. A woman on warfarin with SLE and the antiphospholipid syndrome, presented with left loin pain. She was normotensive and clinically normal. Ultrasound scans demonstrated left kidney scarring, but CT scans revealed an AI. MRI scans later confirmed the AI without significant fat and no interval growth. Cortisol after 1 mg dexamethasone, urinary free cortisol and catecholamines, plasma aldosterone renin ratio, and 17-hydroxyprogesterone were within the reference range. Initially, adrenal haemorrhage was diagnosed because of warfarin therapy and the acute presentation. However, she underwent adrenalectomy because of interval growth of the AI. Histology confirmed an RL. The patient received adjuvant radiotherapy. Discussion. Our subject presented with an NSAI. However, we highlight the following: (a) the diagnosis of adrenal haemorrhage in this anticoagulated woman was revised because of interval growth; (b) the tumour, an RL, was relatively small at diagnosis; (c) this subject has survived well over 60 months despite an RL perhaps because of her acute presentation and early diagnosis of a small localised tumour.

  3. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    ...of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...

  4. Mixed Convection of Variable Properties Al2O3-EG-Water Nanofluid in a Two-Dimensional Lid-Driven Enclosure

    Directory of Open Access Journals (Sweden)

    G.A. Sheikhzadeh

    2013-07-01

    Full Text Available In this paper, mixed convection of an Al2O3-EG-water nanofluid in a square lid-driven enclosure is investigated numerically. The focus of this study is on the effects of the variable thermophysical properties of the nanofluid on the heat transfer characteristics. The top moving and bottom stationary horizontal walls are insulated, while the vertical walls are kept at different constant temperatures. The study is carried out for Richardson numbers of 0.01–1000, solid volume fractions of 0–0.05 and a Grashof number of 10^4. The transport equations are solved numerically with a finite volume approach using the SIMPLER algorithm. The results show that the Nusselt number is mainly affected by the viscosity, density and conductivity variations. For low Richardson numbers, although the viscosity increases with increasing nanoparticle volume fraction, the average Nusselt number increases for both the constant and variable cases because of the high-intensity convection of the enhanced-conductivity nanofluid. However, for high Richardson numbers, as the volume fraction of nanoparticles increases, heat transfer enhancement occurs for the constant-properties cases but heat transfer deterioration occurs for the variable-properties cases. The distinction is due to the underestimation of the nanofluid viscosity by the constant-viscosity model in the constant-properties cases, and it indicates the important effects of the temperature dependency of the thermophysical properties, in particular the viscosity distribution in the domain.

  5. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back-propagation neural network (BPN), which gives it better global optimization characteristics than traditional optimization algorithms. In this paper, we used GA-BPN for image noise filtering. Firstly, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  6. Model-Free Machine Learning in Biomedicine: Feasibility Study in Type 1 Diabetes

    Science.gov (United States)

    Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G.

    2016-01-01

    Although reinforcement learning (RL) is suitable for highly uncertain systems, the applicability of this class of algorithms to medical treatment may be limited by the patient variability which dictates individualised tuning for their usually multiple algorithmic parameters. This study explores the feasibility of RL in the framework of artificial pancreas development for type 1 diabetes (T1D). In this approach, an Actor-Critic (AC) learning algorithm is designed and developed for the optimisation of insulin infusion for personalised glucose regulation. AC optimises the daily basal insulin rate and insulin:carbohydrate ratio for each patient, on the basis of his/her measured glucose profile. Automatic, personalised tuning of AC is based on the estimation of information transfer (IT) from insulin to glucose signals. Insulin-to-glucose IT is linked to patient-specific characteristics related to total daily insulin needs and insulin sensitivity (SI). The AC algorithm is evaluated using an FDA-accepted T1D simulator on a large patient database under a complex meal protocol, meal uncertainty and diurnal SI variation. The results showed that 95.66% of time was spent in normoglycaemia in the presence of meal uncertainty and 93.02% when meal uncertainty and SI variation were simultaneously considered. The time spent in hypoglycaemia was 0.27% in both cases. The novel tuning method reduced the risk of severe hypoglycaemia, especially in patients with low SI. PMID:27441367

  7. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  8. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  9. The Lund University Checklist for Incipient Exhaustion-a cross-sectional comparison of a new instrument with similar contemporary tools.

    Science.gov (United States)

    Persson, Roger; Österberg, Kai; Viborg, Njördur; Jönsson, Peter; Tenenbaum, Artur

    2016-04-21

    Stress-related health problems (e.g., work-related exhaustion) are a societal concern in many postindustrial countries. Experience suggests that early detection and intervention are crucial in preventing long-term negative consequences. In the present study, we benchmark a new tool for the early identification of work-related exhaustion, the Lund University Checklist for Incipient Exhaustion (LUCIE), against other contextually relevant inventories and two contemporary Swedish screening scales. A cross-sectional population sample (n = 1355) completed: LUCIE, the Karolinska Exhaustion Disorder Scale (KEDS), the Self-reported Exhaustion Disorder Scale (s-ED), the Shirom-Melamed Burnout Questionnaire (SMBQ), the Utrecht Work Engagement Scale (UWES-9), the Job Content Questionnaire (JCQ), the Big Five Inventory (BFI), and items concerning work-family interference and stress in private life. Increasing signs of exhaustion on LUCIE were positively associated with signs of exhaustion on KEDS and s-ED. The prevalence rates were 13.4, 13.8 and 7.8 %, respectively (3.8 % were identified by all three instruments). Increasing signs of exhaustion on LUCIE were also positively associated with reports of burnout, job demands, stress in private life, family-to-work interference and neuroticism, as well as negatively associated with reports of job control, job support and work engagement. LUCIE, which is intended to detect pre-stages of ED, exhibits logical and coherent positive relations with KEDS and s-ED as well as with other conceptually similar inventories. The results suggest that LUCIE has the potential to detect mild states of exhaustion (possibly representing pre-stages of ED) that, if not brought to the attention of the healthcare system and treated, may develop into ED. The prospective validity remains to be evaluated.

  10. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  11. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  12. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  13. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and searching precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA, called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help the artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  14. A filtered backprojection algorithm with characteristics of the iterative landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.

  15. Pricing and simulation for real estate index options: Radial basis point interpolation

    Science.gov (United States)

    Gong, Pu; Zou, Dong; Wang, Jiayue

    2018-06-01

    This study employs the meshfree radial basis point interpolation (RBPI) for pricing real estate derivatives contingent on a real estate index. This method combines radial and polynomial basis functions, which guarantees an interpolation scheme with the Kronecker property and effectively improves accuracy. An exponential change of variables, a mesh refinement algorithm and Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.
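
    Richardson extrapolation, one ingredient of the method, combines two estimates computed at step sizes h and h/2 to cancel the leading error term. A minimal sketch, assuming a known error order p and using a central difference as the toy quantity:

```python
import math

def richardson_extrapolate(A_h, A_h2, p):
    """Combine estimates at step h and h/2, assuming error ~ C*h^p:
    the leading error term cancels, leaving a higher-order error."""
    return (2**p * A_h2 - A_h) / (2**p - 1)

# Toy check: central difference of sin at x = 1.0 (true derivative cos(1)).
f, x, h = math.sin, 1.0, 0.1
d = lambda h: (f(x + h) - f(x - h)) / (2 * h)     # error O(h^2), so p = 2
approx = richardson_extrapolate(d(h), d(h / 2), p=2)
print(approx, "error:", abs(approx - math.cos(x)))
```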

  16. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, which is unreadable and meaningless so that it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, the Monoalphabetic algorithm and the XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logical XOR operation. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so the data integrity is still ensured.
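
    A minimal sketch of the described two-layer scheme follows; the keyword-based substitution table and the repeating two-byte XOR key are illustrative assumptions, not the paper's exact key handling:

```python
import string

def mono_table(keyword):
    """Keyword-based substitution alphabet: keyword letters first
    (deduplicated), then the remaining letters in alphabetical order."""
    seen, alphabet = [], string.ascii_uppercase
    for c in keyword.upper() + alphabet:
        if c in alphabet and c not in seen:
            seen.append(c)
    return dict(zip(alphabet, seen))

def super_encrypt(plaintext, keyword, xor_key):
    table = mono_table(keyword)
    # Layer 1: monoalphabetic substitution (non-letters pass through).
    substituted = "".join(table.get(c, c) for c in plaintext.upper())
    # Layer 2: XOR each byte with a repeating key.
    return bytes(b ^ xor_key[i % len(xor_key)]
                 for i, b in enumerate(substituted.encode()))

cipher = super_encrypt("HELLO WORLD", "SECRET", b"\x5a\xa5")
print(cipher.hex())
```

    Decryption reverses the layers: XOR with the same key, then apply the inverted substitution table.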

  17. Richland Operations (DOE-RL) Environmental Safety Health (ES and H) FY 2000 and FY 2001 Execution Commitment Summary

    Energy Technology Data Exchange (ETDEWEB)

    REEP, I.E.

    2000-12-01

    All sites in the U.S. Department of Energy (DOE) Complex prepare this report annually for the DOE Office of Environment, Safety and Health (EH). The purpose of this report is to provide a summary of the previous and current year's Environment, Safety and Health (ES&H) execution commitments and the Safety and Health (S&H) resources that support these activities. The fiscal year (FY) 2000 and 2001 information and data contained in the Richland Operations Environment, Safety and Health Fiscal Year 2002 Budget-Risk Management Summary (RL 2000a) were the basis for preparing this report. Fiscal year 2001 activities are based on the President's Amended Congressional Budget Request of $689.6 million for funding the Office of Environmental Management (EM); $44.0 million for Fast Flux Test Facility standby, less $7.0 million in anticipated DOE Headquarters holdbacks, for the Office of Nuclear Energy, Science and Technology (NE); and $55.3 million for Safeguards and Security (SAS). Any funding changes as a result of the Congressional appropriation process will be reflected in the Fiscal Year 2003 ES&H Budget-Risk Management Summary, to be issued in May 2001. This report provides the end-of-year status of FY 2000 ES&H execution commitments, including actual S&H expenditures, and describes planned FY 2001 ES&H execution commitments and the S&H resources needed to support those activities. This requirement is included in the ES&H guidance contained in the FY 2002 Field Budget Call (DOE 2000).

  18. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
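
    The final detection step the abstract describes can be illustrated with scikit-image's probabilistic Hough transform; the sketch below fabricates a synthetic trail and omits the paper's survey-specific object-removal and line-enhancement steps:

```python
import numpy as np
from skimage.draw import line
from skimage.transform import probabilistic_hough_line

# Synthetic "image" with a single bright trail; the paper's pipeline would
# first remove stars/galaxies and enhance the line before this step.
img = np.zeros((100, 100), dtype=bool)
rr, cc = line(10, 5, 90, 80)
img[rr, cc] = True

# Probabilistic Hough transform returns segments as ((x0, y0), (x1, y1)).
segments = probabilistic_hough_line(img, threshold=10,
                                    line_length=50, line_gap=3)
for (x0, y0), (x1, y1) in segments:
    print(f"segment from ({x0}, {y0}) to ({x1}, {y1})")
```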

  19. TEXPLORE temporal difference reinforcement learning for robots and time-constrained domains

    CERN Document Server

    Hester, Todd

    2013-01-01

    This book presents and develops new reinforcement learning methods that enable fast and robust learning on robots in real-time. Robots have the potential to solve many problems in society, because of their ability to work in dangerous places doing necessary jobs that no one wants or is able to do. One barrier to their widespread deployment is that they are mainly limited to tasks where it is possible to hand-program behaviors for every situation that may be encountered. For robots to meet their potential, they need methods that enable them to learn and adapt to novel situations that they were not programmed for. Reinforcement learning (RL) is a paradigm for learning sequential decision making processes and could solve the problems of learning and adaptation on robots. This book identifies four key challenges that must be addressed for an RL algorithm to be practical for robotic control tasks. These RL for Robotics Challenges are: 1) it must learn in very few samples; 2) it must learn in domains with continuou...

  20. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine, a trigonometric function. In the algorithm, random individuals are created, as many as the number of search agents, with uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section so that only the areas expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA gives better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and provides faster convergence, which increases the importance of this new method.
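
    A sketch of a Gold-SA-style update is given below; the exact published update rule is not quoted in the record, so the sine-weighted move and golden-section coefficients here are a reconstruction from the abstract and should be read as illustrative only:

```python
import math, random

def gold_sa_like(objective, dim, n_agents=20, iters=200, lo=-10.0, hi=10.0):
    """Sketch of a Gold-SA-style search (reconstructed from the abstract,
    not the published pseudocode): agents move toward the best-so-far
    solution with sine-weighted steps scaled by golden-section points."""
    tau = (math.sqrt(5) - 1) / 2                 # golden ratio conjugate
    a, b = -math.pi, math.pi
    x1 = a * tau + b * (1 - tau)                 # golden-section points of [a, b]
    x2 = a * (1 - tau) + b * tau
    agents = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_agents)]
    best = min(agents, key=objective)[:]
    for _ in range(iters):
        for agent in agents:
            r1 = random.uniform(0, 2 * math.pi)
            r2 = random.uniform(0, math.pi)
            for d in range(dim):
                agent[d] = (agent[d] * abs(math.sin(r1))
                            - r2 * math.sin(r1) * abs(x1 * best[d] - x2 * agent[d]))
                agent[d] = min(max(agent[d], lo), hi)   # keep inside bounds
        cand = min(agents, key=objective)
        if objective(cand) < objective(best):
            best = cand[:]
    return best

# Sphere function: global minimum 0 at the origin.
print(gold_sa_like(lambda v: sum(x * x for x in v), dim=5))
```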

  1. Toxicity of smelter slag-contaminated sediments from Upper Lake Roosevelt and associated metals to early life stage White Sturgeon (Acipenser transmontanus Richardson, 1836)

    Science.gov (United States)

    Little, E.E.; Calfee, R.D.; Linder, G.

    2014-01-01

    The toxicity of five smelter slag-contaminated sediments from the upper Columbia River, and of the metals associated with those slags (cadmium, copper, zinc), was evaluated in 96-h exposures of White Sturgeon (Acipenser transmontanus Richardson, 1836) at 8 and 30 days post-hatch. Leachates prepared from the slag-contaminated sediments were evaluated for toxicity. The leachates yielded a maximum aqueous copper concentration of 11.8 μg L⁻¹, observed in sediment collected at Dead Man's Eddy (DME), the sampling site nearest the smelter. All leachates were nonlethal to sturgeon at 8 days post-hatch (dph), but leachates from three of the five sediments were toxic to fish at 30 dph, suggesting that the latter life stage is highly vulnerable to metals exposure. Fish maintained consistent and prolonged contact with sediments and did not avoid contaminated sediments when given a choice between contaminated and uncontaminated sediments. White Sturgeon also failed to avoid aqueous copper (1.5–20 μg L⁻¹). In water-only 96-h exposures of 35 dph sturgeon to the three metals, similar toxicity was observed during exposure to water spiked with copper alone and in combination with cadmium and zinc. Cadmium ranging from 3.2 to 41 μg L⁻¹ or zinc ranging from 21 to 275 μg L⁻¹ was not lethal, but induced adverse behavioral changes including a loss of equilibrium. These results suggest that metals associated with smelter slags may pose an increased exposure risk to early life stage sturgeon if fish occupy areas contaminated by slags.

  2. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
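
    For orientation, the standard (unpartitioned) EM algorithm that the proposal extends looks as follows for a two-component 1-D Gaussian mixture; the partitioned variant would decompose this into a sequence of smaller EM runs:

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """Standard EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])        # crude initialisation
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        # (the shared 1/sqrt(2*pi) constant cancels in the normalisation).
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations.
        n_k = resp.sum(axis=0)
        pi = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gmm_1d(x))
```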

  3. 77 FR 1884 - Changes in Flood Elevation Determinations

    Science.gov (United States)

    2012-01-12

    [Garbled table excerpt: Federal Register changes in flood elevation determinations, listing affected communities, officials and publication outlets, including St. Lucie County, FL (case 11-04-4362P; The St. Lucie News; St. Lucie County Board Chairman Craft); the City of Las Cruces, NM (Las Cruces Sun-News; Mayor Miyagishima, 700 North Main Street, Las Cruces, NM 88004); and the City of New York, NY (case 02-2163P; The Chief; Mayor Bloomberg).]

  4. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the

  5. Field assessment of synthetic attractants and traps for the Old World screw-worm fly, Chrysomya bezziana.

    Science.gov (United States)

    Urech, R; Green, P E; Brown, G W; Spradbery, J P; Tozer, R S; Mayer, D G; Tack Kan, Y

    2012-07-06

    The performance of newly developed trapping systems for the Old World screw-worm fly, Chrysomya bezziana, has been determined in field trials on cattle farms in Malaysia. The efficacy of non-sticky traps and new attractants in trapping C. bezziana and non-target flies was compared with the standard sticky trap and Swormlure. The optimal trap was a modified LuciTrap® with a new attractant mixture, Bezzilure-2. The LuciTrap/Bezzilure-2 caught on average 3.1 times more C. bezziana than the sticky trap with Swormlure (P < …), and discriminated against Chrysomya megacephala and Chrysomya rufifacies with factors of 5.9 and 6.4, respectively. The LuciTrap also discriminates, with factors of 90 and 3.6, against Hemipyrellia sp. and sarcophagid flesh flies respectively, compared to the sticky trap. The LuciTrap/Bezzilure-2 system is recommended for screw-worm fly surveillance as it is more attractive and selective towards C. bezziana and provides flies of better quality for identification than the sticky trap. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  6. Manufacturing Scheduling Using Colored Petri Nets and Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Maria Drakaki

    2017-02-01

    Full Text Available Agent-based intelligent manufacturing control systems are capable of responding and adapting efficiently to environmental changes. Manufacturing system adaptation and evolution can be addressed with learning mechanisms that increase the intelligence of agents. In this paper a manufacturing scheduling method is presented based on Timed Colored Petri Nets (CTPNs) and reinforcement learning (RL). CTPNs model the manufacturing system and implement the scheduling. In the search for an optimal solution, a scheduling agent uses RL, in particular the Q-learning algorithm. A warehouse order-picking scheduling problem is presented as a case study to illustrate the method, and the proposed scheduling method is compared to existing methods. Simulation and state space results are used to evaluate performance and identify system properties.
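
    The Q-learning update at the core of the scheduling agent is the standard tabular rule; the sketch below abstracts the CTPN model into a generic environment interface (reset/step/actions), which is an assumption made for illustration:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    `env` is assumed to expose reset(), step(a) -> (s', r, done), actions(s)."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            if random.random() < epsilon:                 # explore
                a = random.choice(acts)
            else:                                         # exploit
                a = max(acts, key=lambda a: Q[(s, a)])
            s2, r, done = env.step(a)
            target = r + gamma * max((Q[(s2, a2)] for a2 in env.actions(s2)),
                                     default=0.0)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

class ChainEnv:
    """Toy 5-state chain standing in for the scheduling problem:
    move right to reach the goal state (reward 1)."""
    def reset(self):
        self.s = 0
        return self.s
    def actions(self, s):
        return [-1, +1]
    def step(self, a):
        self.s = max(0, min(4, self.s + a))
        done = self.s == 4
        return self.s, (1.0 if done else 0.0), done

Q = q_learning(ChainEnv())
print(max(Q.items(), key=lambda kv: kv[1]))   # best learned state-action pair
```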

  7. On the equivalence of cyclic and quasi-cyclic codes over finite fields

    Directory of Open Access Journals (Sweden)

    Kenza Guenda

    2017-07-01

    Full Text Available This paper studies the equivalence problem for cyclic codes of length $p^r$ and quasi-cyclic codes of length $p^rl$. In particular, we generalize the results of Huffman, Job, and Pless (J. Combin. Theory A, 62, 183--215, 1993), who considered the special case $p^2$. This is achieved by explicitly giving the permutations by which two cyclic codes of prime power length are equivalent. This allows us to obtain an algorithm which solves the problem of equivalence for cyclic codes of length $p^r$ in polynomial time. Further, we characterize the set by which two quasi-cyclic codes of length $p^rl$ can be equivalent, and prove that the affine group is one of its subsets.

  8. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  9. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  10. Model-Free Machine Learning in Biomedicine: Feasibility Study in Type 1 Diabetes.

    Directory of Open Access Journals (Sweden)

    Elena Daskalaki

    Full Text Available Although reinforcement learning (RL) is suitable for highly uncertain systems, the applicability of this class of algorithms to medical treatment may be limited by the patient variability which dictates individualised tuning for their usually multiple algorithmic parameters. This study explores the feasibility of RL in the framework of artificial pancreas development for type 1 diabetes (T1D). In this approach, an Actor-Critic (AC) learning algorithm is designed and developed for the optimisation of insulin infusion for personalised glucose regulation. AC optimises the daily basal insulin rate and insulin:carbohydrate ratio for each patient, on the basis of his/her measured glucose profile. Automatic, personalised tuning of AC is based on the estimation of information transfer (IT) from insulin to glucose signals. Insulin-to-glucose IT is linked to patient-specific characteristics related to total daily insulin needs and insulin sensitivity (SI). The AC algorithm is evaluated using an FDA-accepted T1D simulator on a large patient database under a complex meal protocol, meal uncertainty and diurnal SI variation. The results showed that 95.66% of time was spent in normoglycaemia in the presence of meal uncertainty and 93.02% when meal uncertainty and SI variation were simultaneously considered. The time spent in hypoglycaemia was 0.27% in both cases. The novel tuning method reduced the risk of severe hypoglycaemia, especially in patients with low SI.

  11. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of a data association algorithm for simultaneous localization and mapping used in determining the route of unmanned aerial vehicles (UAVs). Currently, such equipment is already widely used, but mainly controlled by a remote operator; an urgent task is to develop a control system that allows for autonomous flight. The SLAM (simultaneous localization and mapping) algorithm, which predicts the location, speed, flight parameters and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem with an improved ant algorithm. Data association for SLAM is meant to establish a matching set between observed landmarks and landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem for SLAM. But the traditional ant algorithm easily falls into local optima while finding routes. Adding random perturbations when updating the global pheromone avoids local optima, and setting limits on the pheromone along a route increases the search space with a reasonable amount of computation for finding the optimal route. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks that may be associated, by the criterion of individual compatibility (IC). The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and
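
    The pheromone mechanics the record builds on can be sketched in a few lines; the random perturbation here is a simplified stand-in for the paper's local-optimum escape, and the candidate/quality structure is purely illustrative:

```python
import random

def aco_step(pheromone, candidates, quality, rho=0.1, noise=0.01):
    """One simplified ACO iteration over a set of candidate associations.
    Selection is pheromone-proportional; the update evaporates, deposits on
    the chosen candidate, and adds a small random perturbation (a stand-in
    for the paper's escape from local optima)."""
    total = sum(pheromone[c] for c in candidates)
    r, acc, chosen = random.uniform(0, total), 0.0, candidates[-1]
    for c in candidates:                      # roulette-wheel selection
        acc += pheromone[c]
        if r <= acc:
            chosen = c
            break
    for c in candidates:                      # evaporation + perturbation
        pheromone[c] = (1 - rho) * pheromone[c] + random.uniform(0, noise)
    pheromone[chosen] += quality(chosen)      # deposit on the chosen candidate
    return chosen

pher = {c: 1.0 for c in "abcd"}
for _ in range(50):
    aco_step(pher, list("abcd"),
             quality=lambda c: {"a": 0.1, "b": 0.5, "c": 0.1, "d": 0.1}[c])
print(pher)   # pheromone should concentrate on 'b', the best candidate
```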

  12. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  13. Real time dose rate measurements with fiber optic probes based on the RL and OSL of beryllium oxide

    International Nuclear Information System (INIS)

    Teichmann, T.; Sponner, J.; Jakobi, Ch.; Henniger, J.

    2016-01-01

    This work covers the examination of fiber optical probes based on the radioluminescence and real time optically stimulated luminescence of beryllium oxide. Experiments are carried out to determine the fundamental dosimetric and temporal properties of the system and evaluate its suitability for dose rate measurements in brachytherapy and other applications using non-pulsed radiation fields. For this purpose the responses of the radioluminescence and optically stimulated luminescence signal have been investigated in the dose rate range of 20 mGy/h to 3.6 Gy/h and for doses of 1 mGy up to 6 Gy. Furthermore, a new, efficient analysis procedure, the double phase reference summing, is introduced, leading to a real time optically stimulated luminescence signal. This method allows a complete compensation of the stem effect during the measurement. In contrast to previous works, the stimulation of the 1 mm cylindrical beryllium oxide detectors is performed with a symmetric function during irradiation. The investigated dose rates range from 0.3 to 3.6 Gy/h. The real time optically stimulated luminescence signal of beryllium oxide shows a dependency on both the dose rate and the applied dose. To overcome the problem of dose dependency, further experiments using higher stimulation intensities have to follow. - Highlights: • RL and OSL measurements with BeO extended to low dose (rate) range. • A new method to obtain the real time OSL: Dual Phase Reference Summing. • Real time OSL signal shows both dose and dose rate dependency. • Real time OSL enables a complete discrimination of the stem effect.

  14. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  15. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].

  16. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The optimization problem with many local optima, known as the multimodal optimization problem, is the problem of finding the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), the Artificial Bee Colony (ABC) algorithm, etc. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence, and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares favorably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
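
    A minimal sketch of the two-step hybrid described above, with a heavily simplified stand-in for the ABC global phase (population size, iteration count, and the Rastrigin test function are illustrative choices, not the paper's settings):

        import numpy as np
        from scipy.optimize import minimize

        def rastrigin(x):
            # Standard multimodal benchmark with many local optima.
            return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        def abc_like_search(f, dim, n_food=20, iters=200, lo=-5.12, hi=5.12, seed=0):
            # Simplified ABC-style global phase: perturb each food source toward
            # or away from a random partner and keep improvements (greedy).
            rng = np.random.default_rng(seed)
            pop = rng.uniform(lo, hi, size=(n_food, dim))
            fit = np.apply_along_axis(f, 1, pop)
            for _ in range(iters):
                for i in range(n_food):
                    j = rng.integers(n_food)
                    phi = rng.uniform(-1, 1, dim)
                    cand = np.clip(pop[i] + phi * (pop[i] - pop[j]), lo, hi)
                    fc = f(cand)
                    if fc < fit[i]:
                        pop[i], fit[i] = cand, fc
            return pop[np.argmin(fit)]

        # Step 1: ABC-like global search; step 2: BFGS refinement from that point.
        x0 = abc_like_search(rastrigin, dim=2)
        result = minimize(rastrigin, x0, method='BFGS')
        print(x0, result.x, result.fun)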

  17. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    The firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases. If no such direction is generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
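
    A minimal sketch of the modified move for the brightest firefly described above (the step size and number of trial directions are illustrative assumptions):

        import numpy as np

        def move_brightest(x_best, f, step=0.1, n_dirs=10, seed=None):
            # Instead of a blind random walk, the brightest firefly samples
            # several random unit directions and moves along the one that
            # improves the objective (here: minimization); if none improves,
            # it stays put.
            rng = np.random.default_rng(seed)
            best_val, best_pos = f(x_best), x_best
            for _ in range(n_dirs):
                d = rng.standard_normal(x_best.shape)
                d /= np.linalg.norm(d)
                cand = x_best + step * d
                if f(cand) < best_val:
                    best_val, best_pos = f(cand), cand
            return best_pos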

  18. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality […] of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.

  19. A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization

    Directory of Open Access Journals (Sweden)

    Soroor Sarafrazi

    2015-07-01

    It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called “Kepler”, inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach to provide much stronger specialization in intensification and/or diversification. The performance of GSA–Kepler is evaluated by applying it to 14 benchmark functions with 20–1000 dimensions and to the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.

  20. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  1. Hybrid simulation of scatter intensity in industrial cone-beam computed tomography

    International Nuclear Information System (INIS)

    Thierry, R.; Miceli, A.; Hofmann, J.; Flisch, A.; Sennhauser, U.

    2009-01-01

    A cone-beam computed tomography (CT) system using a 450 kV X-ray tube has been developed to challenge the three-dimensional imaging of parts of the automotive industry in short acquisition time. Because the probability of detecting scattered photons is high regarding the energy range and the area of detection, a scattering correction becomes mandatory for generating reliable images with enhanced contrast detectability. In this paper, we present a hybrid simulator for the fast and accurate calculation of the scattering intensity distribution. The full acquisition chain, from the generation of a polyenergetic photon beam, through its interaction with the scanned object, to the energy deposit in the detector, is simulated. Object phantoms can be spatially described in the form of voxels, mathematical primitives, or CAD models. Uncollided radiation is treated with a ray-tracing method and scattered radiation is split into single and multiple scattering. The single scattering is calculated with a deterministic approach accelerated with a forced detection method. The residual noisy signal is subsequently deconvolved with the iterative Richardson-Lucy method. Finally, the multiple scattering is addressed with a coarse Monte Carlo (MC) simulation. The proposed hybrid method has been validated on aluminium phantoms with varying size and object-to-detector distance, and found to be in good agreement with the MC code Geant4. The acceleration achieved by the hybrid method over standard MC on a single projection is approximately three orders of magnitude.
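
    For reference, the Richardson-Lucy step used for such deconvolution is a simple multiplicative update; a minimal one-dimensional sketch (not the authors' implementation) is:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
            # Classic Richardson-Lucy iteration: multiplicative updates that
            # move toward the maximum-likelihood estimate under Poisson noise.
            estimate = np.full(observed.shape, observed.mean(), dtype=float)
            psf_flipped = psf[::-1]                  # adjoint of the blur (1-D)
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode='same')
                ratio = observed / np.maximum(blurred, eps)
                estimate *= fftconvolve(ratio, psf_flipped, mode='same')
            return estimate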

  2. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    The Blowfish algorithm is a strong, simple block cipher that encrypts data in 64-bit blocks. The key and S-box generation process in this algorithm requires time and memory, which makes the algorithm inconvenient for smart cards or applications that require changing the secret key frequently. In this paper a new key and S-box generation process is developed based on the Self Synchronization Stream Cipher (SSS) algorithm, whose key generation process is modified to be used with the Blowfish algorithm. Test results show that the generation process requires relatively little time and reasonably low memory, which enhances the algorithm and gives it the possibility of different usage.

  3. Monte Carlo algorithms with absorbing Markov chains: Fast local algorithms for slow dynamics

    International Nuclear Information System (INIS)

    Novotny, M.A.

    1995-01-01

    A class of Monte Carlo algorithms which incorporate absorbing Markov chains is presented. In a particular limit, the lowest order of these algorithms reduces to the n-fold way algorithm. These algorithms are applied to study the escape from the metastable state in the two-dimensional square-lattice nearest-neighbor Ising ferromagnet in an unfavorable applied field, and the agreement with theoretical predictions is very good. It is demonstrated that the higher-order algorithms can be many orders of magnitude faster than either the traditional Monte Carlo or n-fold way algorithms

  4. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of the K-shortest-paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can conveniently realize K-shortest-paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repetition, which indicates the superiority of the algorithm over conventional ones. Not only does it achieve better parallelism, the algorithm also prevents the premature phenomenon that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the aforementioned algorithm.

  5. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    The Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  6. Raman Lidar Profiles–Temperature (RLPROFTEMP) Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Newsom, RK; Sivaraman, C; McFarlane, SA

    2012-10-31

    The purpose of this document is to describe the Raman Lidar Profiles–Temperature (RLPROFTEMP) value-added product (VAP) and the procedures used to derive atmospheric temperature profiles from the raw RL measurements. Sections 2 and 4 describe the input and output variables, respectively. Section 3 discusses the theory behind the measurement and the details of the algorithm, including calibration and overlap correction.

  7. Reinforcement Learning–Based Energy Management Strategy for a Hybrid Electric Tracked Vehicle

    Directory of Open Access Journals (Sweden)

    Teng Liu

    2015-07-01

    This paper presents a reinforcement learning (RL)-based energy management strategy for a hybrid electric tracked vehicle. A control-oriented model of the powertrain and vehicle dynamics is first established. According to the sample information of the experimental driving schedule, statistical characteristics at various velocities are determined by extracting the transition probability matrix of the power request. Two RL-based algorithms, namely the Q-learning and Dyna algorithms, are applied to generate optimal control solutions. The two algorithms are simulated on the same driving schedule, and the simulation results are compared to clarify the merits and demerits of these algorithms. Although the Q-learning algorithm is faster (3 h) than the Dyna algorithm (7 h), its fuel consumption is 1.7% higher than that of the Dyna algorithm. Furthermore, the Dyna algorithm registers approximately the same fuel consumption as the dynamic programming-based global optimal solution, while its computational cost is substantially lower than that of stochastic dynamic programming.
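
    A minimal sketch of the two tabular updates being compared (the learning rate, discount factor, and dictionary-based model are illustrative assumptions, not the paper's settings):

        import numpy as np

        def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
            # Direct (model-free) Q-learning update from one real transition.
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

        def dyna_planning(Q, model, n_planning=10, alpha=0.1, gamma=0.95, seed=0):
            # Dyna additionally replays simulated transitions from a learned
            # model {(s, a): (r, s_next)}; these extra planning sweeps are what
            # trade the longer run time reported above for better fuel economy.
            rng = np.random.default_rng(seed)
            keys = list(model.keys())
            for _ in range(n_planning):
                s, a = keys[rng.integers(len(keys))]
                r, s_next = model[(s, a)]
                Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])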

  8. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. It can also be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
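
    A minimal sketch of a clipped-LMS-style weight update with three-level input quantization (the exact quantizer and threshold are defined in the paper; the form below is an assumption for illustration):

        import numpy as np

        def mclms_update(w, x, d, mu=0.01, threshold=0.5):
            # Three-level quantization of the input: q(x) in {-1, 0, +1}, with
            # samples below the clipping threshold zeroed out (assumed form).
            q = np.where(np.abs(x) < threshold, 0.0, np.sign(x))
            e = d - np.dot(w, x)          # error against the desired response d
            return w + mu * e * q, e      # the quantized input drives the update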

  9. Turbulent mixed buoyancy driven flow and heat transfer in lid driven enclosure

    International Nuclear Information System (INIS)

    Mishra, Ajay Kumar; Sharma, Anil Kumar

    2015-01-01

    Turbulent mixed buoyancy-driven flow and heat transfer of air in a lid-driven rectangular enclosure has been investigated for Grashof numbers in the range of 10^8 to 10^11 and for Richardson numbers of 0.1, 1, and 10. Steady two-dimensional Reynolds-averaged Navier-Stokes equations and conservation equations of mass and energy, coupled with the Boussinesq approximation, are solved. The spatial derivatives in the equations are discretized using the finite-element method. The SIMPLE algorithm is used to resolve pressure-velocity coupling. Turbulence is modeled with the k-ω closure model with physical boundary conditions for the flow and heat transfer. The predicted results are validated against benchmark solutions reported in the literature. Streamlines and temperature fields are presented to illustrate the flow and heat transfer characteristics. There is a marked reduction in the mean Nusselt number (about 58%) as the Richardson number increases from 0.1 to 10 for the case of Ra = 10^10, signifying that the reduction of the top lid velocity reduces turbulent mixing. (author)

  10. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  11. Denarration in Michael Haneke’s Funny Games [an audiovisual essay]

    NARCIS (Netherlands)

    Kiss, Miklós

    2016-01-01

    In literary fiction, Brian Richardson has discerned a troubling kind of incongruity that he labelled denarration. Denarration is “an intriguing and paradoxical narrative strategy that appears in a number of late modern and postmodern texts” (Richardson 2001, 168). Richardson coins the term for cases

  12. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas of adaptive and adaptable interactive systems, data mining, and other applications.

  13. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Nature-inspired metaheuristic algorithms study the emergent collective intelligence of groups of simple agents. The firefly algorithm is one such swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other metaheuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.

  14. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution

  15. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  16. A biallelic RFLP of the human alpha2-C4 adrenergic receptor gene (ADRA2RL2) localized on the short arm of chromosome 4 and encoding the putative alpha2B receptor is identified with Bsu 36 I using a 1.5 kb probe (p ADRA2RL2)

    Energy Technology Data Exchange (ETDEWEB)

    Hoeche, M.R.; Berrettini, W.H. (Clinical Neurogenetics Branch, Bethesda, MD (USA)); Regan, J.W. (Duke Univ. Medical Center, Durham, NC (USA))

    1989-12-11

    A 1.5 kb Eco RI cDNA fragment representing the human alpha2-C4 adrenergic receptor (AR) gene encoding the putative alpha2B-AR, containing approximately 1270 bp of the coding and 240 bp of the 3'-flanking region, inserted into pSP65, was used as a probe (p ADRA2RL2). This clone was obtained by screening a human kidney lambda GT10 cDNA library with the 0.95 kb Pst I restriction fragment derived from the coding block of the gene for the human platelet alpha2-AR. Hybridization of human genomic DNA digested with Bsu 36 I identifies a two-allele polymorphism with bands at 12 kb and 5.8 kb. Twenty unrelated North American caucasian subjects were evaluated, with frequencies of: A allele, 0.45; B allele, 0.55; heterozygosity (obs), 0.5. This alpha2-AR gene has been mapped in 59 CEPH reference pedigrees to the tip of the short arm of chromosome 4, just proximal to GB (4p16.3), reported to be linked to the Huntington's disease gene. Codominant inheritance was observed in seven families with two and three generations, respectively. The number of meioses scored was 95.

  17. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if directed cyclic graphs are used, the algorithm need not check the binding order; the OLU algorithm can then also be applied to infinite tree data structures, and higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.

  18. Rocking the boat in Pärnu [Pärnus paati kõigutamas] / Heie Treier

    Index Scriptorium Estoniae

    Treier, Heie, 1963-

    1999-01-01

    Seminar 'Censorship and self-censorship today, 10 years later', 18-20 August at the Museum of New Art (Uue Kunsti Muuseum). On the presentations of E. Lucie-Smith, J. Vidal-Hall, E. Ohlson, and L. Lapin. The opinions of E. Lucie-Smith and J. Vidal-Hall on Ly Lestberg's photographs depicting two naked boys. On the Museum of New Art exhibition 'Man and Woman (Man and Man)', where the homoerotic works caused problems.

  19. Off-policy reinforcement learning for H∞ control design.

    Science.gov (United States)

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen

    2015-01-01

    The H∞ control design problem is considered for nonlinear systems with an unknown internal system model. It is known that the nonlinear H∞ control problem can be transformed into solving the so-called Hamilton-Jacobi-Isaacs (HJI) equation, a nonlinear partial differential equation that is generally impossible to solve analytically. Even worse, model-based approaches cannot be used for approximately solving the HJI equation when the accurate system model is unavailable or costly to obtain in practice. To overcome these difficulties, an off-policy reinforcement learning (RL) method is introduced to learn the solution of the HJI equation from real system data instead of a mathematical system model, and its convergence is proved. In the off-policy RL method, the system data can be generated with arbitrary policies rather than the evaluating policy, which is extremely important and promising for practical systems. For implementation purposes, a neural network (NN)-based actor-critic structure is employed and a least-squares NN weight update algorithm is derived based on the method of weighted residuals. Finally, the developed NN-based off-policy RL method is tested on a linear F16 aircraft plant, and further applied to a rotational/translational actuator system.

  20. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First, we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them, which the user enters through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
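
    A minimal power-iteration sketch of the PageRank computation the thesis visualizes (the damping factor and tolerance are the usual illustrative choices, not values from the thesis):

        import numpy as np

        def pagerank(links, d=0.85, tol=1e-8):
            # `links` maps each page to the list of pages it links to.
            pages = sorted(links)
            n = len(pages)
            idx = {p: i for i, p in enumerate(pages)}
            M = np.zeros((n, n))          # column-stochastic link matrix
            for p, outs in links.items():
                if outs:
                    for q in outs:
                        M[idx[q], idx[p]] = 1.0 / len(outs)
                else:                     # dangling page: distribute evenly
                    M[:, idx[p]] = 1.0 / n
            r = np.full(n, 1.0 / n)
            while True:                   # iterate until the ranks stabilize
                r_new = (1 - d) / n + d * (M @ r)
                if np.abs(r_new - r).sum() < tol:
                    return dict(zip(pages, r_new))
                r = r_new

        print(pagerank({'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}))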

  1. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    With the development of society and rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms, and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning technology offers system stability, small errors, and low cost, and its location algorithms are the focus of this study. This article analyzes the layers of RFID positioning methods and algorithms. First, several common basic RFID methods are introduced; second, higher-accuracy network-based location methods are discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, the algorithms of RFID location technology are summarized, pointing out deficiencies in the algorithms and putting forward requirements for follow-up study, with a vision of better future RFID positioning technology.

  2. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem, and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  3. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  4. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. The goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; e.g., in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm, and sketch how it can be optimized to meet the required response time. Performance is compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right; e.g., optic nerve m gives optic nerve meningioma, but opt ner me does not. We conducted a user experiment in which 12 participants were invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology.
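
    A minimal sketch of the multi-prefix matching rule described above (the optimizations the paper applies to meet response-time requirements are omitted):

        def multi_prefix_match(query, term):
            # True iff every token of `query` is a prefix of some distinct word
            # of `term`; e.g. "opt ner me" matches "optic nerve meningioma",
            # which plain left-to-right prefix matching would miss.
            words = term.lower().split()
            for token in query.lower().split():
                hit = next((i for i, w in enumerate(words) if w.startswith(token)), None)
                if hit is None:
                    return False
                del words[hit]            # each word may satisfy only one token
            return True

        assert multi_prefix_match("opt ner me", "optic nerve meningioma")
        assert not multi_prefix_match("opt ner me", "optic nerve")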

  5. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as depth-first search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase appreciably with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  6. Application of Acoustic and Optic Methods for Estimating Suspended-Solids Concentrations in the St. Lucie River Estuary, Florida

    Science.gov (United States)

    Patino, Eduardo; Byrne, Michael J.

    2004-01-01

    Acoustic and optic methods were applied to estimate suspended-solids concentrations in the St. Lucie River Estuary, southeastern Florida. Acoustic Doppler velocity meters were installed at the North Fork, Speedy Point, and Steele Point sites within the estuary. These sites provide varying flow, salinity, water-quality, and channel cross-sectional characteristics. The monitoring site at Steele Point was not used in the analyses because repeated instrument relocations (due to bridge construction) prevented a sufficient number of samples from being collected at the various locations. Acoustic and optic instruments were installed to collect water velocity, acoustic backscatter strength (ABS), and turbidity data that were used to assess the feasibility of estimating suspended-solids concentrations in the estuary. Other data collected at the monitoring sites include tidal stage, salinity, temperature, and periodic discharge measurements. Regression analyses were used to determine the relations of suspended-solids concentration to ABS and suspended-solids concentration to turbidity at the North Fork and Speedy Point sites. For samples used in regression analyses, measured suspended-solids concentrations at the North Fork and Speedy Point sites ranged from 3 to 37 milligrams per liter, and organic content ranged from 50 to 83 percent. Corresponding salinity for these samples ranged from 0.12 to 22.7 parts per thousand, and corresponding temperature ranged from 19.4 to 31.8 °C. Relations determined using this technique are site specific and only describe suspended-solids concentrations at locations where data were collected. The suspended-solids concentration to ABS relation resulted in correlation coefficients of 0.78 and 0.63 at the North Fork and Speedy Point sites, respectively. The suspended-solids concentration to turbidity relation resulted in correlation coefficients of 0.73 and 0.89 at the North Fork and Speedy Point sites, respectively. The adequacy of the

  7. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  8. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of computational geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.

  9. An algorithm for real-time dosimetry in intensity-modulated radiation therapy using the radioluminescence signal from Al2O3:C

    DEFF Research Database (Denmark)

    Andersen, C.E.; Marckmann, C.J.; Aznar, Marianne

    2006-01-01

    radiation beams. The dosimetry system has been used for dose measurements in a phantom during an intensity-modulated radiation therapy (IMRT) treatment with 6 MV photons. The RL measurement results are in excellent agreement (i.e. within 1%) with both the OSL results and the dose delivered according...

  10. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm arising from the finite resolution of sampled systems. Experimental control results using the original secondary path model and a modified secondary path model, for both the previous implementation of EE-FXLMS and the genetic algorithm implementation, are compared.

  11. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler's algorithm, Kraitchik's, and variants of Pollard's algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
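
    A minimal sketch of the deterministic baseline, Pollard's rho with Floyd cycle detection (the constant in the iteration polynomial is an arbitrary starting choice):

        from math import gcd

        def pollard_rho(n, c=1):
            # Finds a nontrivial factor of a composite n via the rho cycle of
            # x -> x^2 + c (mod n); retries with a new c if the walk degenerates.
            if n % 2 == 0:
                return 2
            x = y = 2
            d = 1
            while d == 1:
                x = (x * x + c) % n            # tortoise: one step
                y = (y * y + c) % n            # hare: two steps
                y = (y * y + c) % n
                d = gcd(abs(x - y), n)
            return d if d != n else pollard_rho(n, c + 1)

        print(pollard_rho(8051))               # 8051 = 83 * 97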

  12. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The final results conclude that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
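
    The opposition-based learning step itself is simple; a minimal sketch (the sphere test function and population size are illustrative assumptions):

        import numpy as np

        def opposite_population(pop, lo, hi):
            # OBL: for each candidate x in [lo, hi]^d, also consider its
            # opposite point lo + hi - x.
            return lo + hi - pop

        rng = np.random.default_rng(0)
        pop = rng.uniform(-5, 5, size=(8, 2))
        opp = opposite_population(pop, -5.0, 5.0)
        f = lambda X: np.sum(X**2, axis=1)       # sphere test function
        merged = np.vstack([pop, opp])           # evaluate both halves and
        survivors = merged[np.argsort(f(merged))[:len(pop)]]  # keep the fitter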

  13. Opposite Degree Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Guang Yue

    2015-12-01

    The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from the design of neural networks, genetic algorithms, and clustering analysis algorithms. The OD algorithm is divided into two sub-algorithms, namely the opposite degree - numerical computation (OD-NC) algorithm and the opposite degree - classification computation (OD-CC) algorithm.

  14. Optimal Bidding and Operation of a Power Plant with Solvent-Based Carbon Capture under a CO2 Allowance Market: A Solution with a Reinforcement Learning-Based Sarsa Temporal-Difference Algorithm

    Directory of Open Access Journals (Sweden)

    Ziang Li

    2017-04-01

    In this paper, a reinforcement learning (RL)-based Sarsa temporal-difference (TD) algorithm is applied to search for a unified bidding and operation strategy for a coal-fired power plant with monoethanolamine (MEA)-based post-combustion carbon capture under different carbon dioxide (CO2) allowance market conditions. The objective of the decision maker for the power plant is to maximize the discounted cumulative profit during the power plant lifetime. Two constraints are considered in the objective formulation. First, the tradeoff between the energy-intensive carbon capture and the electricity generation should be made under presumed fixed fuel consumption. Second, the CO2 allowances purchased from the CO2 allowance market should be approximately equal to the quantity of CO2 emitted by power generation. Three case studies are demonstrated thereafter. In the first case, we show the convergence of the Sarsa TD algorithm and find a deterministic optimal bidding and operation strategy. In the second case, compared with the independently designed operation and bidding strategies discussed in most of the relevant literature, the Sarsa TD-based unified bidding and operation strategy with time-varying flexible market-oriented CO2 capture levels is demonstrated to help the power plant decision maker gain a higher discounted cumulative profit. In the third case, a competitor operating another power plant identical to the preceding plant is considered under the same CO2 allowance market. The competitor also has carbon capture facilities but applies a different strategy to earn profits. The discounted cumulative profits of the two power plants are then compared, exhibiting the competitiveness of the power plant using the unified bidding and operation strategy explored by the Sarsa TD algorithm.
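
    A minimal sketch of the on-policy Sarsa(0) update at the core of the method (the learning rate and discount factor are illustrative; the paper's state, action, and reward design is far richer):

        def sarsa_step(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
            # Q is a dict keyed by (state, action), defaulting to 0.0.
            # Sarsa bootstraps on the action actually chosen next (on-policy),
            # unlike Q-learning, which bootstraps on the greedy action.
            td_target = r + gamma * Q.get((s_next, a_next), 0.0)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))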

  15. On K-Line and K x K Block Iterative Schemes for a Problem Arising in 3-D Elliptic Difference Equations.

    Science.gov (United States)

    1980-01-01

    Parter, S.; Steuerwalt, M. (report CS-TR-374; the scanned abstract is largely illegible). Legible fragment: "... (4b) are obtained from the well-known algorithm for solving diagonally dominant tridiagonal systems; see [16, 10]. The monotonicity of the Ej and the ..."
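
    The "well-known algorithm" referenced in the legible fragment is presumably the Thomas algorithm; a minimal sketch:

        import numpy as np

        def thomas(a, b, c, d):
            # Solves a tridiagonal system: a = sub-, b = main-, c = super-
            # diagonal (a[0] and c[-1] unused), d = right-hand side. Stable
            # without pivoting when the matrix is diagonally dominant.
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                  # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):         # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x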

  16. An Online Q-learning Based Multi-Agent LFC for a Multi-Area Multi-Source Power System Including Distributed Energy Resources

    Directory of Open Access Journals (Sweden)

    H. Shayeghi

    2017-12-01

    This paper presents an online two-stage Q-learning based multi-agent (MA) controller for load frequency control (LFC) in an interconnected multi-area multi-source power system integrated with distributed energy resources (DERs). The proposed control strategy consists of two stages. The first stage employs a PID controller whose parameters are designed using the sine cosine optimization (SCO) algorithm and are fixed. The second is a reinforcement learning (RL)-based supplementary controller that has a flexible structure and improves the output of the first stage adaptively based on the system's dynamical behavior. Because the RL paradigm is integrated with a PID controller in this strategy, it is called the RL-PID controller. The primary motivation for the integration of the RL technique with the PID controller is to make the existing local controllers in industry compatible, reducing control efforts and system costs. This novel control strategy combines the advantages of the PID controller with the adaptive behavior of MA to achieve the desired level of robust performance under different kinds of uncertainties caused by the stochastic power generation of DERs, plant operational condition changes, and physical nonlinearities of the system. The suggested decentralized controller is composed of autonomous intelligent agents, which learn the optimal control policy from interaction with the system. These agents update their knowledge about the system dynamics continuously to achieve good frequency oscillation damping under various severe disturbances without any prior knowledge of them. This leads to an adaptive control structure to solve the LFC problem in a multi-source power system with stochastic DERs. The performance of the RL-PID controller in comparison with traditional PID and fuzzy-PID controllers is verified in a multi-area power system integrated with DERs through several performance indices.

  17. Revisiting random walk based sampling in networks: evasion of burn-in period and frequent regenerations.

    Science.gov (United States)

    Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K

    2018-01-01

    In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS) which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) allow to avoid burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time. Our

  18. Multiparameter extrapolation and deflation methods for solving equation systems

    Directory of Open Access Journals (Sweden)

    A. J. Hughes Hallett

    1984-01-01

    Most models in economics and the applied sciences are solved by first order iterative techniques, usually those based on the Gauss-Seidel algorithm. This paper examines the convergence of multiparameter extrapolations (accelerations) of first order iterations, as an improved approximation to the Newton method for solving arbitrary nonlinear equation systems. It generalises my earlier results on single parameter extrapolations. Richardson's generalised method and the deflation method for detecting successive solutions in nonlinear equation systems are also presented as multiparameter extrapolations of first order iterations. New convergence results are obtained for those methods.
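
    A minimal sketch of a single-parameter extrapolation of a first-order iteration x <- G(x) (the paper's multiparameter version replaces the scalar alpha below with several adaptively chosen parameters):

        import numpy as np

        def extrapolated_iteration(G, x0, alpha=0.5, tol=1e-10, max_iter=1000):
            # x_new = x + alpha * (G(x) - x); alpha = 1 recovers the plain
            # Gauss-Seidel-type step, other values can accelerate convergence.
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                step = G(x) - x
                if np.linalg.norm(step) < tol:
                    break
                x = x + alpha * step
            return x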

  19. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  20. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.

  1. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Although the concept of algorithms has been established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration and it is indicated by this trajectory.

  2. Role of O2 in the Growth of Rhizobium leguminosarum bv. viciae 3841 on Glucose and Succinate

    OpenAIRE

    Wheatley, Rachel M.; Ramachandran, Vinoy K.; Geddes, Barney A.; Perry, Benjamin J.; Yost, Chris K.; Poole, Philip S.

    2016-01-01

    Insertion sequencing (INSeq) analysis of Rhizobium leguminosarum bv. viciae 3841 (Rlv3841) grown on glucose or succinate at both 21% and 1% O2 was used to understand how O2 concentration alters metabolism. Two transcriptional regulators were required for growth on glucose (pRL120207 [eryD] and RL0547 [phoB]), five were required on succinate (pRL100388, RL1641, RL1642, RL3427, and RL4524 [ecfL]), and three were required on 1% O2 (pRL110072, RL0545 [phoU], and RL4042). A novel toxin-antitoxin s...

  3. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    A method in which the real-coded quantum-inspired genetic algorithm (RQGA) is used to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method easily falls into local optima during learning. The quantum genetic algorithm (QGA) has good directional global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. So RQGA is introduced to explore the search space, and an improved varied learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm rapidly converges to solutions that satisfy the constraint conditions.

  4. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)

  5. Advanced Source Deconvolution Methods for Compton Telescopes

    Science.gov (United States)

    Zoglauer, Andreas

    list-mode approach to get the best angular resolution, to achieve both at the same time! The second open question concerns the best deconvolution algorithm. For example, several algorithms have been investigated for the famous COMPTEL 26Al map, which resulted in significantly different images. There is no clear answer as to which approach provides the most accurate result, largely due to the fact that detailed simulations to test and verify the approaches and their limitations were not possible at that time. This has changed, and therefore we propose to evaluate several deconvolution algorithms (e.g. Richardson-Lucy, Maximum-Entropy, MREM, and stochastic origin ensembles) with simulations of typical observations to find the best algorithm for each application and for each stage of the hybrid reconstruction approach. We will adapt, implement, and fully evaluate the hybrid source reconstruction approach as well as the various deconvolution algorithms with simulations of synthetic benchmarks and simulations of key science objectives such as diffuse nuclear line science and continuum science of point sources, as well as with calibrations/observations of the COSI balloon telescope. This proposal for "development of new data analysis methods for future satellite missions" will significantly improve the source deconvolution techniques for modern Compton telescopes and will allow unlocking the full potential of envisioned satellite missions using Compton-scatter technology in astrophysics, heliophysics and planetary sciences, and ultimately help them to "discover how the universe works" and to better "understand the sun". Ultimately it will also benefit ground-based applications such as nuclear medicine and environmental monitoring, as all developed algorithms will be made publicly available within the open-source Compton telescope analysis framework MEGAlib.

  6. Off-Policy Reinforcement Learning: Optimal Operational Control for Two-Time-Scale Industrial Processes.

    Science.gov (United States)

    Li, Jinna; Kiumarsi, Bahare; Chai, Tianyou; Lewis, Frank L; Fan, Jialu

    2017-12-01

    Industrial flow lines are composed of unit processes operating on a fast time scale and performance measurements known as operational indices measured at a slower time scale. This paper presents a model-free optimal solution to a class of two time-scale industrial processes using off-policy reinforcement learning (RL). First, the lower-layer unit process control loop with a fast sampling period and the upper-layer operational index dynamics at a slow time scale are modeled. Second, a general optimal operational control problem is formulated to optimally prescribe the set-points for the unit industrial process. Then, a zero-sum game off-policy RL algorithm is developed to find the optimal set-points by using data measured in real-time. Finally, a simulation experiment is employed for an industrial flotation process to show the effectiveness of the proposed method.
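
    As a point of reference, the sketch below shows plain tabular Q-learning, the textbook off-policy method on which such schemes build; the paper's zero-sum game formulation for two-time-scale dynamics is not reproduced here, and the toy chain environment is an illustrative assumption.

      # Off-policy illustration: the behaviour policy is uniform random,
      # but the update bootstraps from the greedy (target) policy.
      import numpy as np

      n_states, n_actions, goal = 8, 2, 7
      rng = np.random.default_rng(1)
      Q = np.zeros((n_states, n_actions))
      alpha, gamma = 0.1, 0.95

      def step(s, a):                      # a = 0: move left, a = 1: move right
          s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
          return s2, (1.0 if s2 == goal else 0.0)

      for episode in range(300):
          s = 0
          while s != goal:
              a = int(rng.integers(n_actions))   # exploratory behaviour policy
              s2, r = step(s, a)
              # max over next actions = greedy target policy: off-policy learning
              Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
              s = s2

      # Greedy action per state: 1 (go right) everywhere except the absorbing goal.
      print(np.argmax(Q, axis=1))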

  7. A Theoretical Analysis of Multicultural Education and Teaching Competency (Çokkültürlü Eğitim ve Öğretim Yeterliğine İlişkin Teorik Bir Çözümleme)

    OpenAIRE

    KILIÇOĞLU, Gökhan

    2015-01-01

    Since the beginning of the twentieth century, the world's ethnic diversity has grown through migration movements; today the world is globalizing and has come to harbor many differences within it. The globalizing world therefore requires teachers to hold a multicultural and universal perspective, taking into account the backgrounds and experiences of students from different cultural origins. In particular, in societies that harbor different cultures, students with differences...

  8. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state of the art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  9. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)

  10. Deterrence from Cold War to Long War: Lessons from Six Decades of RAND Research

    Science.gov (United States)

    2008-01-01

    that humor was not absent from RAND analysis, gave the first system in this paper the code-name Lucy, making reference to the Beatles' song "Lucy in...

  11. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Science.gov (United States)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but at competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  12. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  13. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few...... algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...

  14. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))

  15. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. It is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it comes as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than the classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan, and its optimization by Korepin called the GRK algorithm, are also discussed.
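
    A minimal statevector sketch of Grover's single-target search follows: the oracle flips the sign of the marked amplitude and the diffusion step inverts all amplitudes about their mean; after roughly (pi/4)*sqrt(N) rounds the marked item dominates. The 32-item database and marked index below are illustrative.

      # Statevector simulation of Grover search on N = 2^n items.
      import numpy as np

      n, target = 5, 19                     # 32-item database, marked index 19
      N = 2 ** n
      amp = np.full(N, 1.0 / np.sqrt(N))    # uniform superposition

      rounds = int(np.floor(np.pi / 4 * np.sqrt(N)))
      for _ in range(rounds):
          amp[target] *= -1.0               # oracle: phase flip on the target
          amp = 2.0 * amp.mean() - amp      # diffusion: inversion about the mean

      probs = amp ** 2
      print(rounds, np.argmax(probs), round(float(probs[target]), 3))  # 4 19 ~1.0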

  16. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)

  17. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithms perform better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.
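
    The named schemes follow the familiar "rand/1"-style donor construction of the differential-evolution family; the sketch below shows that construction on a simple sphere function, with greedy replacement standing in for the paper's full crossover and control-parameter machinery. The scale factor and population size are assumed values.

      # "rand/1"-style donor step with greedy one-to-one selection.
      import numpy as np

      rng = np.random.default_rng(2)

      def sphere(x):
          return np.sum(x ** 2, axis=-1)

      pop = rng.uniform(-5, 5, size=(30, 10))
      F = 0.8                                        # assumed scale factor
      for gen in range(200):
          idx = np.array([rng.choice(30, size=3, replace=False) for _ in range(30)])
          r1, r2, r3 = pop[idx[:, 0]], pop[idx[:, 1]], pop[idx[:, 2]]
          donors = r1 + F * (r2 - r3)                # "rand/1" donor vectors
          better = sphere(donors) < sphere(pop)      # keep the better of the pair
          pop[better] = donors[better]

      print(round(float(sphere(pop).min()), 6))      # best value shrinks towards 0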

  18. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    Science.gov (United States)

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step in any UWB radar imaging system, and the artifact removal algorithms considered so far have been shown to be ineffective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective in achieving good localization accuracy and fewer false positives. However, the main contribution is the proposal of an artifact removal algorithm based on statistical methods, which achieves even better performance at much lower computational complexity.

  19. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  20. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  1. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  2. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  3. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases, to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to exact-repeat and reverse-repeat fragments of the DNA sequence is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
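
    For orientation, the sketch below shows the fixed two-bits-per-base packing that such schemes take as their baseline; the paper's variable bit codes for exact and reverse repeats, which push below 2 bits/base, are not reproduced.

      # Baseline 2-bit packing of DNA bases (A, C, G, T); illustrative only.
      CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
      BASE = {v: k for k, v in CODE.items()}

      def pack(seq: str) -> bytes:
          bits = 0
          for ch in seq:
              bits = (bits << 2) | CODE[ch]
          n = len(seq)
          # 4-byte length header followed by the packed payload
          return n.to_bytes(4, "big") + bits.to_bytes((2 * n + 7) // 8, "big")

      def unpack(blob: bytes) -> str:
          n = int.from_bytes(blob[:4], "big")
          bits = int.from_bytes(blob[4:], "big")
          return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

      s = "ACGTACGTTGCA"
      packed = pack(s)
      assert unpack(packed) == s
      print(len(s), "bases ->", len(packed) - 4, "payload bytes")  # 12 bases -> 3 bytes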

  4. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
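
    A compact textbook variant of the algorithm (delta = 3/4, exact rational arithmetic) is sketched below for small integer lattices; it is far simpler, and far slower, than the verified and optimised versions the paper formalises.

      # Textbook LLL: size reduction plus the Lovász condition, with the
      # Gram-Schmidt data recomputed from scratch for clarity, not speed.
      from fractions import Fraction

      def lll(basis, delta=Fraction(3, 4)):
          B = [[Fraction(x) for x in row] for row in basis]
          n = len(B)

          def dot(u, v):
              return sum(a * b for a, b in zip(u, v))

          def gso():
              Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
              for i in range(n):
                  v = B[i][:]
                  for j in range(i):
                      mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                      v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
                  Bs.append(v)
              return Bs, mu

          Bs, mu = gso()
          k = 1
          while k < n:
              for j in range(k - 1, -1, -1):       # size-reduce b_k against b_j
                  q = round(dot(B[k], Bs[j]) / dot(Bs[j], Bs[j]))
                  if q:
                      B[k] = [a - q * b for a, b in zip(B[k], B[j])]
              Bs, mu = gso()
              if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
                  k += 1                            # Lovász condition holds
              else:
                  B[k], B[k - 1] = B[k - 1], B[k]   # swap and step back
                  Bs, mu = gso()
                  k = max(k - 1, 1)
          return [[int(x) for x in row] for row in B]

      # Classic example; reduces to a short, near-orthogonal basis such as
      # [[0, 1, 0], [1, 0, 1], [-1, 0, 2]].
      print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))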

  5. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure matches the structure of the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are usually defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are the variables bound by those operators. The language of algorithmic networks is highly expressive: the algorithms it can represent include the class of all random algorithms. Existing modeling automation systems based on algorithmic networks mainly use operators that work with real numbers. Although this reduces their expressive power, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, their monitoring is based on analysis of gaps and deadlines in the graphs, with no analysis or prediction of schedule execution. The library described here is designed to build such predictive models: from the specified source data a set of projections is obtained, from which one is chosen and taken as the new plan.

  6. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given, as well as the application of the algorithm to quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  7. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  8. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and its initial positions are sometimes bad; after a bad initialization, the k-means algorithm can easily yield a poor local optimum. In this paper, we modified the global k-means algorithm to first eliminate the singleton clusters, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, giving the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
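
    The incremental "global" strategy can be sketched as follows: grow from one to K centres, trying candidate data points as the new centre and keeping the best k-means refinement. The MinMax weighting the paper adds is omitted here, and sampling only 20 candidate points (the full method tries every point) is a simplification.

      # Incremental global k-means sketch on three synthetic Gaussian blobs.
      import numpy as np

      rng = np.random.default_rng(3)

      def lloyd(X, centres, iters=50):
          for _ in range(iters):
              labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
              centres = np.array([X[labels == k].mean(0) if np.any(labels == k)
                                  else centres[k] for k in range(len(centres))])
          return centres, ((X - centres[labels]) ** 2).sum()

      def global_kmeans(X, K, n_candidates=20):
          centres = X.mean(0, keepdims=True)          # k = 1: the grand mean
          for k in range(2, K + 1):
              best = None
              # the full method tries every point; a subsample keeps this short
              for x in X[rng.choice(len(X), n_candidates, replace=False)]:
                  c, cost = lloyd(X, np.vstack([centres, x]))
                  if best is None or cost < best[1]:
                      best = (c, cost)
              centres = best[0]
          return centres

      X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ([0, 0], [3, 0], [0, 3])])
      print(np.round(global_kmeans(X, 3), 2))         # centres near the true means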

  9. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  10. Learning algorithms and automatic processing of languages; Algorithmes a apprentissage et traitement automatique des langues

    Energy Technology Data Exchange (ETDEWEB)

    Fluhr, Christian Yves Andre

    1977-06-15

    This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to the automatic processing of languages. The author first briefly describes some mechanisms of human intelligence in order to describe how these mechanisms are simulated on a computer. He outlines the specific role of learning in various manifestations of intelligence. Then, based on Markov's theory of algorithms, the author discusses the notion of a learning algorithm. Two main types of learning algorithms are then addressed: firstly, an 'algorithm-teacher dialogue' type sanction-based algorithm, which aims at learning how to solve grammatical ambiguities in submitted texts; secondly, an algorithm related to a document system which automatically structures semantic data obtained from a set of texts so as to be able to answer, by reference, any question on the content of these texts.

  11. Improving predictions of the effects of extreme events, land use, and climate change on the hydrology of watersheds in the Philippines

    Science.gov (United States)

    Benavidez, Rubianca; Jackson, Bethanna; Maxwell, Deborah; Paringit, Enrico

    2016-05-01

    Due to its location within the typhoon belt, the Philippines is vulnerable to tropical cyclones that can cause destructive floods. Climate change is likely to exacerbate these risks through increases in tropical cyclone frequency and intensity. To protect populations and infrastructure, disaster risk management in the Philippines focuses on real-time flood forecasting and structural measures such as dikes and retaining walls. Real-time flood forecasting in the Philippines mostly utilises two models from the Hydrologic Engineering Center (HEC): the Hydrologic Modeling System (HMS) for watershed modelling, and the River Analysis System (RAS) for inundation modelling. This research focuses on using non-structural measures for flood mitigation, such as changing land use management or watershed rehabilitation. This is being done by parameterising and applying the Land Utilisation and Capability Indicator (LUCI) model to the Cagayan de Oro watershed (1400 km2) in southern Philippines. The LUCI model is capable of identifying areas providing ecosystem services such as flood mitigation and agricultural productivity, and analysing trade-offs between services. It can also assess whether management interventions could enhance or degrade ecosystem services at fine spatial scales. The LUCI model was used to identify areas within the watershed that are providing flood mitigating services and areas that would benefit from management interventions. For the preliminary comparison, LUCI and HEC-HMS were run under the same scenario: baseline land use and the extreme rainfall event of Typhoon Bopha. The hydrographs from both models were then input to HEC-RAS to produce inundation maps. The novelty of this research is two-fold: (1) this type of ecosystem service modelling has not been carried out in the Cagayan de Oro watershed; and (2) this is the first application of the LUCI model in the Philippines. Since this research is still ongoing, the results presented in this paper are

  12. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
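
    As a taste of the chapter's starting point, the sketch below implements greedy vertex colouring in a fixed order, which uses at most Delta + 1 colours on a graph of maximum degree Delta.

      # Greedy vertex colouring: give each vertex the smallest colour
      # not used by an already-coloured neighbour.
      def greedy_colouring(adj):
          colour = {}
          for v in adj:                                  # order affects quality
              used = {colour[u] for u in adj[v] if u in colour}
              colour[v] = next(c for c in range(len(adj)) if c not in used)
          return colour

      # 5-cycle: needs 3 colours; greedy in this order finds them.
      C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
      print(greedy_colouring(C5))    # {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}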

  13. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of the algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain, because the landscapes/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  14. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ..... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc. 1989.

  15. The Lund University Checklist for Incipient Exhaustion: a cross-sectional comparison of a new instrument with similar contemporary tools

    Directory of Open Access Journals (Sweden)

    Roger Persson

    2016-04-01

    Full Text Available Abstract Background Stress-related health problems (e.g., work-related exhaustion) are a societal concern in many postindustrial countries. Experience suggests that early detection and intervention are crucial in preventing long-term negative consequences. In the present study, we benchmark a new tool for early identification of work-related exhaustion, the Lund University Checklist for Incipient Exhaustion (LUCIE), against other contextually relevant inventories and two contemporary Swedish screening scales. Methods A cross-sectional population sample (n = 1355) completed: LUCIE, Karolinska Exhaustion Disorder Scale (KEDS), Self-reported Exhaustion Disorder Scale (s-ED), Shirom-Melamed Burnout Questionnaire (SMBQ), Utrecht Work Engagement Scale (UWES-9), Job Content Questionnaire (JCQ), Big Five Inventory (BFI), and items concerning work-family interference and stress in private life. Results Increasing signs of exhaustion on LUCIE were positively associated with signs of exhaustion on KEDS and s-ED. The prevalence rates were 13.4, 13.8 and 7.8 %, respectively (3.8 % were identified by all three instruments). Increasing signs of exhaustion on LUCIE were also positively associated with reports of burnout, job demands, stress in private life, family-to-work interference and neuroticism, as well as negatively associated with reports of job control, job support and work engagement. Conclusions LUCIE, which is intended to detect pre-stages of exhaustion disorder (ED), exhibits logical and coherent positive relations with KEDS and s-ED as well as other conceptually similar inventories. The results suggest that LUCIE has the potential to detect mild states of exhaustion (possibly representing pre-stages of ED) that, if not brought to the attention of the healthcare system and treated, may develop into ED. The prospective validity remains to be evaluated.

  16. Quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Shenvi, Neil; Whaley, K. Birgitta; Kempe, Julia

    2003-01-01

    Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speedup over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random-walk architecture that provides such a speedup. It will be shown that this algorithm performs an oracle search on a database of N items with O(√(N)) calls to the oracle, yielding a speedup similar to other quantum search algorithms. It appears that the quantum random-walk formulation has considerable flexibility, presenting interesting opportunities for development of other, possibly novel quantum algorithms

  17. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  18. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time-consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
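
    For contrast with the alpha-shape approach, the sketch below computes a closing envelope of a noisy profile with SciPy's naive grey-scale closing and a flat structuring element; the paper's filters roll a ball rather than slide a flat window, so the element here is a stand-in, and the window size is an assumption.

      # Naive sliding-window closing envelope (the O(n*m) baseline the
      # alpha-shape algorithm replaces).
      import numpy as np
      from scipy.ndimage import grey_closing

      x = np.linspace(0, 4 * np.pi, 400)
      rng = np.random.default_rng(4)
      profile = np.sin(x) + 0.05 * rng.normal(size=x.size)

      # Closing = dilation followed by erosion; it fills valleys narrower
      # than the window, giving an upper mating envelope.
      envelope = grey_closing(profile, size=25)
      print(bool(np.all(envelope >= profile - 1e-12)))   # closing never dips below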

  19. An Algorithm Computing the Local $b$ Function by an Approximate Division Algorithm in $\hat{\mathcal{D}}$

    OpenAIRE

    Nakayama, Hiromasa

    2006-01-01

    We give an algorithm to compute the local $b$ function. In this algorithm, we use the Mora division algorithm in the ring of differential operators and an approximate division algorithm in the ring of differential operators with power series coefficients.

  1. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  2. Cyanobacteria of the 2016 Lake Okeechobee and Okeechobee Waterway harmful algal bloom

    Science.gov (United States)

    Rosen, Barry H.; Davis, Timothy W.; Gobler, Christopher J.; Kramer, Benjamin J.; Loftin, Keith A.

    2017-05-31

    Lake Okeechobee and the Okeechobee Waterway (the St. Lucie Canal and River and the Caloosahatchee River) experienced an extensive harmful algal bloom in 2016. In addition to the very visible bloom of the cyanobacterium Microcystis aeruginosa, several other cyanobacteria were present. These other species were less conspicuous; however, they have the potential to produce a variety of cyanotoxins, including anatoxins, cylindrospermopsins, and saxitoxins, in addition to the microcystins commonly associated with Microcystis. Some of these species were found before, during, and 2 weeks after the large Microcystis bloom and could provide a better understanding of bloom dynamics and succession. This report provides photographic documentation and taxonomic assessment of the cyanobacteria present in Lake Okeechobee, the Caloosahatchee River, and the St. Lucie Canal, with samples collected June 1st from the Caloosahatchee River and Lake Okeechobee and in July from the St. Lucie Canal. The majority of the images were of live organisms, allowing their natural complement of pigmentation to be captured. The report provides a digital image-based taxonomic record of the microscopic flora of Lake Okeechobee and the Okeechobee Waterway. It is anticipated that these images will facilitate current and future studies of this system, such as understanding the timing of cyanobacteria blooms and their potential toxin production.

  3. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is an activated clotting time below target despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm against current standard clinical practice.

  4. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  5. ADORE-GA: Genetic algorithm variant of the ADORE algorithm for ROP detector layout optimization in CANDU reactors

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an algorithm for CANDU ROP Detector Layout Optimization. ► ADORE-GA is a Genetic Algorithm variant of the ADORE algorithm. ► Robustness test of ADORE-GA algorithm is presented in this paper. - Abstract: The regional overpower protection (ROP) systems protect CANDU® reactors against overpower in the fuel that could reduce the safety margin-to-dryout. The overpower could originate from a localized power peaking within the core or a general increase in the global core power level. The design of the detector layout for ROP systems is a challenging discrete optimization problem. In recent years, two algorithms have been developed to find a quasi optimal solution to this detector layout optimization problem. Both of these algorithms utilize the simulated annealing (SA) algorithm as their optimization engine. In the present paper, an alternative optimization algorithm, namely the genetic algorithm (GA), has been implemented as the optimization engine. The implementation is done within the ADORE algorithm. Results from evaluating the effects of using various mutation rates and crossover parameters are presented in this paper. It has been demonstrated that the algorithm is sufficiently robust in producing similar quality solutions.

  6. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

  7. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  8. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

  9. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some effort is made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)

  10. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D

  11. Majorization arrow in quantum-algorithm design

    International Nuclear Information System (INIS)

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle, where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow
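
    The order itself is easy to state: x is majorised by y when every partial sum of the decreasingly sorted entries of x is dominated by the corresponding partial sum for y, with equal totals. A small checker follows; the probability vectors are illustrative.

      # Majorisation check via partial sums of decreasingly sorted entries.
      import numpy as np

      def majorises(y, x, tol=1e-12):
          xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
          return bool(np.all(np.cumsum(ys) >= np.cumsum(xs) - tol)
                      and abs(xs.sum() - ys.sum()) < tol)

      uniform = np.full(4, 0.25)                 # maximally spread probabilities
      peaked = np.array([0.7, 0.1, 0.1, 0.1])    # closer to a definite outcome
      print(majorises(peaked, uniform))          # True: the peaked vector majorises
      print(majorises(uniform, peaked))          # False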

  12. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  13. Opposition-Based Adaptive Fireworks Algorithm

    OpenAIRE

    Chibing Gong

    2016-01-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based a...

  14. "İzler" of Tevfik Fikret in the Wake of Freedom (Tevfik Fikret'in Özgürlük Yolundaki "İzler"i)

    Directory of Open Access Journals (Sweden)

    Mutlu DEVECİ

    2012-09-01

    ... In the poems where he thematically excludes the social, the poet feels himself apart from others and holds on to life through the freedom that loneliness and solitude bring. The poem "İzler" is an existential poem constructed around this theme. In this poem, in which the subject voices that he is alone and solitary on the path to freedom, the state of mind of a person confronting himself is depicted. Loneliness, an ontological problem, creates individually and socially rooted incommunicability in the context of alienation. Shaped by the political, social and economic conditions of his era, and under the influence of his character and upbringing, the poet feels himself alone and solitary amid this incommunicability. In "İzler", the poet, who does not vanish into the deep darkness of a negative loneliness but turns back to himself, confronts his existence in the world and perceives his solitude as a strength. Looking at himself, the world and his surroundings with an understanding similar to the existentialist philosophers' conception of freedom, Tevfik Fikret experiences the anguish of being unable to enter into a relationship with the outside world, while feeling that his actions in search of meaning set him apart from others. Tevfik Fikret, the poet of the Servet-i Fünun generation who was most in conflict with the world, reflects in this poem the state of mind of a subject who, in order to escape his existential problems, embarks on an inner journey, confronts himself, and begins the healing process for the anguish of abandonment.

  15. Design optimization of high speed gamma-ray tomography

    International Nuclear Information System (INIS)

    Maad, Rachid

    2009-01-01

    This thesis concerns research and development of efficient gamma-ray systems for high speed tomographic imaging of hydrocarbon flow dynamics, with a particular focus on gas-liquid imaging. The Bergen HSGT (High Speed Gamma-ray Tomograph), based on instant imaging with a fixed source-detector geometry setup, has been thoroughly characterized with a variety of image reconstruction algorithms and flow conditions. Experiments in flow loops have been carried out for reliable characterization and error analysis; static flow phantoms have been applied for the majority of experiments to provide accurate imaging references. A semi-empirical model has been developed for estimation of the contribution of scattered radiation to each HSGT detector and further for correction of this contribution prior to data reconstruction. The Bergen FGGT (Flexible Geometry Gamma-ray Tomograph) has been further developed, particularly on the software side. The system emulates any fan beam tomography. Based on user input of geometry and other conditions, the new software performs scanning, data acquisition and storage, and also weight matrix calculation and image reconstruction with the desired method. The FGGT has been used for experiments supporting those carried out with the HSGT, and in addition for research on other fan beam geometries suitable for hydrocarbon flow imaging applications. An instant no-scanning tomograph like the HSGT has no flexibility with respect to change of geometry, which usually is necessary when applying the tomograph to a new application. A computer-controlled FGGT has been designed and built at the UoB. The software developed for the FGGT controls the scanning procedure and the data acquisition, calculates the weight matrix necessary for the image reconstruction, reconstructs the image using standard reconstruction algorithms, and calculates the error of the reconstructed image. The performance of the geometry has been investigated using a 100 mCi 241Am disk source, a

  16. Chinese handwriting recognition: an algorithmic perspective

    CERN Document Server

    Su, Tonghua

    2013-01-01

    This book provides an algorithmic perspective on the recent development of Chinese handwriting recognition. Two technically sound strategies, the segmentation-free and integrated segmentation-recognition strategy, are investigated and algorithms that have worked well in practice are primarily focused on. Baseline systems are initially presented for these strategies and are subsequently expanded on and incrementally improved. The sophisticated algorithms covered include: 1) string sample expansion algorithms which synthesize string samples from isolated characters or distort realistic string samples; 2) enhanced feature representation algorithms, e.g. enhanced four-plane features and Delta features; 3) novel learning algorithms, such as Perceptron learning with dynamic margin, MPE training and distributed training; and lastly 4) ensemble algorithms, that is, combining the two strategies using both parallel structure and serial structure. All the while, the book moves from basic to advanced algorithms, helping ...

  17. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  18. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression for the information statistic is adjusted. We present the results of algorithm research and t...

  19. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  1. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
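
    As a minimal illustration of one of the three reviewed methods, the sketch below runs replica-exchange Monte Carlo on a one-dimensional double-well potential; the temperatures, step size and sweep count are illustrative assumptions.

      # Replica exchange: Metropolis walkers at several temperatures, with
      # neighbour swaps accepted by the standard exchange criterion.
      import numpy as np

      rng = np.random.default_rng(5)
      U = lambda x: (x ** 2 - 1.0) ** 2              # double-well potential
      betas = np.array([0.5, 1.0, 2.0, 4.0])         # inverse temperatures
      x = rng.normal(size=betas.size)                # one walker per temperature

      for sweep in range(20000):
          # local Metropolis move for every replica
          prop = x + rng.normal(scale=0.5, size=x.size)
          logacc = np.minimum(0.0, -betas * (U(prop) - U(x)))
          x = np.where(rng.random(x.size) < np.exp(logacc), prop, x)
          # attempt one neighbouring-pair configuration exchange
          i = int(rng.integers(betas.size - 1))
          d = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
          if rng.random() < np.exp(min(0.0, d)):
              x[i], x[i + 1] = x[i + 1], x[i]

      print(np.round(x, 2))    # low-temperature replicas sit near the minima at +/-1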

  2. Economic dispatch using chaotic bat algorithm

    International Nuclear Information System (INIS)

    Adarsh, B.R.; Raghunathan, T.; Jayabarathi, T.; Yang, Xin-She

    2016-01-01

    This paper presents the application of a new metaheuristic optimization algorithm, the chaotic bat algorithm, for solving the economic dispatch problem involving a number of equality and inequality constraints such as power balance, prohibited operating zones and ramp rate limits. Transmission losses and multiple fuel options are also considered for some problems. The chaotic bat algorithm, a variant of the basic bat algorithm, is obtained by incorporating chaotic sequences to enhance its performance. Five different example problems comprising 6, 13, 20, 40 and 160 generating units are solved to demonstrate the effectiveness of the algorithm. The algorithm requires little tuning by the user, and the results obtained show that it either outperforms or compares favorably with several existing techniques reported in the literature. - Highlights: • The chaotic bat algorithm, a new metaheuristic optimization algorithm, has been used. • The problem solved – the economic dispatch problem – is nonlinear, discontinuous. • It has a number of equality and inequality constraints. • The algorithm has been demonstrated to be applicable to high dimensional problems.
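
    The distinctive ingredient of the chaotic bat algorithm is the replacement of some uniform random draws by a chaotic sequence. A minimal, hypothetical sketch in Python follows, using the logistic map as the chaotic generator and a sphere function as a stand-in objective; all parameter values are illustrative, not the paper's tuned setup.

```python
import numpy as np

def sphere(x):                        # toy objective to minimise
    return float(np.sum(x**2))

rng = np.random.default_rng(1)
n, dim = 20, 5
pos = rng.uniform(-5, 5, (n, dim))    # bat positions
vel = np.zeros((n, dim))
loud, pulse = 0.9, 0.5                # loudness A and pulse rate r
best = pos[np.argmin([sphere(p) for p in pos])].copy()

c = 0.7                               # chaotic variable (logistic map state)
for t in range(200):
    for i in range(n):
        c = 4.0 * c * (1.0 - c)       # chaotic sequence replaces uniform draws
        freq = c                      # frequency drawn from the chaotic map
        vel[i] += (pos[i] - best) * freq
        cand = pos[i] + vel[i]
        if c > pulse:                 # local random walk around the best bat
            cand = best + 0.01 * rng.normal(size=dim)
        if sphere(cand) < sphere(pos[i]) and c < loud:
            pos[i] = cand             # accept improving move
        if sphere(pos[i]) < sphere(best):
            best = pos[i].copy()

print("best value:", sphere(best))
```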

  3. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)

  4. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    Science.gov (United States)

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
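
    The k-means++ seeding step that the paper distributes is, in its centralized form, straightforward. Below is a minimal sketch under the assumption of a simple 2-D data set; it is not the paper's sensor-network implementation, and it ignores edge cases such as empty clusters.

```python
import numpy as np

def kmeans_pp_seeds(data, k, rng):
    """k-means++ seeding: spread initial centroids out in proportion
    to the squared distance from the centroids chosen so far."""
    centroids = [data[rng.integers(len(data))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((data - c)**2, axis=1) for c in centroids], axis=0)
        centroids.append(data[rng.choice(len(data), p=d2 / d2.sum())])
    return np.array(centroids)

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0, 3, 6)])
cents = kmeans_pp_seeds(data, k=3, rng=rng)
for _ in range(10):                        # plain Lloyd iterations afterwards
    labels = np.argmin(((data[:, None] - cents)**2).sum(-1), axis=1)
    cents = np.array([data[labels == j].mean(axis=0) for j in range(3)])
print(cents)
```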

  5. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.

  6. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  7. Quantum algorithm for support matrix machines

    Science.gov (United States)

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and p×q is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.

  8. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  9. Effects of visualization on algorithm comprehension

    Science.gov (United States)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  10. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  11. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
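
    As a rough illustration of the wavelet-based fusion idea, here is a single-level sketch assuming the PyWavelets package; the fusion rule used (average the coarse approximations, keep the stronger detail coefficient) is a common textbook choice, not necessarily the rule used in the report.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def fuse_wavelet(img_a, img_b, wavelet="db2"):
    """Single-level 2-D DWT fusion: average the coarse approximations,
    keep the stronger (max-absolute) detail coefficient from either image."""
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a, wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b, wavelet)
    ca = 0.5 * (ca_a + ca_b)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = (ca, (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))
    return pywt.idwt2(fused, wavelet)

a = np.random.rand(64, 64)   # stand-ins for the two registered source images
b = np.random.rand(64, 64)
print(fuse_wavelet(a, b).shape)
```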

  12. A new cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    1998-01-01

    A new cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The graphs may be both weighted (with nonnegative weights) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a
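
    A minimal sketch of the MCL iteration on a small undirected graph follows (NumPy assumed; the expansion and inflation parameters are illustrative, and the cluster read-out from attractor rows is simplified relative to the published algorithm).

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    """Markov Cluster sketch: alternate expansion (matrix power, flow
    spreading) and inflation (elementwise power, flow sharpening)."""
    m = adj + np.eye(len(adj))            # self-loops stabilise the iteration
    m = m / m.sum(axis=0)                 # make the matrix column-stochastic
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expansion)
        m = m ** inflation
        m = m / m.sum(axis=0)
    # Rows with surviving mass act as cluster "attractors"; their nonzero
    # columns list the cluster members (duplicates possible in this sketch).
    return [np.nonzero(row > 1e-6)[0].tolist() for row in m if row.sum() > 1e-6]

adj = np.array([[0, 1, 1, 0, 0, 0],
                [1, 0, 1, 0, 0, 0],
                [1, 1, 0, 1, 0, 0],
                [0, 0, 1, 0, 1, 1],
                [0, 0, 0, 1, 0, 1],
                [0, 0, 0, 1, 1, 0]], dtype=float)
print(mcl(adj))
```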

  13. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes the typical routing algorithms in the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of typical routing algorithms are analyzed, namely clustering routing algorithms and non-clustering routing algorithms, and the advantages, disadvantages and applicability of each are discussed.

  14. A Parametric k-Means Algorithm

    Science.gov (United States)

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
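
    A minimal one-dimensional sketch of the parametric k-means idea follows (normal model, hand-rolled Lloyd iterations; the sample, the value of k and the iteration counts are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=500)   # observed data

# Step 1: fit the parametric model by maximum likelihood (normal here).
mu, sigma = sample.mean(), sample.std()

# Step 2: draw a very large simulated sample from the fitted distribution.
big = rng.normal(mu, sigma, size=200_000)

# Step 3: run k-means on the simulated sample; the cluster means
# estimate the k principal points of the fitted distribution.
k = 4
centers = np.quantile(big, np.linspace(0.1, 0.9, k))  # crude initial centers
for _ in range(30):
    labels = np.argmin(np.abs(big[:, None] - centers), axis=1)
    centers = np.array([big[labels == j].mean() for j in range(k)])

print("estimated principal points:", np.sort(centers))
```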

  15. Fundamentals of Filament Interaction

    Science.gov (United States)

    2017-05-19

    AFRL-AFOSR-VA-TR-2017-0110. Fundamentals of Filament Interaction. Martin Richardson, University of Central Florida. Final report, 06/02/2017. Grant FA95501110001.

  16. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that “DNABIT Compress” is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
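
    To illustrate the bit-coding idea in general terms, here is a plain 2-bits-per-base packing sketch; it is not the DNABIT Compress scheme itself, which additionally assigns unique bit codes to repeat fragments.

```python
# Each base maps to two bits, so four bases pack into one byte.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASES = "ACGT"

def pack(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte << 2 * (4 - len(seq[i:i + 4])))  # pad the last byte
    return bytes(out)

def unpack(data, length):
    seq = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(byte >> shift) & 0b11])
    return "".join(seq[:length])

s = "ACGTTGCAAC"
packed = pack(s)
assert unpack(packed, len(s)) == s
print(f"{len(s)} bases -> {len(packed)} bytes")  # 2 bits/base plus padding
```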

  17. Some software algorithms for microprocessor ratemeters

    International Nuclear Information System (INIS)

    Savic, Z.

    1991-01-01

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: the old quasi-exponential and floating-mean algorithms, and a new weighted moving average algorithm. The equations for statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.)

  18. Some software algorithms for microprocessor ratemeters

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Z. (Military Technical Inst., Belgrade (Yugoslavia))

    1991-03-15

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: the old quasi-exponential and floating-mean algorithms, and a new weighted moving average algorithm. The equations for statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.).
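
    A minimal sketch of a weighted-moving-average rate estimate of the kind discussed follows, with illustrative linearly increasing weights; the paper's exact weighting is not reproduced here.

```python
import numpy as np

def weighted_moving_average_rate(counts, weights):
    """Rate estimate as a weighted moving average of the last N
    one-second count samples (the newest sample gets the largest weight)."""
    window = counts[-len(weights):]          # oldest ... newest
    return float(np.dot(window, weights) / np.sum(weights))

rng = np.random.default_rng(0)
true_rate = 120.0                            # counts per second
counts = rng.poisson(true_rate, size=60)     # sixty 1-s measurements
weights = np.arange(1, 11)                   # linearly increasing, N = 10
print("estimated rate:", weighted_moving_average_rate(counts, weights))
```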

  19. Higher-order force gradient symplectic algorithms

    Science.gov (United States)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10^3, 10^4, 10^4, and 10^5 better.

  20. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  1. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le

  2. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization.    The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,

  3. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  4. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  5. Synthesis of Greedy Algorithms Using Dominance Relations

    Science.gov (United States)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
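
    Activity selection, one of the examples mentioned, is the classic case where a dominance argument justifies a greedy rule; a minimal sketch:

```python
def select_activities(activities):
    """Greedy activity selection: repeatedly take the compatible activity
    that finishes earliest. The dominance argument: among compatible
    choices, an earlier finish time can never exclude more activities."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))   # a maximum-size set of non-overlapping acts
```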

  6. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
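
    A minimal sketch of the block-wise Chebyshev fitting described above, using NumPy's Chebyshev module; the block length and polynomial degree are illustrative assumptions, not the flight values.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(block, degree):
    """Fit one fitting interval with a Chebyshev series; the few
    coefficients stand in for the whole block (lossy compression)."""
    x = np.linspace(-1.0, 1.0, len(block))     # map the interval to [-1, 1]
    return C.chebfit(x, block, degree)

def decompress_block(coeffs, n):
    x = np.linspace(-1.0, 1.0, n)
    return C.chebval(x, coeffs)

t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * t      # smooth time-series block
coeffs = compress_block(signal, degree=15)        # 256 samples -> 16 numbers
restored = decompress_block(coeffs, len(signal))
print("compression 16:1, max error:", np.max(np.abs(restored - signal)))
```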

  7. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineering ... numerical calculus are as important. We will ...

  8. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
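
    The alpha-trimmed mean filter that serves as the backbone of the first algorithm can be sketched in a few lines (NumPy assumed; the window size and trim count are illustrative, and the paper's full sharpening pipeline is not reproduced).

```python
import numpy as np

def alpha_trimmed_mean(img, size=3, trim=1):
    """Alpha-trimmed mean filter: in each size-by-size window, sort the
    pixels, drop the `trim` smallest and largest, and average the rest."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = np.sort(padded[r:r + size, c:c + size], axis=None)
            out[r, c] = window[trim:window.size - trim].mean()
    return out

noisy = np.random.default_rng(0).random((32, 32))  # stand-in for a scan image
print(alpha_trimmed_mean(noisy).shape)
```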

  9. Wavelet-LMS algorithm-based echo cancellers

    Science.gov (United States)

    Seetharaman, Lalith K.; Rao, Sathyanarayana S.

    2002-12-01

    This paper presents Echo Cancellers based on the Wavelet-LMS Algorithm. The performance of the Least Mean Square Algorithm in the wavelet transform domain is observed and its application in echo cancellation is analyzed. The Widrow-Hoff Least Mean Square Algorithm is the most widely used algorithm for adaptive filters that function as Echo Cancellers. Present-day communication signals are largely non-stationary in nature, and errors crop up when the Least Mean Square Algorithm is used for Echo Cancellers handling such signals. The analysis of non-stationary signals often involves a compromise between time and frequency resolution, that is, in how well transitions or discontinuities can be located. The multi-scale or multi-resolution signal analysis, which is the essence of the wavelet transform, makes wavelets popular in non-stationary signal analysis. In this paper, we present a Wavelet-LMS algorithm wherein the wavelet coefficients of a signal are modified adaptively using the Least Mean Square Algorithm and then reconstructed to give an echo-free signal. The Echo Canceller based on this algorithm is found to have better convergence and a comparatively lower mean square error (MSE).
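
    For reference, a plain time-domain LMS echo canceller is sketched below (NumPy assumed; the Wavelet-LMS variant instead adapts the coefficients in the wavelet domain, which is not reproduced here).

```python
import numpy as np

def lms_echo_canceller(far_end, mic, taps=32, mu=0.01):
    """Plain LMS adaptive filter: model the echo path from the far-end
    signal and subtract the echo estimate from the microphone signal."""
    w = np.zeros(taps)                       # adaptive filter weights
    out = np.zeros(len(mic))                 # echo-cancelled output
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]        # most recent samples first
        echo_est = w @ x
        err = mic[n] - echo_est              # error = near speech + residual
        w += mu * err * x                    # Widrow-Hoff update
        out[n] = err
    return out

rng = np.random.default_rng(0)
far = rng.normal(size=4000)                  # far-end (loudspeaker) signal
echo_path = np.array([0.6, 0.0, -0.3, 0.1])  # toy room impulse response
mic = np.convolve(far, echo_path)[:4000] + 0.01 * rng.normal(size=4000)
residual = lms_echo_canceller(far, mic)
print("residual echo power:", np.mean(residual[2000:] ** 2))
```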

  10. Analysis and Improvement of Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Xi-Guang Li

    2017-02-01

    Full Text Available The Fireworks Algorithm is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of the Fireworks Algorithm (FWA), this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm pursues the goals of further boosting performance and achieving global optimization mainly through the following strategies. First, the population is initialized using opposition-based learning. Second, a new explosion amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation is used for non-optimal individuals and elite opposition-based learning for the optimal individual. Finally, a new selection strategy, namely Disruptive Selection, is proposed to reduce the running time of the algorithm compared with FWA. In our simulation, we apply the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions.

  11. Estimation of heterosis in yield and yield attributing traits in single cross hybrids of maize

    Directory of Open Access Journals (Sweden)

    Hari Prasad Sharma

    2016-12-01

    Full Text Available A field experiment was conducted at the National Maize Research Program, Rampur, Chitwan, Nepal during the winter season from 6th October 2015 to 5th March 2016 to estimate different heterosis measures in single cross maize hybrids. Thirteen maize hybrids were tested in a randomized complete block design with three replications. The hybrid RML-98/RL-105 gave the highest standard heterosis (57.5%) for grain yield over CP-666, followed by RML-4/NML-2 (32.6%), RML-95/RL-105 (29%) and RML-5/RL-105 (20.6%). The hybrid RML-98/RL-105 produced the highest standard heterosis (75.1%) for grain yield over Rajkumar, followed by RML-4/NML-2 (50.2%), RML-95/RL-105 (46.6%) and RML-5/RL-105 (35.7%). Mid- and better-parent heterosis were significantly higher for yield and yield attributes, viz. ear length, ear diameter, number of kernel rows per ear, number of kernels per row and test weight. The highest positive mid-parent heterosis for grain yield was found in RML-98/RL-105, followed by RML-5/RL-105, RML-95/RL-105 and RML-4/NML-2. For grain yield, the better-parent heterosis was highest in RML-98/RL-105, followed by RML-5/RL-105, RML-95/RL-105 and RML-4/NML-2. These results suggest that maize production can be maximized by cultivating the hybrids RML-98/RL-105, RML-5/RL-105, RML-95/RL-105 and RML-4/NML-2.

  12. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
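
    The serial LRU stack-distance computation that the five parallel algorithms build on can be sketched compactly (a simplified list-based version; real implementations use faster data structures such as trees).

```python
def stack_distances(trace):
    """Serial LRU stack-distance computation: the distance of a reference
    is the depth of its tag in the LRU stack (inf on a cold miss). A cache
    of size C hits exactly the references with distance <= C."""
    stack, dists = [], []
    for tag in trace:
        if tag in stack:
            depth = stack.index(tag) + 1
            stack.remove(tag)
        else:
            depth = float("inf")             # first reference: cold miss
        dists.append(depth)
        stack.insert(0, tag)                 # move tag to the top (MRU)
    return dists

trace = ["a", "b", "c", "a", "b", "d", "a"]
d = stack_distances(trace)
cache_size = 3
hits = sum(1 for x in d if x <= cache_size)
print(d, f"-> {hits} hits with an LRU cache of size {cache_size}")
```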

  13. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce the amount of computation, speed up convergence and restrain premature phenomena of the quantum evolutionary algorithm. The proposed algorithm adopts the chaotic initialization method to generate an initial population which will form a pe...... tests. The presented algorithm is applied to urban traffic signal timing optimization and the effect is satisfactory....

  14. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    International Nuclear Information System (INIS)

    Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing

    2015-01-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with perfect real-time characteristics and stability. However, with increasing numbers of sub-apertures in the wavefront sensor and deformable mirror actuators of adaptive optics systems, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor influencing the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration arithmetic, which gains a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity estimate is about O(n^2) ∼ O(n^3) in the direct gradient wavefront control algorithm, while the computational complexity estimate in the iterative wavefront control algorithm is about O(n) ∼ O(n^(3/2)), in which n is the number of actuators of the AO system. And the more the numbers of sub-apertures and deformable mirror actuators, the more significant an advantage the iterative wavefront control algorithm exhibits. (paper)
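
    As a generic illustration of the direct-versus-iterative trade-off (not the AO system's actual relational matrices), one can compare a single dense solve with a few cheap Jacobi sweeps on a toy well-conditioned system.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = np.eye(n) * 4 + rng.normal(scale=0.1, size=(n, n))  # well-conditioned stand-in
b = rng.normal(size=n)

# Direct approach: one dense solve, cubic cost in n.
v_direct = np.linalg.solve(A, b)

# Iterative approach (Jacobi sketch): quadratic cost per sweep, and only a
# few sweeps are needed when the iteration matrix has small spectral radius;
# it can also be warm-started from the previous control step.
v = np.zeros(n)
D = np.diag(A)
for _ in range(50):
    v = (b - (A @ v - D * v)) / D            # x <- D^{-1}(b - R x)
print("max difference:", np.max(np.abs(v - v_direct)))
```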

  15. The Parallel Algorithm Based on Genetic Algorithm for Improving the Performance of Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Liu Miao

    2018-01-01

    Full Text Available The intercarrier interference (ICI) problem of cognitive radio (CR) is severe. In this paper, a machine learning algorithm is used to obtain the optimal interference subcarriers of an unlicensed user (un-LU). Masking the optimal interference subcarriers can suppress the ICI of CR. Moreover, the parallel ICI suppression algorithm is designed to improve the calculation speed and meet the practical requirements of CR. Simulation results show that the data transmission rate threshold of the un-LU can be set, the data transmission quality of the un-LU can be ensured, the ICI of a licensed user (LU) is suppressed, and the bit error rate (BER) performance of the LU is improved by implementing the parallel suppression algorithm. The ICI problem of CR is solved well by the new machine learning algorithm. The computing performance of the algorithm is improved by designing a new parallel structure and the communication performance of CR is enhanced.

  16. Look-ahead fermion algorithm

    International Nuclear Information System (INIS)

    Grady, M.

    1986-01-01

    I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs

  17. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  18. Morphometric comparisons of Diaphorina citri (Hemiptera: Liviidae populations from Iran, USA and Pakistan

    Directory of Open Access Journals (Sweden)

    Mohammadreza Lashkari

    2015-05-01

    Full Text Available The Asian citrus psyllid (ACP), Diaphorina citri Kuwayama (Hemiptera: Liviidae), vector of the citrus greening disease pathogen, Huanglongbing (HLB), is considered the most serious pest of citrus in the world. Prior molecular-based studies have hypothesized a link between the D. citri in Iran and the USA (Florida). The purpose of this study was to collect morphometric data from D. citri populations from Iran (mtCOI haplotype-1), Florida (mtCOI haplotype-1), and Pakistan (mtCOI haplotype-6), to determine whether different mtCOI haplotypes have a relationship to a specific morphometric variation. 240 samples from 6 ACP populations (Iran—Jiroft, Chabahar; Florida—Ft. Pierce, Palm Beach Gardens, Port St. Lucie; and Pakistan—Punjab) were collected for comparison. Measurements of 20 morphological characters were selected, measured and analysed using ANOVA and MANOVA. The results indicate differences among the 6 ACP populations (Wilks' lambda = 0.0376, F = 7.29, P < 0.0001). The body length (BL), circumanal ring length (CL), antenna length (AL), forewing length (WL) and Rs vein length of forewing (RL) were the most important characters separating the populations. The cluster analysis showed that the Iran and Florida populations are distinct from each other but separate from the Pakistan population. Thus, three subgroups can be morphologically discriminated within the D. citri species in this study: (1) Iran, (2) USA (Florida) and (3) Pakistan. Morphometric comparisons provided further resolution to the mtCOI haplotypes and distinguished the Florida and Iranian populations.

  19. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    Science.gov (United States)

    Ghezavati, V. R.; Beigi, M.

    2016-12-01

    During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions about facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) with a homogeneous fleet, together with the design of a multi-echelon, capacitated reverse logistics network, is considered, a setting which may arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) formulation for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. Also, the present work is an effort to effectively implement the ɛ-constraint method in GAMS software for producing the Pareto-optimal solutions of a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, and for medium-to-large-sized problems, the proposed NSGA-II works better than the ɛ-constraint method.

  20. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  1. A biologically inspired neural network model to transformation invariant object recognition

    Science.gov (United States)

    Iftekharuddin, Khan M.; Li, Yaqin; Siddiqui, Faraz

    2007-09-01

    Transformation invariant image recognition has been an active research area due to its widespread applications in a variety of fields such as military operations, robotics, medical practices, geographic scene analysis, and many others. The primary goal for this research is detection of objects in the presence of image transformations such as changes in resolution, rotation, translation, scale and occlusion. We investigate a biologically-inspired neural network (NN) model for such transformation-invariant object recognition. In a classical training-testing setup for NN, the performance is largely dependent on the range of transformation or orientation involved in training. However, an even more serious dilemma is that there may not be enough training data available for successful learning or even no training data at all. To alleviate this problem, a biologically inspired reinforcement learning (RL) approach is proposed. In this paper, the RL approach is explored for object recognition with different types of transformations such as changes in scale, size, resolution and rotation. The RL is implemented in an adaptive critic design (ACD) framework, which approximates the neuro-dynamic programming of an action network and a critic network, respectively. Two ACD algorithms such as Heuristic Dynamic Programming (HDP) and Dual Heuristic dynamic Programming (DHP) are investigated to obtain transformation invariant object recognition. The two learning algorithms are evaluated statistically using simulated transformations in images as well as with a large-scale UMIST face database with pose variations. In the face database authentication case, the 90° out-of-plane rotation of faces from 20 different subjects in the UMIST database is used. Our simulations show promising results for both designs for transformation-invariant object recognition and authentication of faces. Comparing the two algorithms, DHP outperforms HDP in learning capability, as DHP takes fewer steps to

  2. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc.), genetic and evolutionary strategies, artificial immune systems etc. Well-known examples of applications include: aircraft wing design, wind turbine design, bionic car, bullet train, optimal decisions related to traffic, appropriate strategies to survive under a well-adapted immune system etc. Based on collective social behaviour of organisms, researchers have developed optimization strategies taking into account not only the individuals, but also groups and environment. However, learning from nature, new classes of approaches can be identified, tested and compared against already available algorithms...

  3. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The results are algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.

  4. Uptake of Eudragit Retard L (Eudragit® RL Nanoparticles by Human THP-1 Cell Line and Its Effects on Hematology and Erythrocyte Damage in Rats

    Directory of Open Access Journals (Sweden)

    Mosaad A. Abdel-Wahhab

    2014-02-01

    Full Text Available The aim of this study was to prepare Eudragit Retard L (Eudragit RL) nanoparticles (ENPs) and to determine their properties, their uptake by the human THP-1 cell line in vitro and their effect on hematological parameters and erythrocyte damage in rats. ENPs showed an average size of 329.0 ± 18.5 nm, a positive zeta potential of +57.5 ± 5.47 mV and a nearly spherical shape with a smooth surface. THP-1 cell lines could phagocytose ENPs after 2 h of incubation. In the in vivo study, male Sprague-Dawley rats were exposed orally or intraperitoneally (IP) to a single dose of ENPs (50 mg/kg body weight). Blood samples were collected after 4 h, 48 h, one week and three weeks for hematological and erythrocyte analysis. ENPs induced significant hematological disturbances in platelets, red blood cells (RBCs) and total and differential counts of white blood cells (WBCs) after 4 h, 48 h and one week. ENPs increased met-Hb and Co-Hb derivatives and decreased met-Hb reductase activity. These parameters were comparable to the control after three weeks when administered orally. It could be concluded that the route of administration has a major effect on the induction of hematological disturbances and should be considered when ENPs are applied in drug delivery systems.

  5. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  6. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  7. Hardware Acceleration of Adaptive Neural Algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  8. Performance of Jet Algorithms in CMS

    CERN Document Server

    CMS Collaboration

    The CMS Combined Software and Analysis Challenge 2007 (CSA07) is well underway and expected to produce a wealth of physics analyses to be applied to the first incoming detector data in 2008. The JetMET group of CMS supports four different jet clustering algorithms for the CSA07 Monte Carlo samples, with two different parameterizations each: fast kT, SISCone, Midpoint, and Iterative Cone. We present several studies comparing the performance of these algorithms using QCD dijet and ttbar Monte Carlo samples. We specifically observe that the SISCone algorithm performs equal to or better than the Midpoint algorithm in all presented studies and propose that SISCone be adopted as the preferred cone-based jet clustering algorithm in future CMS physics analyses, as it is preferred by theorists for its infrared- and collinear-safety to all orders of perturbative QCD. We furthermore encourage the use of the fast kT algorithm which is found to perform as good as any other algorithm under study, features dramatically reduc...

  9. Quantum-circuit model of Hamiltonian search algorithms

    International Nuclear Information System (INIS)

    Roland, Jeremie; Cerf, Nicolas J.

    2003-01-01

    We analyze three different quantum search algorithms, namely, the traditional circuit-based Grover's algorithm, its continuous-time analog by Hamiltonian evolution, and the quantum search by local adiabatic evolution. We show that these algorithms are closely related in the sense that they all perform a rotation, at a constant angular velocity, from a uniform superposition of all states to the solution state. This makes it possible to implement the two Hamiltonian-evolution algorithms on a conventional quantum circuit, while keeping the quadratic speedup of Grover's original algorithm. It also clarifies the link between the adiabatic search algorithm and Grover's algorithm
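
    The constant-angular-velocity rotation common to these algorithms is easy to see in a state-vector simulation of Grover's original circuit (a toy classical simulation, not a quantum implementation).

```python
import numpy as np

def grover(n_qubits, marked):
    """State-vector Grover simulation: each iteration rotates the uniform
    superposition toward the marked state by a fixed angle."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition
    iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iters):
        state[marked] *= -1.0                 # oracle: flip the marked amplitude
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return state

amps = grover(n_qubits=8, marked=42)
print("success probability:", amps[42] ** 2)  # close to 1 after ~(pi/4)*sqrt(N) steps
```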

  10. Implementation of real-time energy management strategy based on reinforcement learning for hybrid electric vehicles and simulation validation.

    Science.gov (United States)

    Kong, Zehui; Zou, Yuan; Liu, Teng

    2017-01-01

    To further improve the fuel economy of series hybrid electric tracked vehicles, a reinforcement learning (RL)-based real-time energy management strategy is developed in this paper. In order to utilize the statistical characteristics of the online driving schedule effectively, a recursive algorithm for the transition probability matrix (TPM) of power-request is derived. Reinforcement learning (RL) is applied to calculate and update the control policy at regular intervals, adapting to the varying driving conditions. A facing-forward powertrain model is built in detail, including the engine-generator model, battery model and vehicle dynamical model. The robustness and adaptability of the real-time energy management strategy are validated through comparison with the stationary control strategy based on the initial transition probability matrix (TPM) generated from a long naturalistic driving cycle in the simulation. Results indicate that the proposed method has better fuel economy than the stationary one and is more effective in real-time control.
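
    The recursive TPM estimate at the heart of the strategy can be sketched as an online count-and-renormalise update (a minimal sketch with an illustrative discretisation of the power request, not the paper's powertrain model).

```python
import numpy as np

def update_tpm(counts, prev_state, new_state):
    """Recursive transition-probability-matrix update sketch: keep raw
    transition counts online and renormalise rows to get probabilities."""
    counts[prev_state, new_state] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts, dtype=float),
                     where=row_sums > 0)

n_levels = 4                                  # discretised power-request levels
counts = np.zeros((n_levels, n_levels))
demand = [0, 1, 1, 2, 3, 2, 1, 0, 1, 2]       # toy power-request trajectory
tpm = None
for prev, new in zip(demand, demand[1:]):
    tpm = update_tpm(counts, prev, new)       # TPM refreshed at each step
print(tpm)
```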

  11. Implementation of real-time energy management strategy based on reinforcement learning for hybrid electric vehicles and simulation validation.

    Directory of Open Access Journals (Sweden)

    Zehui Kong

    Full Text Available To further improve the fuel economy of series hybrid electric tracked vehicles, a reinforcement learning (RL)-based real-time energy management strategy is developed in this paper. In order to utilize the statistical characteristics of the online driving schedule effectively, a recursive algorithm for the transition probability matrix (TPM) of power-request is derived. Reinforcement learning (RL) is applied to calculate and update the control policy at regular intervals, adapting to the varying driving conditions. A facing-forward powertrain model is built in detail, including the engine-generator model, battery model and vehicle dynamical model. The robustness and adaptability of the real-time energy management strategy are validated through comparison with the stationary control strategy based on the initial transition probability matrix (TPM) generated from a long naturalistic driving cycle in the simulation. Results indicate that the proposed method has better fuel economy than the stationary one and is more effective in real-time control.

  12. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  13. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    R K Shyamasundar, 'Algorithms – Algorithm Design Techniques', Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  15. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative-addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search over this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of multiple given RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures, with a stricter similarity definition and objective, and we propose an algorithm that solves it efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With these new tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). The website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.
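
    The suffix-array binary search underlying the substructure search can be sketched generically; the code below uses a plain string in place of the relative-addressing format and is only an illustration of the technique, not the authors' implementation (it requires Python 3.10+ for the key= argument of bisect).

        import bisect

        def build_suffix_array(text):
            """Naive suffix array: start indices sorted by suffix."""
            return sorted(range(len(text)), key=lambda i: text[i:])

        def find_substructure(text, sa, pattern):
            """All occurrences of `pattern` via binary search on the array."""
            prefix = lambda i: text[i:i + len(pattern)]
            lo = bisect.bisect_left(sa, pattern, key=prefix)
            hi = bisect.bisect_right(sa, pattern, key=prefix)
            return sorted(sa[lo:hi])

        # Dot-bracket-like stand-in for a structure sequence.
        s = "((..))((.))"
        sa = build_suffix_array(s)
        print(find_substructure(s, sa, "(("))   # -> [0, 6]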

  16. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.
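
    Interval arithmetic, which the authors suggest for the worst-case problem, propagates parameter tolerances through a computation as closed intervals; the fragment below is a generic sketch of the technique, not the authors' code.

        class Interval:
            """Closed interval [lo, hi] for worst-case tolerance bounds."""
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __mul__(self, other):
                p = (self.lo * other.lo, self.lo * other.hi,
                     self.hi * other.lo, self.hi * other.hi)
                return Interval(min(p), max(p))

            def __repr__(self):
                return f"[{self.lo}, {self.hi}]"

        # A nominal parameter 10 with a +/-5% tolerance, squared:
        x = Interval(9.5, 10.5)
        print(x * x)    # -> [90.25, 110.25], the worst-case spread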

  17. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  18. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    We study the line-simplification problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k^2) additional storage.
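
    The resource-augmentation idea — keep at most 2k interior points and, when the budget is exceeded, evict the point that is cheapest to remove — can be sketched generically. The error measure below (distance to the line through the neighbours) and all names are illustrative assumptions, not the authors' algorithm.

        def point_line_dist(p, a, b):
            """Distance from p to the line through a and b."""
            (px, py), (ax, ay), (bx, by) = p, a, b
            num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
            den = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5 or 1.0
            return num / den

        def simplify_stream(points, k):
            kept = []
            for p in points:
                kept.append(p)
                while len(kept) > 2 * k + 2:   # budget: 2k interior points
                    i = min(range(1, len(kept) - 1),
                            key=lambda j: point_line_dist(
                                kept[j], kept[j - 1], kept[j + 1]))
                    kept.pop(i)
            return kept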

  19. Advancements to the planogram frequency–distance rebinning algorithm

    International Nuclear Information System (INIS)

    Champley, Kyle M; Kinahan, Paul E; Raylman, Raymond R

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  20. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently: each has advantages and disadvantages relative to the others. One notion is that the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index-vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization parameter. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation index of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.
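
    Seeding the NMF with informed initial weights, instead of random ones, can be sketched with scikit-learn's custom initialization; the data and initial factors below are synthetic stand-ins for the feature maps described above.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        X = rng.random((64, 32))         # stand-in for stacked fused images

        r = 4
        W0 = rng.random((64, r)) + 0.1   # informed initial weights would be
        H0 = rng.random((r, 32)) + 0.1   # derived from the feature degrees

        # init='custom' lets the caller supply W and H, avoiding the
        # randomness of the default NMF initialization.
        model = NMF(n_components=r, init="custom", max_iter=500)
        W = model.fit_transform(X, W=W0, H=H0)
        H = model.components_
        print(np.linalg.norm(X - W @ H))  # reconstruction residual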

  1. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updates promise to reduce this growth to V^(4/3).

  2. A Global algorithm for linear radiosity

    OpenAIRE

    Sbert Cassasayas, Mateu; Pueyo Sánchez, Xavier

    1993-01-01

    A linear algorithm for radiosity is presented, linear both in time and storage. The new algorithm is based on previous work by the authors and on the well-known algorithms for progressive radiosity and Monte Carlo particle transport.

  3. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider the algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations to satisfy their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithmic rule. Evolutionary computation algorithm designers, or self-adaptive methods, should construct proper rules and mechanisms so that all agents (individuals) conduct their evolutionary behaviour correctly and reliably achieve the preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This basic principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamentals from this perspective. This paper is a first step towards that objective, implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm.

  4. MultiAspect Graphs: Algebraic Representation and Algorithms

    Directory of Open Access Journals (Sweden)

    Klaus Wehmuth

    2016-12-01

    We present the algebraic representation and basic algorithms for MultiAspect Graphs (MAGs). A MAG is a structure capable of representing multilayer and time-varying networks, as well as higher-order networks, while also having the property of being isomorphic to a directed graph. In particular, we show that, as a consequence of the properties associated with the MAG structure, a MAG can be represented in matrix form. Moreover, we also show that any possible MAG function (algorithm) can be obtained from this matrix-based representation. This is an important theoretical result since it paves the way for adapting well-known graph algorithms for application in MAGs. We present a set of basic MAG algorithms, constructed from well-known graph algorithms, such as degree computing, Breadth First Search (BFS), and Depth First Search (DFS). These algorithms adapted to the MAG context can be used as primitives for building other more sophisticated MAG algorithms. Therefore, such examples can be seen as guidelines on how to properly derive MAG algorithms from basic algorithms on directed graphs. We also make available Python implementations of all the algorithms presented in this paper.
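
    How a standard graph primitive carries over to the matrix form can be illustrated with a plain adjacency-matrix BFS; this generic sketch is not the paper's MAG implementation, but a MAG, being isomorphic to a directed graph, admits the same routine once its composite vertices are flattened to indices.

        from collections import deque

        def bfs_matrix(adj, source):
            """BFS over an adjacency matrix; returns distances (-1 = unreached)."""
            n = len(adj)
            dist = [-1] * n
            dist[source] = 0
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in range(n):
                    if adj[u][v] and dist[v] == -1:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        adj = [[0, 1, 0, 0],
               [0, 0, 1, 1],
               [0, 0, 0, 0],
               [1, 0, 0, 0]]
        print(bfs_matrix(adj, 0))   # -> [0, 1, 2, 2]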

  5. Firefly Mating Algorithm for Continuous Optimization Problems

    Directory of Open Access Journals (Sweden)

    Amarita Ritthipakdee

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as its core. The main feature of the algorithm is a novel mating-pair selection method inspired by two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate, and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested on 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of the proposed algorithm on these functions were higher than those of the other algorithms, and the proposed algorithm also required fewer iterations to reach the global optima.

  6. Unconventional Algorithms: Complementarity of Axiomatics and Construction

    Directory of Open Access Journals (Sweden)

    Gordana Dodig Crnkovic

    2012-10-01

    In this paper, we analyze axiomatic and constructive issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms and unconventional computations change the algorithmic universe, making it open and allowing increased flexibility and expressive power that augment creativity. At the same time, the greater power of new types of algorithms also results in the greater complexity of the algorithmic universe, transforming it into the algorithmic multiverse and demanding new tools for its study. That is why we analyze new powerful tools brought forth by local mathematics, local logics, logical varieties and the axiomatic theory of algorithms, automata and computation. We demonstrate how these new tools allow efficient navigation in the algorithmic multiverse. Further work includes study of natural computation by unconventional algorithms and constructive approaches.

  7. Integrated Association Rules Complete Hiding Algorithms

    Directory of Open Access Journals (Sweden)

    Mohamed Refaat Abdellah

    2017-01-01

    This paper presents a database security approach for the complete hiding of sensitive association rules using six novel algorithms. These algorithms utilize three new weights to reduce the required database modifications and support complete hiding, while also reducing knowledge distortion and data distortion. The complete weighted hiding algorithms improve the hiding failure by 100%; they also have the advantage of performing only a single scan of the database to gather the information required for the hiding process. The proposed algorithms are built within the database structure, which enables the sanitized database to be generated at run time as needed.

  8. New Insights into the RLS Algorithm

    Directory of Open Access Journals (Sweden)

    Gänsler Tomas

    2004-01-01

    The recursive least squares (RLS) algorithm is one of the most popular adaptive algorithms that can be found in the literature, due to the fact that it is easily and exactly derived from the normal equations. In this paper, we give another interpretation of the RLS algorithm and show the importance of linear interpolation error energies in the RLS structure. We also give a very efficient way to recursively estimate the condition number of the input signal covariance matrix thanks to fast versions of the RLS algorithm. Finally, we quantify the misalignment of the RLS algorithm with respect to the condition number.
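
    For reference, the classical exponentially weighted RLS recursion can be written compactly as below; this is the textbook form, not the interpolation-based interpretation developed in the paper.

        import numpy as np

        def rls_update(w, P, x, d, lam=0.99):
            """One RLS step: w = weights, P = inverse correlation estimate,
            x = input vector, d = desired sample, lam = forgetting factor."""
            Px = P @ x
            k = Px / (lam + x @ Px)        # gain vector
            e = d - w @ x                  # a priori error
            w = w + k * e
            P = (P - np.outer(k, Px)) / lam
            return w, P

        # Toy system identification: recover h from noisy observations.
        rng = np.random.default_rng(1)
        h = np.array([0.5, -0.3, 0.2])
        w, P = np.zeros(3), np.eye(3) * 100.0
        for _ in range(200):
            x = rng.standard_normal(3)
            d = h @ x + 0.01 * rng.standard_normal()
            w, P = rls_update(w, P, x, d)
        print(w)    # close to h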

  9. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  10. Comparing Online Algorithms for Bin Packing Problems

    DEFF Research Database (Denmark)

    Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard

    2012-01-01

    The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.

  11. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ... Explicit comparisons are made in line (1), where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n − 2 is the solution to the above ...
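
    The T(n) = (3/2)n − 2 in this snippet is the classic comparison count for finding the maximum and minimum of n elements simultaneously by processing them in pairs; a minimal generic sketch (not the article's own code):

        def max_min_pairs(a):
            """(max, min) of a non-empty list in about 3n/2 - 2 comparisons:
            one comparison inside each pair, then one against the running
            max and one against the running min."""
            n = len(a)
            if n % 2:                      # odd length: seed with a[0]
                hi = lo = a[0]
                start = 1
            else:                          # even length: seed with a pair
                hi, lo = (a[0], a[1]) if a[0] >= a[1] else (a[1], a[0])
                start = 2
            for i in range(start, n - 1, 2):
                x, y = ((a[i], a[i + 1]) if a[i] >= a[i + 1]
                        else (a[i + 1], a[i]))
                if x > hi:
                    hi = x
                if y < lo:
                    lo = y
            return hi, lo

        print(max_min_pairs([3, 1, 4, 1, 5, 9, 2, 6]))   # -> (9, 1)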

  12. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    1999-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky.

  13. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and Smolensky.

  14. Deterministic algorithms for multi-criteria Max-TSP

    NARCIS (Netherlands)

    Manthey, Bodo

    2012-01-01

    We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of

  15. Verification-Based Interval-Passing Algorithm for Compressed Sensing

    OpenAIRE

    Wu, Xiaofu; Yang, Zhen

    2013-01-01

    We propose a verification-based Interval-Passing (IP) algorithm for the iterative reconstruction of nonnegative sparse signals, using parity-check matrices of low-density parity-check (LDPC) codes as measurement matrices. The proposed algorithm can be considered an improved IP algorithm that further incorporates the mechanism of the verification algorithm. It is proved that the proposed algorithm always performs better than either the IP algorithm or the verification algorithm. Simulation resul...

  16. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, both operating at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces sensitivity to the cluster size in the niching methods. Exploiting the complementary properties of the Gaussian and Cauchy distributions, offspring are generated at the niche level by alternating between the two distributions, which can likewise balance exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, as supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
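
    The alternating use of Gaussian and Cauchy offspring distributions can be sketched generically; the scales and control flow here are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(2)

        def sample_offspring(seed, scale, use_cauchy):
            """One offspring around a niche seed: Gaussian steps exploit
            (short tails), Cauchy steps explore (heavy tails permit
            occasional long jumps)."""
            if use_cauchy:
                step = scale * rng.standard_cauchy(seed.shape)
            else:
                step = scale * rng.standard_normal(seed.shape)
            return seed + step

        seed = np.array([1.0, -2.0])
        offspring = [sample_offspring(seed, 0.1, use_cauchy=(i % 2 == 0))
                     for i in range(6)]    # alternate the two distributions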

  17. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  18. Engineering a Cache-Oblivious Sorting Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  19. PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION

    OpenAIRE

    Robert Ramon de Carvalho Sousa; Abimael de Jesus Barros Costa; Eliezé Bulhões de Carvalho; Adriano de Carvalho Paranaíba; Daylyne Maerla Gomes Lima Sandoval

    2016-01-01

    This article uses the “Six Sigma” methodology to elaborate an algorithm for routing problems that is able to obtain more efficient results than Clarke and Wright's (CW) algorithm (1964) in situations where product delivery demands increase randomly and the service level cannot be raised. In some situations, the proposed algorithm obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (on...

  20. Genetic algorithms and fuzzy multiobjective optimization

    CERN Document Server

    Sakawa, Masatoshi

    2002-01-01

    Since the introduction of genetic algorithms in the 1970s, an enormous number of articles together with several significant monographs and books have been published on this methodology. As a result, genetic algorithms have made a major contribution to optimization, adaptation, and learning in a wide variety of unexpected fields. Over the years, many excellent books in genetic algorithm optimization have been published; however, they focus mainly on single-objective discrete or other hard optimization problems under certainty. There appears to be no book that is designed to present genetic algorithms for solving not only single-objective but also fuzzy and multiobjective optimization problems in a unified way. Genetic Algorithms And Fuzzy Multiobjective Optimization introduces the latest advances in the field of genetic algorithm optimization for 0-1 programming, integer programming, nonconvex programming, and job-shop scheduling problems under multiobjectiveness and fuzziness. In addition, the book treats a w...