WorldWideScience

Sample records for experiment eve algorithms

  1. Measuring Solar Doppler Velocities in the He II 30.38 nm Emission Using the EUV Variability Experiment (EVE)

    Science.gov (United States)

    Chamberlin, Phillip Clyde

    2016-01-01

The EUV Variability Experiment (EVE) onboard the Solar Dynamics Observatory has provided unprecedented measurements of the solar EUV irradiance at high temporal cadence with good spectral resolution and range since May 2010. The main purpose of EVE was to connect the Sun to the Earth by providing measurements of the EUV irradiance as a driver for space weather and Living With a Star studies, but since launch the instrument has demonstrated the significance of its measurements in contributing to studies of the sources of solar variability for pure solar physics purposes. This paper expands upon previous findings that EVE can in fact measure wavelength shifts during solar eruptive events and therefore provide Doppler velocities for plasma at all temperatures throughout the solar atmosphere, from the chromosphere to hot flaring temperatures. This process is not straightforward, as EVE was not designed or optimized for these types of measurements. In this paper we describe the many detailed instrumental characterizations needed to eliminate the optical effects in order to provide an absolute baseline for the Doppler shift studies. An example is given of a solar eruption on 7 September 2011 (SOL2011-09-07), associated with an X1.2 flare, where EVE Doppler analysis shows plasma ejected from the Sun in the He II 30.38 nm emission at a velocity of almost 120 km s^-1 along the line of sight.
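The line-of-sight velocities discussed above follow from the non-relativistic Doppler relation v = c Δλ/λ0. As a minimal illustration (not EVE's actual pipeline, which first removes the instrumental and optical effects described in the paper), a hypothetical blueshift of about 0.012 nm in the He II 30.38 nm line corresponds to roughly 120 km s^-1 toward the observer:

```python
# Illustrative Doppler-velocity calculation for an EUV emission line.
# This sketches the basic physics only; the EVE analysis described above
# additionally requires careful instrumental characterization first.

C_KM_S = 299_792.458  # speed of light in km/s

def doppler_velocity(lambda_obs_nm: float, lambda_rest_nm: float) -> float:
    """Line-of-sight velocity (km/s) from a wavelength shift.

    Negative values indicate blueshift (plasma moving toward the observer).
    """
    return C_KM_S * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# Hypothetical shift of -0.01216 nm in the He II 30.38 nm line:
v = doppler_velocity(30.38 - 0.01216, 30.38)
print(f"{v:.1f} km/s")  # about -120 km/s
```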

  2. EPO for the NASA SDO Extreme Ultraviolet Variability Experiment (EVE) Learning Suite for Educators

    Science.gov (United States)

    Kellagher, Emily; Scherrer, D. K.

    2013-07-01

EVE Education and Public Outreach (EPO) promotes an understanding of the process of science and of concepts within solar science and Sun-Earth connections. EVE EPO also features working scientists, current research and career awareness. One of the highlights of this year's projects is the digitization of solar lessons and the collaboration with the other instrument teams to develop new resources for students and educators. Digital lesson suite: EVE EPO has taken the best solar lessons and reworked them to make them more engaging, to reflect SDO data, and to make them SMARTboard compatible. We are creating a website where students and teachers can access these lessons and use them online or download them. Project team collaboration: the SDO instrument teams (EVE, AIA and HMI) have created a comic book series for upper elementary and middle school students featuring the SDO mascot Camilla. These comics may be printed or read on mobile devices. Many teachers are looking for resources to use with their students via the iPad, so our collaboration helps supply teachers with a great resource that teaches about solar concepts and helps dispel solar misconceptions.

  3. A Partnership between English Language Learners and a Team of Rocket Scientists: EPO for the NASA SDO Extreme Ultraviolet Variability Experiment (EVE)

    Science.gov (United States)

    Buhr, S. M.; McCaffrey, M. S.; Eparvier, F.; Murillo, M.

    2008-05-01

Recent immigrant high school students were successfully engaged in learning about Sun-Earth connections through a partnership with the NASA Solar Dynamics Observatory Extreme Ultraviolet Variability Experiment (EVE) project. The students were enrolled in a pilot course as part of the Math, Engineering and Science Achievement (MESA) program. The English Language Learner (ELL) students doubled their achievement on a pre- and post-assessment on the content of the course. Students learned scientific content and vocabulary in English with support in Spanish, attended field trips, hosted scientist speakers, built antennas and deployed space weather monitors as part of the Stanford SOLAR project, and gave final presentations in English, showcasing their new computer skills. Teachers who taught the students in other courses noted gains in the students' willingness to use English in class and noted gains in math skills. The course has been broken into modules for use in shorter after-school environments, or for use by EVE scientists who are outside of the Boulder area. Video footage of "The Making of a Satellite" and "All About EVE" has been completed for use in the kits. Other EVE EPO includes upcoming professional development for teachers and content workshops for journalists.

  4. A Partnership between English Language Learners and a Team of Rocket Scientists: EPO for the NASA SDO Extreme-Ultraviolet Variability Experiment (EVE)

    Science.gov (United States)

    Buhr, S. M.; Eparvier, F.; McCaffrey, M.; Murillo, M.

    2007-12-01

Recent immigrant high school students were successfully engaged in learning about Sun-Earth connections through a partnership with the NASA SDO Extreme-Ultraviolet Variability Experiment (EVE) project. The students were enrolled in a pilot course as part of the Math, Engineering and Science Achievement (MESA) program. For many of the students, this was the only science option available to them due to language limitations. The English Language Learner (ELL) students doubled their achievement on a pre- and post-assessment on the content of the course. Students learned scientific content and vocabulary in English with support in Spanish, attended field trips, hosted scientist speakers, built and deployed space weather monitors as part of the Stanford SOLAR project, and gave final presentations in English, showcasing their new computer skills. Teachers who taught the students in other courses noted gains in the students' willingness to use English in class and noted gains in math skills. The MESA-EVE course won recognition as a Colorado MESA Program of Excellence and is being offered again in 2007-08. The course has been broken into modules for use in shorter after-school environments, or for use by EVE scientists who are outside of the Boulder area. Other EVE EPO includes professional development for teachers and content workshops for journalists.

  5. Extreme Ultraviolet Variability Experiment (EVE) on the Solar Dynamics Observatory (SDO): Overview of Science Objectives, Instrument Design, Data Products, and Model Developments

    Science.gov (United States)

    Woods, T. N.; Eparvier, F. G.; Hock, R.; Jones, A. R.; Woodraska, D.; Judge, D.; Didkovsky, L.; Lean, J.; Mariska, J.; Warren, H.; hide

    2010-01-01

The highly variable solar extreme ultraviolet (EUV) radiation is the major energy input to the Earth's upper atmosphere, strongly impacting the geospace environment and affecting satellite operations, communications, and navigation. The Extreme ultraviolet Variability Experiment (EVE) onboard the NASA Solar Dynamics Observatory (SDO) will measure the solar EUV irradiance from 0.1 to 105 nm with unprecedented spectral resolution (0.1 nm), temporal cadence (ten seconds), and accuracy (20%). EVE includes several irradiance instruments: the Multiple EUV Grating Spectrographs (MEGS)-A is a grazing-incidence spectrograph that measures the solar EUV irradiance in the 5 to 37 nm range with 0.1-nm resolution, and the MEGS-B is a normal-incidence, dual-pass spectrograph that measures the solar EUV irradiance in the 35 to 105 nm range with 0.1-nm resolution. To provide MEGS in-flight calibration, the EUV SpectroPhotometer (ESP) measures the solar EUV irradiance in broadbands between 0.1 and 39 nm, and a MEGS-Photometer measures the Sun's bright hydrogen emission at 121.6 nm. The EVE data products include a near real-time space-weather product (Level 0C), which provides the solar EUV irradiance in specific bands and also spectra in 0.1-nm intervals with a cadence of one minute and with a time delay of less than 15 minutes. The EVE higher-level products are Level 2, with the solar EUV irradiance at higher time cadence (0.25 seconds for photometers and ten seconds for spectrographs), and Level 3, with averages of the solar irradiance over a day and over each one-hour period. The EVE team also plans to advance existing models of solar EUV irradiance and to operationally use the EVE measurements in models of Earth's ionosphere and thermosphere. Improved understanding of the evolution of solar flares and extending the various models to incorporate solar flare events are high priorities for the EVE team.
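The Level 0C product described above reports both 0.1-nm spectra and band-integrated irradiances; going from a binned spectrum to a band irradiance is a simple weighted sum over bins. A sketch with hypothetical numbers (not the actual EVE data format or band definitions):

```python
# Sketch: integrating a binned spectral irradiance (W/m^2/nm) over a band
# to obtain a band irradiance (W/m^2). Bin width, grid, and band edges
# are illustrative assumptions, not the real EVE product layout.

def band_irradiance(wavelengths_nm, spectrum, lo_nm, hi_nm, bin_nm=0.1):
    """Sum I(lambda) * d_lambda over bins whose centres fall in [lo, hi)."""
    return sum(i * bin_nm
               for w, i in zip(wavelengths_nm, spectrum)
               if lo_nm <= w < hi_nm)

# Hypothetical flat spectrum of 2.0 W/m^2/nm on a 0.1 nm grid from 5 to 37 nm:
grid = [5.0 + 0.1 * k for k in range(320)]
flat = [2.0] * len(grid)
print(band_irradiance(grid, flat, 17.0, 27.0))  # ~20 W/m^2 over a 10 nm band
```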

  6. Experiences of student nurses regarding the bursary system in KwaZulu Natal / Eve Precious Jacobs

    OpenAIRE

    Jacobs, Eve Precious

    2014-01-01

This is a qualitative study, the aim of which is to explore the experiences of student nurses regarding the bursary system in KwaZulu-Natal. During 2010, nursing education was confronted with the restructuring of student nurses from having supernumerary status to being bursary holders (DOH, 2010:68). This study describes the experiences of changes that have emanated from the introduction of the new bursary system. The experiences of students in this new system were explored. These include the leg...

  7. Poster - Thurs Eve-21: Experience with the Velocity(TM) pre-commissioning services.

    Science.gov (United States)

    Scora, D; Sixel, K; Mason, D; Neath, C

    2008-07-01

    As the first Canadian users of the Velocity™ program offered by Siemens, we would like to share our experience with the program. The Velocity program involves the measurement of the commissioning data by an independent Physics consulting company at the factory test cell. The data collected was used to model the treatment beams in our planning system in parallel with the linac delivery and installation. Beam models and a complete data book were generated for two photon energies including Virtual Wedge, physical wedge, and IMRT, and 6 electron energies at 100 and 110 cm SSD. Our final beam models are essentially the Velocity models with some minor modifications to customize the fit to our liking. Our experience with the Velocity program was very positive; the data collection was professional and efficient. It allowed us to proceed with confidence in our beam data and modeling and to spend more time on other aspects of opening a new clinic. With the assistance of the program we were able to open a three-linac clinic with Image-Guided IMRT within 4.5 months of machine delivery. © 2008 American Association of Physicists in Medicine.

  8. EVE and School - Enrolments

    CERN Multimedia

    EVE et École

    2017-01-01

IMPORTANT DATES Enrolments 2017-2018 Enrolments for the school year 2017-2018 in the Nursery, the Kindergarten and the School will take place on 6, 7 and 8 March 2017 from 10 am to 1 pm at EVE and School. Registration forms will be available from Thursday 2nd March. More information on the website: http://nurseryschool.web.cern.ch/.

  9. EVE and School

    CERN Multimedia

    EVE et École

    2017-01-01

IMPORTANT DATES Enrolments 2017-2018 Enrolments for the school year 2017-2018 in the Nursery, the Kindergarten and the School will take place on 6, 7 and 8 March 2017 from 10 am to 1 pm at EVE and School. Registration forms will be available from Thursday 2nd March. More information on the website: http://nurseryschool.web.cern.ch/. Saturday 4 March 2017 Open day at EVE and School of the CERN Staff Association Are you considering enrolling your child in the Children’s Day-Care Centre EVE and School of the CERN Staff Association? If you work at CERN, then this event is for you: come visit the school and meet the Management on Saturday 4 March 2017 from 10 am to 12 noon. We look forward to welcoming you and will be delighted to present our structure, its projects and premises to you, and answer all of your questions. Sign up for one of the two sessions on Doodle via the link below before Wednesday 1st March 2017: http://doodle.com/poll/gbrz683wuvixk8as

  10. Analysis list: eve [Chip-atlas Archive]

    Lifescience Database Archive (English)

    Full Text Available eve Embryo + dm3 http://dbarchive.biosciencedbc.jp/kyushu-u/dm3/target/eve.1.tsv ht...tp://dbarchive.biosciencedbc.jp/kyushu-u/dm3/target/eve.5.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/dm3/target/eve....10.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/dm3/colo/eve.Embryo.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/dm3/colo/Embryo.gml ...

  11. Learning to forecast: Genetic algorithms and experiments

    NARCIS (Netherlands)

    Makarewicz, T.A.

    2014-01-01

    The central question that this thesis addresses is how economic agents learn to form price expectations, which are a crucial element of macroeconomic and financial models. The thesis applies a Genetic Algorithms model of learning to previous laboratory experiments, explaining the observed

  12. Poster - Thur Eve - 06: Comparison of an open source genetic algorithm to the commercially used IPSA for generation of seed distributions in LDR prostate brachytherapy.

    Science.gov (United States)

    McGeachy, P; Khan, R

    2012-07-01

In early-stage prostate cancer, low dose rate (LDR) prostate brachytherapy is a favorable treatment modality, where small radioactive seeds are permanently implanted throughout the prostate. Treatment centres currently rely on a commercial optimization algorithm, IPSA, to generate seed distributions for treatment plans. However, commercial software does not allow the user access to the source code, thus reducing the flexibility for treatment planning and impeding any implementation of new and, perhaps, improved clinical techniques. An open source genetic algorithm (GA) has been encoded in MATLAB to generate seed distributions for a simplified prostate and urethra model. To assess the quality of the seed distributions created by the GA, both the GA and IPSA were used to generate seed distributions for two clinically relevant scenarios, and the quality of the GA distributions relative to IPSA distributions and clinically accepted standards for seed distributions was investigated. The first clinically relevant scenario involved generating seed distributions for three different prostate volumes (19.2 cc, 32.4 cc, and 54.7 cc). The second scenario involved generating distributions for three separate seed activities (0.397 mCi, 0.455 mCi, and 0.5 mCi). Both the GA and IPSA met the clinically accepted criteria for the two scenarios, where distributions produced by the GA were comparable to IPSA in terms of full coverage of the prostate by the prescribed dose and minimized dose to the urethra, which passed straight through the prostate. Further, the GA offered improved reduction of high dose regions (i.e., hot spots) within the planned target volume. © 2012 American Association of Physicists in Medicine.
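The optimisation loop of a simple genetic algorithm can be sketched as follows. The bitstring fitness here is a deliberately toy objective standing in for the real dosimetric one (target coverage versus urethra dose); selection, crossover, mutation, and elitism are the generic GA ingredients, not the authors' exact operators:

```python
# Toy genetic algorithm illustrating the optimisation loop described above.
# The "maximise the number of 1 bits" fitness is a stand-in for a real
# dosimetric objective; it is NOT the paper's actual model.
import random

def genetic_algorithm(n_bits=16, pop_size=30, generations=80,
                      p_mut=0.05, seed=1):
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)                 # toy objective
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    history = [fitness(best)]
    for _ in range(generations):
        def tournament():                          # selection pressure
            return max(rng.sample(pop, 3), key=fitness)
        children = []
        while len(children) < pop_size - 1:        # leave room for the elite
            a, b = tournament(), tournament()
            cut = rng.randrange(1, n_bits)         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            children.append(child)
        pop = children + [best]                    # elitism
        best = max(pop, key=fitness)
        history.append(fitness(best))
    return best, history

best, hist = genetic_algorithm()
print(hist[0], "->", hist[-1])  # best fitness never decreases with elitism
```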

  13. Bootstrapping white matter segmentation, Eve++

    Science.gov (United States)

    Plassard, Andrew; Hinton, Kendra E.; Venkatraman, Vijay; Gonzalez, Christopher; Resnick, Susan M.; Landman, Bennett A.

    2015-03-01

Multi-atlas labeling has come into widespread use for whole brain labeling on magnetic resonance imaging. Recent challenges have shown that leading techniques are near (or at) human expert reproducibility for cortical gray matter labels. However, these approaches tend to treat white matter as essentially homogeneous (as white matter exhibits isointense signal on structural MRI). The state of the art for white matter atlases is the single-subject Johns Hopkins Eve atlas. Numerous approaches have attempted to use tractography and/or orientation information to identify homologous white matter structures across subjects. Despite success with large tracts, these approaches have been plagued by difficulties with subtle differences in course, low signal-to-noise ratio, and complex structural relationships for smaller tracts. Here, we investigate the use of atlas-based labeling to propagate the Eve atlas to unlabeled datasets. We evaluate single-atlas labeling and multi-atlas labeling using synthetic atlases derived from the single manually labeled atlas. On 5 representative tracts for 10 subjects, we demonstrate that (1) single-atlas labeling generally provides segmentations within 2 mm mean surface distance, (2) morphologically constraining DTI labels within structural MRI white matter reduces variability, and (3) multi-atlas labeling did not improve accuracy. These efforts present a preliminary indication that single-atlas labeling with correction is reasonable, but caution should be applied. To pursue multi-atlas labeling and more fully characterize overall performance, more labeled datasets would be necessary.
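The 2 mm mean-surface-distance figure above is a symmetric average of nearest-neighbour distances between two label boundaries. A brute-force sketch on hypothetical point sets (real evaluations work on meshes or voxel surfaces with millimetre spacing and use spatial indexing rather than exhaustive search):

```python
# Sketch of a symmetric mean surface distance between two label boundaries,
# each given as a list of surface points. Brute-force nearest neighbour;
# the point sets below are hypothetical, not real segmentation surfaces.
import math

def mean_surface_distance(surf_a, surf_b):
    def directed(src, dst):
        # average distance from each point in src to its nearest point in dst
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (directed(surf_a, surf_b) + directed(surf_b, surf_a))

# Two hypothetical parallel surfaces 1 mm apart:
a = [(x, 0.0) for x in range(5)]
b = [(x, 1.0) for x in range(5)]
print(mean_surface_distance(a, b))  # 1.0
```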

  14. EVE et École

    CERN Multimedia

    Staff Association

    2017-01-01

A lovely end-of-year celebration On Friday 23 June, there was a crowd at the EVE and School structure of the CERN Staff Association! And for good reason: the programme for this late afternoon included numerous activities and varied games for the enjoyment of children and parents, and an open-air party in the school courtyard and garden, with games, stands and refreshments. The chips and sausages were prepared by the EVE and School team, who wore fluorescent t-shirts in reference to the theme of the year: colours. Several stands were on offer, for example a face-painting corner; thanks to the make-up artists, the children became lions, butterflies or ladybirds for the space of a few...

  15. EVE and School: Important announcement

    CERN Document Server

    Staff Association

    2017-01-01

    Children’s Day-Care Centre (EVE) and School of the CERN Staff Association would like to inform you that there are still a few places available within the structure for the school year 2017–2018, in the: Nursery for 2, 3 or 5 days a week (2- to 3-year-olds); Kindergarten for mornings (2- to 4-year-olds); Primary 1 (1p) class in the School (4- to 5-year-olds). Please get in touch with us quickly if you are interested in any of the available places. We will gladly provide further information and answer any questions you may have: Staff.Kindergarten@cern.ch or (+41) 022 767 36 04 (mornings). EVE and School of the CERN Staff Association welcomes children of CERN Members of Personnel (MPE, MPA), as well as children whose parents do not work on CERN site. We would like to remind you that registrations are also open for the Summer Camp. The camp will run through the four weeks of July from 8.30 am to 5.30 pm with a weekly registration for 450 CHF, lunch included. For more information and regist...

  16. EVE et École

    CERN Multimedia

    Staff Association

    2017-01-01

There are still places available! The Children's Day-Care Centre (EVE) and School of the CERN Staff Association informs you that a few places remain for the 2017-2018 school year: in the nursery (2- to 3-year-olds) (2, 3 or 5 days a week); in the kindergarten (2- to 4-year-olds) (mornings); in the first-year primary (1P) class (4- to 5-year-olds). Do not hesitate to contact us quickly if you are interested; we are at your disposal to answer all your questions: Staff.Kindergarten@cern.ch. The EVE and School of the CERN Staff Association is open not only to children of CERN Members of Personnel (MPE, MPA) but also to children whose parents do not work on the CERN site. ...

  17. SDO-EVE multiple EUV grating spectrograph (MEGS) optical design

    Science.gov (United States)

    Crotser, David A.; Woods, Thomas N.; Eparvier, Francis G.; Ucker, Greg; Kohnert, Richard A.; Berthiaume, Gregory D.; Weitz, David M.

    2004-10-01

The NASA Solar Dynamics Observatory (SDO), scheduled for launch in 2008, incorporates a suite of instruments including the EUV Variability Experiment (EVE). The EVE instrument package contains grating spectrographs used to measure the solar extreme ultraviolet (EUV) irradiance from 0.1 to 105 nm. The Multiple EUV Grating Spectrograph (MEGS) channels use concave reflection gratings to image solar spectra onto CCDs that are operated at -100°C. MEGS provides 0.1 nm spectral resolution between 5 and 105 nm every 10 seconds with an absolute accuracy of better than 25% over the SDO 5-year mission. MEGS-A utilizes a unique grazing-incidence, off-Rowland-circle (RC) design to minimize the angle of incidence at the detector while meeting high resolution requirements. MEGS-B utilizes a double-pass, cross-dispersed double-Rowland-circle design. MEGS-P, a Ly-α monitor, will provide a proxy model calibration in the 60-105 nm range. Finally, the Solar Aspect Monitor (SAM) channel will provide continual pointing information for EVE as well as low-resolution X-ray images of the Sun. In-flight calibrations for MEGS will be provided by the on-board EUV Spectrophotometer (ESP) in the 0.1-7 nm and 17-37 nm ranges, as well as from annual under-flight rocket experiments. We present the methodology used to develop the MEGS optical design.

  18. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    textabstractIn the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  19. Experiments with the auction algorithm for the shortest path problem

    DEFF Research Database (Denmark)

    Larsen, Jesper; Pedersen, Ib

    1999-01-01

The auction approach for the shortest path problem (SPP) as introduced by Bertsekas is tested experimentally. Parallel algorithms using the auction approach are developed and tested. Both the sequential and parallel auction algorithms perform significantly worse than a state-of-the-art Dijkstra-like reference algorithm. Experiments are run on a distributed-memory MIMD class Meiko parallel computer.
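For reference, Bertsekas's auction algorithm for the single-origin, single-destination shortest path problem maintains node "prices" and a single path that is alternately extended and contracted. A minimal sequential sketch on a hypothetical graph (the paper's parallel variants are more involved):

```python
# Minimal sequential sketch of the auction algorithm for the shortest path
# problem (nonnegative arc lengths, all cycles of positive length).
# Illustrative only; not the paper's parallel implementation.

def auction_shortest_path(arcs, origin, dest):
    """arcs: dict (i, j) -> nonnegative length. Returns (path, length)."""
    out = {}
    nodes = set()
    for (i, j), c in arcs.items():
        out.setdefault(i, []).append((j, c))
        nodes.update((i, j))
    prices = {n: 0.0 for n in nodes}   # dual variables ("prices")
    path = [origin]
    while path[-1] != dest:
        i = path[-1]
        # cheapest extension from i under current prices
        m, best = min((c + prices[j], j) for j, c in out[i])
        if prices[i] < m:              # raise the price, contract the path
            prices[i] = m
            if len(path) > 1:
                path.pop()
        else:                          # extend along the best arc
            path.append(best)
    return path, sum(arcs[e] for e in zip(path, path[1:]))

# Hypothetical 4-node graph; shortest 0 -> 3 path is 0-1-2-3 with length 3.
arcs = {(0, 1): 1, (0, 2): 4, (1, 2): 1, (1, 3): 5, (2, 3): 1}
print(auction_shortest_path(arcs, 0, 3))  # ([0, 1, 2, 3], 3)
```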

  20. Algorithmic Animation in Education--Review of Academic Experience

    Science.gov (United States)

    Esponda-Arguero, Margarita

    2008-01-01

This article is a review of the pedagogical experience obtained with systems for algorithmic animation. Algorithms consist of a sequence of operations whose effect on data structures can be visualized using a computer. Students learn algorithms by stepping the animation through the different individual operations, possibly reversing their effect.
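Stepping an animation forward and backward through individual operations amounts to keeping an undo stack of invertible operations. A minimal sketch with hypothetical list operations (not any particular animation system's API):

```python
# Minimal sketch of forward/backward stepping through algorithm operations,
# as in algorithm-animation systems: each executed operation is pushed onto
# an undo stack with enough information to reverse it. The swap operation
# and list-based data structure here are hypothetical examples.

class SteppedAnimation:
    def __init__(self, data):
        self.data = data
        self.undo_stack = []

    def do_swap(self, i, j):
        self.data[i], self.data[j] = self.data[j], self.data[i]
        self.undo_stack.append(("swap", i, j))  # a swap is its own inverse

    def step_back(self):
        op, i, j = self.undo_stack.pop()
        if op == "swap":
            self.data[i], self.data[j] = self.data[j], self.data[i]

# One bubble-sort pass on hypothetical data, then one step rewound:
anim = SteppedAnimation([3, 1, 2])
anim.do_swap(0, 1)            # [1, 3, 2]
anim.do_swap(1, 2)            # [1, 2, 3]
anim.step_back()              # back to [1, 3, 2]
print(anim.data)              # [1, 3, 2]
```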

  1. Solar flare impulsive phase emission observed with SDO/EVE

    Energy Technology Data Exchange (ETDEWEB)

Kennedy, Michael B.; Milligan, Ryan O.; Mathioudakis, Mihalis; Keenan, Francis P., E-mail: mkennedy29@qub.ac.uk [Astrophysics Research Centre, School of Mathematics and Physics, Queen's University Belfast, University Road, Belfast BT7 1NN (United Kingdom)

    2013-12-10

Differential emission measures (DEMs) during the impulsive phase of solar flares were constructed using observations from the EUV Variability Experiment (EVE) and the Markov-chain Monte Carlo method. Emission lines from ions formed over the temperature range log T_e = 5.8-7.2 allow the evolution of the DEM to be studied over a wide temperature range at 10 s cadence. The technique was applied to several M- and X-class flares, where impulsive phase EUV emission is observable in the disk-integrated EVE spectra from emission lines formed up to 3-4 MK, and we use spatially unresolved EVE observations to infer the thermal structure of the emitting region. For the nine events studied, the DEMs exhibited a two-component distribution during the impulsive phase: a low-temperature component with peak temperature of 1-2 MK, and a broad high-temperature component from 7 to 30 MK. A bimodal high-temperature component is also found for several events, with peaks at 8 and 25 MK during the impulsive phase. The origin of the emission was verified using Atmospheric Imaging Assembly images to be the flare ribbons and footpoints, indicating that the constructed DEMs represent the spatially averaged thermal structure of the chromospheric flare emission during the impulsive phase.

  2. The Adam and Eve Robot Scientists for the Automated Discovery of Scientific Knowledge

    Science.gov (United States)

    King, Ross

A Robot Scientist is a physically implemented robotic system that applies techniques from artificial intelligence to execute cycles of automated scientific experimentation. A Robot Scientist can automatically execute cycles of hypothesis formation, selection of efficient experiments to discriminate between hypotheses, execution of experiments using laboratory automation equipment, and analysis of results. The motivation for developing Robot Scientists is to better understand science, and to make scientific research more efficient. The Robot Scientist `Adam' was the first machine to autonomously discover scientific knowledge: that is, both to form and to experimentally confirm novel hypotheses. Adam worked in the domain of yeast functional genomics. The Robot Scientist `Eve' was originally developed to automate early-stage drug development, with specific application to neglected tropical diseases such as malaria, African sleeping sickness, etc. We are now adapting Eve to work on cancer. We are also teaching Eve to autonomously extract information from the scientific literature.

  3. HIV testing experiences and their implications for patient engagement with HIV care and treatment on the eve of 'test and treat': findings from a multicountry qualitative study.

    Science.gov (United States)

    Wringe, Alison; Moshabela, Mosa; Nyamukapa, Constance; Bukenya, Dominic; Ondenge, Ken; Ddaaki, William; Wamoyi, Joyce; Seeley, Janet; Church, Kathryn; Zaba, Basia; Hosegood, Victoria; Bonnington, Oliver; Skovdal, Morten; Renju, Jenny

    2017-07-01

In view of expanding 'test and treat' initiatives, we sought to elicit how the experience of HIV testing influenced subsequent engagement in HIV care among people diagnosed with HIV. As part of a multisite qualitative study, we conducted in-depth interviews in Uganda, South Africa, Tanzania, Kenya, Malawi and Zimbabwe with 5-10 health workers and 28-59 people living with HIV, per country. Topic guides covered patient and provider experiences of HIV testing and treatment services. Themes were derived through deductive and inductive coding. Various practices and techniques were employed by health workers to increase HIV testing uptake in line with national policies, some of which affected patients' subsequent engagement with HIV services. Provider-initiated testing was generally appreciated, but rarely considered voluntary, with instances of coercion and testing without consent, which could lead to disengagement from care. Conflicting rationalities for HIV testing between health workers and their clients caused tensions that undermined engagement in HIV care among people living with HIV. Although many health workers helped clients to accept their diagnosis and engage in care, some delivered static, morally charged messages regarding sexual behaviours and expectations of clinic use which discouraged future care seeking. Repeat testing was commonly reported, reflecting patients' doubts over the accuracy of prior results and beliefs that antiretroviral therapy may cure HIV. Repeat testing provided an opportunity to develop familiarity with clinical procedures, address concerns about HIV services and build trust with health workers. The principles of consent and confidentiality that should underlie HIV testing and counselling practices may be modified or omitted by health workers to achieve perceived public health benefits and policy expectations. While such actions can increase HIV testing rates, they may also jeopardise efforts to connect people diagnosed with HIV to

  4. Uus korter ja midagi veel / Eve Kaunis

    Index Scriptorium Estoniae

    Kaunis, Eve

    2008-01-01

Eve Kaunis, residential property consultant at the Uus Maa real-estate agency, on buyers' preferences when choosing apartments. The main selling points are a favourable price and plentiful added value. Examples given include a two-room apartment (interior design: Aet Piel, 71 m2) in Põhja-Tallinn in a building constructed to a design by Eugen Sacharias, and a three-room apartment (66.4 m2) in Keila in a residential building from the 1980s.

  5. The Drosophila eve insulator Homie promotes eve expression and protects the adjacent gene from repression by polycomb spreading.

    Science.gov (United States)

    Fujioka, Miki; Sun, Guizhi; Jaynes, James B

    2013-10-01

    Insulators can block the action of enhancers on promoters and the spreading of repressive chromatin, as well as facilitating specific enhancer-promoter interactions. However, recent studies have called into question whether the activities ascribed to insulators in model transgene assays actually reflect their functions in the genome. The Drosophila even skipped (eve) gene is a Polycomb (Pc) domain with a Pc-group response element (PRE) at one end, flanked by an insulator, an arrangement also seen in other genes. Here, we show that this insulator has three major functions. It blocks the spreading of the eve Pc domain, preventing repression of the adjacent gene, TER94. It prevents activation of TER94 by eve regulatory DNA. It also facilitates normal eve expression. When Homie is deleted in the context of a large transgene that mimics both eve and TER94 regulation, TER94 is repressed. This repression depends on the eve PRE. Ubiquitous TER94 expression is "replaced" by expression in an eve pattern when Homie is deleted, and this effect is reversed when the PRE is also removed. Repression of TER94 is attributable to spreading of the eve Pc domain into the TER94 locus, accompanied by an increase in histone H3 trimethylation at lysine 27. Other PREs can functionally replace the eve PRE, and other insulators can block PRE-dependent repression in this context. The full activity of the eve promoter is also dependent on Homie, and other insulators can promote normal eve enhancer-promoter communication. Our data suggest that this is not due to preventing promoter competition, but is likely the result of the insulator organizing a chromosomal conformation favorable to normal enhancer-promoter interactions. Thus, insulator activities in a native context include enhancer blocking and enhancer-promoter facilitation, as well as preventing the spread of repressive chromatin.

  6. The Drosophila eve Insulator Homie Promotes eve Expression and Protects the Adjacent Gene from Repression by Polycomb Spreading

    Science.gov (United States)

    Fujioka, Miki; Sun, Guizhi; Jaynes, James B.

    2013-01-01

    Insulators can block the action of enhancers on promoters and the spreading of repressive chromatin, as well as facilitating specific enhancer-promoter interactions. However, recent studies have called into question whether the activities ascribed to insulators in model transgene assays actually reflect their functions in the genome. The Drosophila even skipped (eve) gene is a Polycomb (Pc) domain with a Pc-group response element (PRE) at one end, flanked by an insulator, an arrangement also seen in other genes. Here, we show that this insulator has three major functions. It blocks the spreading of the eve Pc domain, preventing repression of the adjacent gene, TER94. It prevents activation of TER94 by eve regulatory DNA. It also facilitates normal eve expression. When Homie is deleted in the context of a large transgene that mimics both eve and TER94 regulation, TER94 is repressed. This repression depends on the eve PRE. Ubiquitous TER94 expression is “replaced” by expression in an eve pattern when Homie is deleted, and this effect is reversed when the PRE is also removed. Repression of TER94 is attributable to spreading of the eve Pc domain into the TER94 locus, accompanied by an increase in histone H3 trimethylation at lysine 27. Other PREs can functionally replace the eve PRE, and other insulators can block PRE-dependent repression in this context. The full activity of the eve promoter is also dependent on Homie, and other insulators can promote normal eve enhancer-promoter communication. Our data suggest that this is not due to preventing promoter competition, but is likely the result of the insulator organizing a chromosomal conformation favorable to normal enhancer-promoter interactions. Thus, insulator activities in a native context include enhancer blocking and enhancer-promoter facilitation, as well as preventing the spread of repressive chromatin. PMID:24204298

  7. A balcony gives the home its mood [Rõdu annab kodule meeleolu] / Eve Kaunis

    Index Scriptorium Estoniae

    Kaunis, Eve

    2008-01-01

    Balconies and terraces add value to an apartment. Eve Kaunis of the Uus Maa real estate agency on a 7-room apartment in Tallinn at Pärnu mnt 110 (floor area 221.2 m2, plus an 80 m2 terrace and two balconies of 14.35 m2), a 4-room apartment at Pirita tee 26f (floor area 166 m2, two balconies of 58 m2), and a 5-room apartment in the Klipper building in Merirahu (floor area 189.4 m2, five balconies and a 221.5 m2 roof terrace; the apartment's finishing and layout were done in collaboration with interior designer Helen Mäkelä)

  8. Trigger Algorithms for Alignment and Calibration at the CMS Experiment

    CERN Document Server

    Fernandez Perez Tomei, Thiago Rafael

    2017-01-01

    The data needs of the Alignment and Calibration group at the CMS experiment differ substantially from those of the physics studies groups. Data are taken at CMS through the online event selection system, which is implemented in two steps. The Level-1 Trigger is implemented on custom-made electronics and dedicated to analysing the detector information at a coarse-grained scale, while the High Level Trigger (HLT) is implemented as a series of software algorithms, running in a computing farm, that have access to the full detector information. In this paper we describe the set of trigger algorithms that is deployed to address the needs of the Alignment and Calibration group, how it fits in the general infrastructure of the HLT, and how it feeds the Prompt Calibration Loop (PCL), allowing for a fast turnaround for the alignment and calibration constants.

  9. Machine learning based global particle identification algorithms at LHCb experiment

    CERN Multimedia

    Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor

    2017-01-01

    One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep architecture and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.

  10. [A peaceful Christmas Eve at the hospital].

    Science.gov (United States)

    Ramanathan, Ramshanker; Brabrand, Mikkel; Folkestad, Lars; Hallas, Peter

    2011-12-05

    The aim of this study was to investigate admittance rates and doctors' workload during Christmas. In addition, we examined whether admittance data support the common notions that overeating during Christmas results in an increased rate of admittances for abdominal problems and that there is an increase in admittance of the elderly at the end of Christmas (i.e. "granny dumping"). A retrospective study analyzing data from the database of the hospital units of Sydvestjysk Sygehus was performed. Data covered admittance in the months spanning from November through January in 1994-2010. Data from Christmas were compared with data from adjacent months. During Christmas, significantly more patients with abdominal complaints were admitted to the hospital, and the lowest admittance rates were observed on Christmas Eve. No increased admittance among the elderly at the end of Christmas was observed in our data. We conclude that overeating during the festivities of Christmas probably results in increased admittance rates in Danish hospitals. Christmas Eve is the day on which doctors can expect the lowest workload. Although the rate of admission due to lack of care at home was high, we could find no evidence of "granny dumping".

  11. Dose intensity and efficacy of the combination of everolimus and exemestane (EVE/EXE) in a real-world population of hormone receptor-positive (ER+/PgR+), HER2-negative advanced breast cancer (ABC) patients: a multicenter Italian experience.

    Science.gov (United States)

    Ciccarese, Mariangela; Fabi, Alessandra; Moscetti, Luca; Cazzaniga, Maria Elena; Petrucelli, Luciana; Forcignanò, Rosachiara; Lupo, Laura Isabella; De Matteis, Elisabetta; Chiuri, Vincenzo Emanuele; Cairo, Giuseppe; Febbraro, Antonio; Giordano, Guido; Giampaglia, Marianna; Bilancia, Domenico; La Verde, Nicla; Maiello, Evaristo; Morritti, Maria; Giotta, Francesco; Lorusso, Vito; Latorre, Agnese; Scavelli, Claudio; Romito, Sante; Cusmai, Antonio; Palmiotti, Gennaro; Surico, Giammarco

    2017-06-01

    This retrospective analysis focused on the effect of treatment with EVE/EXE in a real-world population outside of clinical trials. We examined the efficacy of this combination in terms of PFS and RR related to dose intensity (5 mg daily versus 10 mg daily) and tolerability. 163 HER2-negative ER+/PgR+ ABC patients, treated with EVE/EXE from May 2011 to March 2016, were included in the analysis. The primary endpoints were the correlation between the daily dose and RR and PFS, as well as an evaluation of the tolerability of the combination. Secondary endpoints were RR, PFS, and OS according to the line of treatment. Patients were classified into three different groups, each with a different dose intensity of everolimus (A, B, C). RR was 29.8% (A), 27.8% (B) (p = 0.953), and not evaluable (C). PFS was 9 months (95% CI 7-11) (A), 10 months (95% CI 9-11) (B), and 5 months (95% CI 2-8) (C), p = 0.956. OS was 38 months (95% CI 24-38) (A), median not reached (B), and 13 months (95% CI 10-25) (C), p = 0.002. Adverse events were stomatitis 57.7% (11.0% grade 3-4), asthenia 46.0% (6.1% grade 3-4), hypercholesterolemia 46.0% (0.6% grade 3-4), and hyperglycemia 35.6% (5.5% grade 3-4). The main reason for discontinuation/interruption was grade 2-3 stomatitis. No correlation was found between dose intensity (5 vs. 10 mg labeled dose) and efficacy in terms of RR and PFS. The tolerability of the higher dose was poor in our experience, although this had no impact on efficacy.

  12. Approximate Quantum Adders with Genetic Algorithms: An IBM Quantum Experience

    Directory of Open Access Journals (Sweden)

    Li Rui

    2017-07-01

    It has been proven that quantum adders are forbidden by the laws of quantum mechanics. We analyze theoretical proposals for the implementation of approximate quantum adders and optimize them by means of genetic algorithms, improving previous protocols in terms of efficiency and fidelity. Furthermore, we experimentally realize a suitable approximate quantum adder with the cloud quantum computing facilities provided by IBM Quantum Experience. The development of approximate quantum adders enhances the toolbox of quantum information protocols, paving the way for novel applications in quantum technologies.

  13. Approximate Quantum Adders with Genetic Algorithms: An IBM Quantum Experience

    Science.gov (United States)

    Li, Rui; Alvarez-Rodriguez, Unai; Lamata, Lucas; Solano, Enrique

    2017-07-01

    It has been proven that quantum adders are forbidden by the laws of quantum mechanics. We analyze theoretical proposals for the implementation of approximate quantum adders and optimize them by means of genetic algorithms, improving previous protocols in terms of efficiency and fidelity. Furthermore, we experimentally realize a suitable approximate quantum adder with the cloud quantum computing facilities provided by IBM Quantum Experience. The development of approximate quantum adders enhances the toolbox of quantum information protocols, paving the way for novel applications in quantum technologies.
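
    The genetic-algorithm loop used in records 12 and 13 can be sketched generically: maintain a population of candidate solutions, select the fittest, recombine and mutate. The sketch below is purely illustrative, with a toy bit-count fitness standing in for the adder fidelity the authors optimize; all parameters are invented.

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=30, generations=60,
                   mutation_rate=0.05, seed=1):
    """Maximize `fitness` over bitstrings with a basic genetic algorithm."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_bits):               # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children                  # parents survive (elitism)
    return max(pop, key=fitness)

# Toy stand-in for an adder-fidelity score: fraction of bits set.
best = genetic_search(lambda bits: sum(bits))
print(sum(best))  # usually the all-ones optimum, n_bits = 16
```

Because the parents are carried over unchanged each generation, the best fitness in the population never decreases.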

  14. Prediction of Extreme Ultraviolet Variability Experiment (EVE)/ Extreme Ultraviolet Spectro-Photometer (ESP) Irradiance from Solar Dynamics Observatory (SDO)/ Atmospheric Imaging Assembly (AIA) Images Using Fuzzy Image Processing and Machine Learning

    Science.gov (United States)

    Colak, T.; Qahwaji, R.

    2013-03-01

    The cadence and resolution of solar images have been increasing dramatically with the launch of new spacecraft such as STEREO and SDO. This increase in data volume provides new opportunities for solar researchers, but the efficient processing and analysis of these data create new challenges. We introduce a fuzzy-based solar feature-detection system in this article. The proposed system processes SDO/AIA images using fuzzy rules to detect coronal holes and active regions. This system is fast and it can handle different size images. It is tested on six months of solar data (1 October 2010 to 31 March 2011) to generate filling factors (ratio of area of solar feature to area of rest of the solar disc) for active regions and coronal holes. These filling factors are then compared to SDO/EVE/ESP irradiance measurements. The correlation between active-region filling factors and irradiance measurements is found to be very high, which has encouraged us to design a time-series prediction system using Radial Basis Function Networks to predict ESP irradiance measurements from our generated filling factors.

  15. IDEAL: Images Across Domains, Experiments, Algorithms and Learning

    Science.gov (United States)

    Ushizima, Daniela M.; Bale, Hrishikesh A.; Bethel, E. Wes; Ercius, Peter; Helms, Brett A.; Krishnan, Harinarayan; Grinberg, Lea T.; Haranczyk, Maciej; Macdowell, Alastair A.; Odziomek, Katarzyna; Parkinson, Dilworth Y.; Perciano, Talita; Ritchie, Robert O.; Yang, Chao

    2016-11-01

    Research across science domains is increasingly reliant on image-centric data. Software tools are in high demand to uncover relevant, but hidden, information in digital images, such as those coming from faster next generation high-throughput imaging platforms. The challenge is to analyze the data torrent generated by the advanced instruments efficiently, and provide insights such as measurements for decision-making. In this paper, we overview work performed by an interdisciplinary team of computational and materials scientists, aimed at designing software applications and coordinating research efforts connecting (1) emerging algorithms for dealing with large and complex datasets; (2) data analysis methods with emphasis in pattern recognition and machine learning; and (3) advances in evolving computer architectures. Engineering tools around these efforts accelerate the analyses of image-based recordings, improve reusability and reproducibility, scale scientific procedures by reducing time between experiments, increase efficiency, and open opportunities for more users of the imaging facilities. This paper describes our algorithms and software tools, showing results across image scales, demonstrating how our framework plays a role in improving image understanding for quality control of existent materials and discovery of new compounds.

  16. Experiments on Supervised Learning Algorithms for Text Categorization

    Science.gov (United States)

    Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.

    2005-01-01

    Modern information society is facing the challenge of handling massive volume of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While the general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and data acquisitions (ACQ), WebKB, and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
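
    The processing flow this abstract outlines (vectorize documents, train per class, classify new text) can be illustrated with a deliberately simpler stand-in for SVM or PLS: bag-of-words count vectors and a nearest-centroid cosine classifier. The corpus, labels, and query below are invented toy data, not from the paper.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * v.get(w, 0) for w, c in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def centroid(vectors):
    """Sum of count vectors: the per-class 'prototype' document."""
    total = Counter()
    for v in vectors:
        total.update(v)
    return total

# Invented two-class toy corpus (the label echoes the ACQ category).
train = {
    "acq": ["company agrees to acquire rival firm in merger deal",
            "shareholders approve acquisition of the company"],
    "sports": ["team wins the final match of the season",
               "player scores twice as the team wins again"],
}
centroids = {label: centroid([vectorize(d) for d in docs])
             for label, docs in train.items()}

def classify(text):
    v = vectorize(text)
    return max(centroids, key=lambda label: cosine(v, centroids[label]))

print(classify("shareholders approve the merger"))  # acq
```

A real system would add stopword removal and tf-idf weighting before training a discriminative model such as SVM.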

  17. Flare Comparisons of the Flare Irradiance Spectral Model (FISM) to Preliminary SDO EVE Data

    Science.gov (United States)

    Chamberlin, Phillip C.

    2010-01-01

    The Solar Dynamics Observatory (SDO) launched February 11, 2010 from Kennedy Space Center and started normal science operations in April 2010. One of the instruments onboard SDO, the EUV Variability Experiment (EVE), will measure the solar EUV irradiance from 0.1-105 nm with 0.1 nm spectral resolution as well as a measure of the broad-band Lyman-Alpha emission (121.6 nm), all with less than 10 percent uncertainties. One of the biggest improvements of EVE over its predecessors is its ability to continuously measure the complete spectrum every 10 seconds, 24 hours a day, 7 days a week. This temporal coverage and cadence will greatly enhance the knowledge of the solar EUV variations during solar flares. This paper will present a comparison of the Flare Irradiance Spectral Model (FISM), which can produce an estimated EUV spectrum at 10 seconds temporal resolution, to the preliminary flare observation results from SDO EVE. The discussion will focus on the short-term EUV flare variations and evolution.

  18. Comparisons of the Flare Irradiance Spectral Model (FISM) to Preliminary SDO EVE Data

    Science.gov (United States)

    Chamberlin, Phillip

    2010-01-01

    The Solar Dynamics Observatory (SDO) launched February 11,2010 from Kennedy Space Center and started normal science operations in April 2010. One of the instruments onboard SDO, the EUV Variability Experiment (EVE), will measure the solar EUV irradiance from 0.1-105 nm with 0.1 nm spectral resolution as well as a measure of the broad-band Lyman-Alpha emission (121.6 nm), all with less than 10 percent uncertainties. One of the biggest improvements of EVE over its predecessors is its ability to continuously measure the complete spectrum every 10 seconds, 24 hours a day, 7 days a week. This temporal coverage and cadence will greatly enhance the knowledge of the solar EUV variations during solar flares. This paper will present a comparison of the Flare Irradiance Spectral Model (FISM), which can produce an estimated EUV spectrum at 10 seconds temporal resolution, to the preliminary results from SDO EVE. The discussion will focus on the short-term EUV flare variations and evolution.

  19. COMPARING SEARCHING AND SORTING ALGORITHMS EFFICIENCY IN IMPLEMENTING COMPUTATIONAL EXPERIMENT IN PROGRAMMING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    R. Sagan

    2011-11-01

    This article considers different aspects that help determine the correct choice of sorting algorithms. It also compares several algorithms needed for computational experiments on a certain class of programs.

  20. Problems afoot for the CERN kindergarten (EVEE)?

    CERN Multimedia

    Staff Association

    2016-01-01

    You might have noticed that recently the Kindergarten changed names; it is now known under the name of EVEE, which stands for 'Espace de Vie Enfantine et École', and currently welcomes 150 children between 4 months and 6 years of age. This establishment, which is under the aegis of the Staff Association, is governed by a committee composed of a mixture of the following: employers (from the Staff Association), employees, parents and the Headmistress, who is an ex officio member (see Echo 238: http://staff-association.web.cern.ch/content/quoi-de-neuf-au-jardin-d%E2%80%99enfants). Great strides have been made in the past decade. Over the previous decade, in conjunction with the CERN Administration, several new services have been introduced, including: the establishment of a canteen with a capacity of up to 60 children/day; the setting-up of a crèche for infants ranging between 4 months and 3 years (approx. 35 infants); the creation of a day-camp with the capacity to welcome up ...

  1. Algorithms

    Indian Academy of Sciences (India)

    positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used.

  2. Experiments with conjugate gradient algorithms for homotopy curve tracking

    Science.gov (United States)

    Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.

    1991-01-01

    There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
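
    The conjugate gradient step at the heart of the sparse HOMPACK routines builds on the classical conjugate gradient iteration for symmetric positive-definite systems. A minimal, unpreconditioned sketch of that iteration (not the HOMPACK or Craig's-method code) is:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x = 0 initially)
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:          # squared residual norm below tolerance
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # symmetric positive definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print(x)  # close to the exact solution [1/11, 7/11]
```

In exact arithmetic the iteration converges in at most n steps; preconditioning, as used in HOMPACK, reduces the iteration count for ill-conditioned Jacobian kernels.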

  3. Algorithms

    Indian Academy of Sciences (India)

    In the description of algorithms and programming languages, what is the role of control abstraction? What are the inherent limitations of the algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...

  4. Designing, Visualizing, and Discussing Algorithms within a CS 1 Studio Experience: An Empirical Study

    Science.gov (United States)

    Hundhausen, Christopher D.; Brown, Jonathan L.

    2008-01-01

    Within the context of an introductory CS1 unit on algorithmic problem-solving, we are exploring the pedagogical value of a novel active learning activity--the "studio experience"--that actively engages learners with algorithm visualization technology. In a studio experience, student pairs are tasked with (a) developing a solution to an algorithm…

  5. Limits on Light WIMPs with a Germanium Detector at 172 eVee threshold at the China Jinping Underground Laboratory

    CERN Document Server

    Liu, S K; Kang, K J; Cheng, J P; Wong, H T; Li, Y J; Lin, S T; Chang, J P; Chen, N; Chen, Q H; Chen, Y H; Chuang, Y C; Deng, Z; Du, Q; Gong, H; Hao, X Q; He, H J; He, Q J; Huang, H X; Huang, T R; Jiang, H; Li, H B; Li, J M; Li, J; Li, X; Li, X Q; Li, X Y; Li, Y L; Liao, H Y; Lin, F K; Lü, L C; Ma, H; Mao, S J; Qin, J Q; Ren, J; Ruan, X C; Shen, M B; Singh, L; Singh, M K; Soma, A K; Su, J; Tang, C J; Tseng, C H; Wang, J M; Wang, L; Wang, Q; Wu, S Y; Wu, Y C; Xianyu, Z Z; Xiao, R Q; Xing, H Y; Xu, F Z; Xu, Y; Xu, X J; Xue, T; Yang, C W; Yang, L T; Yang, S W; Yi, N; Yu, C X; Yu, H; Yu, X Z; Zeng, X H; Zeng, Z; Zhang, L; Zhang, Y H; Zhao, M G; Zhao, W; Zhou, Z Y; Zhu, J J; Zhu, W B; Zhu, X Z; Zhu, Z H

    2014-01-01

    The China Dark Matter Experiment reports results on light WIMP dark matter searches at the China Jinping Underground Laboratory with a germanium detector array with a total mass of 20 g. The physics threshold achieved is 172 eVee at 50% signal efficiency. With 0.784 kg-days of data, an exclusion region on the spin-independent coupling with nucleons is derived, improving over our earlier bounds for WIMP masses less than 4.6 GeV.

  6. Negotiating Marriage on the Eve of Human Rights | Besendahl ...

    African Journals Online (AJOL)

    African Sociological Review / Revue Africaine de Sociologie, Vol 8, No 1 (2004). Negotiating Marriage on the Eve of Human ...

  7. LHCb New algorithms for Flavour Tagging at the LHCb experiment

    CERN Multimedia

    Fazzini, Davide

    2016-01-01

    The Flavour Tagging technique makes it possible to identify the initial flavour of a B meson, as required for measurements of flavour oscillations and time-dependent CP asymmetries in neutral B meson systems. The identification performance at LHCb is further enhanced thanks to the contribution of new algorithms.

  8. Algorithms

    Indian Academy of Sciences (India)

    Here, i is referred to as the loop-index and 'stat-body' is any sequence of statements: while i ≤ N do stat-body; i := i + 1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers shown in Figure 1.

  9. Parallel Algorithms for Online Track Finding for the P̅ANDA Experiment at FAIR

    Science.gov (United States)

    Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; PANDA Collaboration

    2017-10-01

    P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility in construction in Darmstadt, Germany. Unlike the majority of current experiments, P̅ANDA's strategy for data acquisition is based on online event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports on the status of the development of algorithms for the reconstruction of charged particle tracks, targeted towards online data processing applications, designed for execution on data-parallel processors such as GPUs (Graphic Processing Units). Two parallel algorithms for track finding, derived from the Circle Hough algorithm, are being developed to extend the parallelism to all stages of the algorithm. The concepts of the algorithms are described, along with preliminary results and considerations about their implementations and performance.
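
    The Circle Hough idea that these algorithms derive from can be sketched in a few lines: each detector hit votes for all candidate circle centers at a fixed radius around itself, and the true center accumulates the most votes. The toy version below assumes a known radius and a coarse accumulator grid, unlike the parallel, detector-scale implementations discussed in the paper; the hit data are synthetic.

```python
import math
from collections import Counter

def circle_hough_center(hits, radius, grid_step=0.1, n_angles=360):
    """Each hit lies at distance `radius` from the unknown center, so it
    votes for every grid cell on a circle of that radius around itself;
    the cell with the most votes is the estimated center."""
    votes = Counter()
    for x, y in hits:
        for k in range(n_angles):
            theta = 2 * math.pi * k / n_angles
            cx = round((x + radius * math.cos(theta)) / grid_step)
            cy = round((y + radius * math.sin(theta)) / grid_step)
            votes[(cx, cy)] += 1
    (cx, cy), _ = votes.most_common(1)[0]
    return cx * grid_step, cy * grid_step

# Synthetic hits on a circle of radius 2 centred at (1, 1).
R, true_center = 2.0, (1.0, 1.0)
hits = [(true_center[0] + R * math.cos(t), true_center[1] + R * math.sin(t))
        for t in (0.3, 1.1, 2.0, 2.9, 4.0, 5.5)]
cx, cy = circle_hough_center(hits, R)
print(cx, cy)  # near (1.0, 1.0)
```

The voting loops are independent per hit and per angle, which is what makes the method attractive for data-parallel processors such as GPUs.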

  10. Learning motor skills from algorithms to robot experiments

    CERN Document Server

    Kober, Jens

    2014-01-01

    This book presents the state of the art in reinforcement learning applied to robotics both in terms of novel algorithms and applications. It discusses recent approaches that allow robots to learn motor skills and presents tasks that need to take into account the dynamic behavior of the robot and its environment, where a kinematic movement plan is not sufficient. The book illustrates a method that learns to generalize parameterized motor plans which is obtained by imitation or reinforcement learning, by adapting a small set of global parameters, and appropriate kernel-based reinforcement learning algorithms. The presented applications explore highly dynamic tasks and exhibit a very efficient learning process. All proposed approaches have been extensively validated with benchmarks tasks, in simulation, and on real robots. These tasks correspond to sports and games but the presented techniques are also applicable to more mundane household tasks. The book is based on the first author’s doctoral thesis, which wo...

  11. Multimedia over cognitive radio networks algorithms, protocols, and experiments

    CERN Document Server

    Hu, Fei

    2014-01-01

    Preface; About the Editors; Contributors; Network Architecture to Support Multimedia over CRN; A Management Architecture for Multimedia Communication in Cognitive Radio Networks (Alexandru O. Popescu, Yong Yao, Markus Fiedler, and Adrian P. Popescu); Paving a Wider Way for Multimedia over Cognitive Radios: An Overview of Wideband Spectrum Sensing Algorithms (Bashar I. Ahmad, Hongjian Sun, Cong Ling, and Arumugam Nallanathan); Bargaining-Based Spectrum Sharing for Broadband Multimedia Services in Cognitive Radio Network (Yang Yan, Xiang Chen, Xiaofeng Zhong, Ming Zhao, and Jing Wang); Physical Layer Mobility Challen

  12. Percolation Model for the Existence of a Mitochondrial Eve

    CERN Document Server

    Neves, A G M

    2005-01-01

    We look at the process of inheritance of mitochondrial DNA as a percolation model on trees equivalent to the Galton-Watson process. The model is exactly solvable for its percolation threshold $p_c$ and percolation probability critical exponent. In the approximation of small percolation probability, and assuming limited progeny number, we are also able to find the maximum and minimum percolation probabilities over all probability distributions for the progeny number constrained to a given $p_c$. As a consequence, we can relate existence of a mitochondrial Eve to quantitative knowledge about demographic evolution of early mankind. In particular, we show that a mitochondrial Eve may exist even in an exponentially growing population, provided that the average number of children per individual is constrained to a small range depending on the probability $p$ that a newborn child is a female.
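
    The link between a Galton-Watson process and the existence of a mitochondrial Eve can be illustrated numerically: a lineage "percolates" exactly when it escapes extinction. The sketch below estimates the extinction probability of a single lineage by Monte Carlo; the offspring distribution is an invented example chosen so the answer is known in closed form, not a distribution from the paper.

```python
import random

def extinction_probability(offspring_pmf, trials=4000, max_gen=30,
                           pop_cap=200, seed=7):
    """Monte Carlo estimate of the probability that a single lineage
    dies out in a Galton-Watson branching process."""
    rng = random.Random(seed)
    ks = list(offspring_pmf)
    ps = [offspring_pmf[k] for k in ks]
    extinct = 0
    for _ in range(trials):
        population = 1
        for _ in range(max_gen):
            # One offspring draw per individual; the cap keeps runtime
            # bounded, and capped (large) lines almost never die out anyway.
            population = sum(rng.choices(ks, weights=ps,
                                         k=min(population, pop_cap)))
            if population == 0:
                extinct += 1
                break
    return extinct / trials

# 0, 1 or 2 daughters with probabilities 1/4, 1/4, 1/2 (mean 1.25 > 1):
# the extinction probability solves q = 1/4 + q/4 + q^2/2, giving q = 1/2.
q_hat = extinction_probability({0: 0.25, 1: 0.25, 2: 0.5})
print(q_hat)  # close to 0.5
```

In a supercritical process like this one, half of all founding lineages still die out, which is why a single surviving maternal line (an "Eve") is compatible with a growing population.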

  13. HiEve: A corpus for extracting event hierarchies from news stories

    OpenAIRE

    Glavaš, Goran; Šnajder, Jan; Kordjamshidi, Parisa; Moens, Marie-Francine

    2014-01-01

    Narratives in news stories typically describe a real-world event of coarse spatial and temporal granularity along with its subevents. In this work, we present HiEve, a corpus for recognizing relations of spatiotemporal containment between events. In HiEve, the narratives are represented as hierarchies of events based on relations of spatiotemporal containment (i.e., superevent–subevent relations). We describe the process of manual annotation of HiEve. Furthermore, we build a supervised cla...

  14. Algorithms

    Indian Academy of Sciences (India)

    Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M.

  15. Algorithms

    Indian Academy of Sciences (India)

    number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m × n and n × p respectively. Then, multiplication of A by B, denoted A × B, is defined by matrix C of dimension m × p where.

  16. Control Algorithms for a Sailboat Robot with a Sea Experiment

    OpenAIRE

    Clement, Benoit

    2013-01-01

    A sailboat robot is a highly nonlinear system whose control is nevertheless relatively easy. Indeed, its mechanical design is the result of an evolution over thousands of years. This paper focuses on a control strategy that remains simple, with few parameters to adjust, each meaningful with respect to intuition. A test on the sailboat robot called Vaimos is presented to illustrate the performance of the regulator in a sea experiment. Moreover, the HardWare In the ...

  17. Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR

    Science.gov (United States)

    Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; Adinetz, A.; Kraus, J.; Pleiter, D.

    2015-12-01

    P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility in construction in Darmstadt, Germany. In contrast to the majority of current experiments, P̅ANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged particle tracks, optimized for online data processing, using General-Purpose Graphics Processing Units (GPUs). Two algorithms for track finding, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented.

  18. Optimal control inspired algorithm for real-space optimization with application to Majorana fermion experiments

    Science.gov (United States)

    Boutin, Samuel; Camirand Lemyre, Julien; Turcotte, Sara; Pioro-Ladrière, Michel; Garate, Ion

    Inspired by the success of optimal control theory algorithms in the design of new, fast and accurate gates for quantum information processing, we import the mindset of these time-domain optimization strategies to static real-space functions in solid-state systems. Combining ideas from the GRAPE (Gradient Ascent Pulse Engineering) algorithm and transport calculations, we devise a new gradient-based algorithm for the optimization of transport-related quantities through the real-space variation of experimentally controllable parameters. This technique can be useful for the design of experiments in mesoscopic solid-state systems. As an example, we apply our algorithm to the optimization of the topological visibility of Majorana fermions in superconducting nanowires without spin-orbit coupling in a non-uniform magnetic field.
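
    The GRAPE-style gradient loop described in this abstract can be caricatured with finite-difference gradient ascent on a toy objective. The 'visibility' function and its parameters below are purely hypothetical stand-ins for the transport quantity and spatial profile the authors optimize.

```python
def gradient_ascent(objective, params, rate=0.1, steps=200, eps=1e-6):
    """Maximize `objective` over a list of real parameters using
    forward finite-difference gradients, in the spirit of GRAPE's
    gradient updates on controllable parameters."""
    params = list(params)
    for _ in range(steps):
        base = objective(params)
        grad = []
        for i in range(len(params)):
            bumped = params[:]
            bumped[i] += eps
            grad.append((objective(bumped) - base) / eps)
        params = [p + rate * g for p, g in zip(params, grad)]
    return params

# Hypothetical 'topological visibility' peaked at profile (0.5, -0.25).
def visibility(b):
    return 1.0 - (b[0] - 0.5) ** 2 - (b[1] + 0.25) ** 2

opt = gradient_ascent(visibility, [0.0, 0.0])
print(opt)  # approaches [0.5, -0.25]
```

In practice the gradient would come from a transport calculation rather than finite differences, and each parameter would correspond to an experimentally controllable field value at one position along the nanowire.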

  19. Thermodynamic Spectrum of Solar Flares Based on SDO/EVE Observations: Techniques and First Results

    Science.gov (United States)

    Wang, Yuming; Zhou, Zhenjun; Zhang, Jie; Liu, Kai; Liu, Rui; Shen, Chenglong; Chamberlin, Phillip C.

    2016-01-01

    The Solar Dynamics Observatory (SDO)/EUV Variability Experiment (EVE) provides rich information on the thermodynamic processes of solar activities, particularly on solar flares. Here, we develop a method to construct thermodynamic spectrum (TDS) charts based on the EVE spectral lines. This tool could potentially be useful for extreme ultraviolet (EUV) astronomy to learn about the eruptive activities on distant astronomical objects. Through several cases, we illustrate what we can learn from the TDS charts. Furthermore, we apply the TDS method to 74 flares equal to or greater than the M5.0 class, and reach the following statistical results. First, EUV peaks are always behind the soft X-ray (SXR) peaks and stronger flares tend to have faster cooling rates. There is a power-law correlation between the peak delay times and the cooling rates, suggesting a coherent cooling process of flares from SXR to EUV emissions. Second, there are two distinct temperature drift patterns, called Type I and Type II. For Type I flares, the enhanced emission drifts from high to low temperature like a quadrilateral, whereas for Type II flares the drift pattern looks like a triangle. Statistical analysis suggests that Type II flares are more impulsive than Type I flares. Third, for late-phase flares, the peak intensity ratio of the late phase to the main phase is roughly correlated with the flare class, and the flares with a strong late phase are all confined. We believe that the re-deposition of the energy carried by a flux rope, which unsuccessfully erupts out, into thermal emissions is responsible for the strong late phase found in a confined flare. Furthermore, we show the signatures of the flare thermodynamic process in the chromosphere and transition region in the TDS charts. These results provide new clues to advance our understanding of the thermodynamic processes of solar flares and associated solar eruptions, e.g., coronal mass ejections.

  20. Overview of EVE – the event visualization environment of ROOT

    CERN Document Server

    Tadel, M

    2010-01-01

    EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management, offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation...

  1. Classical boson sampling algorithms with superior performance to near-term experiments

    Science.gov (United States)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
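    The "intractable matrix functions" that govern photon statistics are matrix permanents: the amplitude of each output pattern is proportional to the permanent of a submatrix of the interferometer's unitary. As a hedged illustration (this is Ryser's classical exact formula, not the authors' Metropolised independence sampler), an n×n permanent can be evaluated in O(2^n · n^2) arithmetic operations:

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i sum_{j in S} a[i][j]."""
    n = len(a)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

    Even this exponential-time exact evaluation remains tractable around the ~30-photon scale discussed in the abstract, which is part of why classical simulation stays competitive with near-term experiments.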

  2. Vertigo in childhood: proposal for a diagnostic algorithm based upon clinical experience.

    Science.gov (United States)

    Casani, A P; Dallan, I; Navari, E; Sellari Franceschini, S; Cerchiai, N

    2015-06-01

    The aim of this paper is to analyse, after clinical experience with a series of patients with established diagnoses and review of the literature, all relevant anamnestic features in order to build a simple diagnostic algorithm for vertigo in childhood. This study is a retrospective chart review. A series of 37 children underwent complete clinical and instrumental vestibular examination. Only neurological disorders or genetic diseases represented exclusion criteria. All diagnoses were reviewed after applying the most recent diagnostic guidelines. In our experience, the most common aetiology for dizziness is vestibular migraine (38%), followed by acute labyrinthitis/neuritis (16%) and somatoform vertigo (16%). Benign paroxysmal vertigo was diagnosed in 4 patients (11%) and paroxysmal torticollis was diagnosed in a 1-year-old child. In 8% (3 patients) of cases, the dizziness had a post-traumatic origin: 1 canalolithiasis of the posterior semicircular canal and 2 labyrinthine concussions, respectively. Menière's disease was diagnosed in 2 cases. A bilateral vestibular failure of unknown origin caused chronic dizziness in 1 patient. In conclusion, this algorithm could represent a good tool for guiding clinical suspicion to correct diagnostic assessment in dizzy children where no neurological findings are detectable. The algorithm has just a few simple steps, based mainly on two aspects to be investigated early: temporal features of vertigo and presence of hearing impairment. A different algorithm has been proposed for cases in which a traumatic origin is suspected.
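    The two-step structure the abstract describes (temporal features of the vertigo first, then presence of hearing impairment) can be caricatured as a small decision function. This is an illustrative sketch only, with hypothetical names and thresholds; it is not the authors' validated algorithm and is not clinical guidance:

```python
def triage_vertigo(episodic: bool, duration_minutes: float, hearing_loss: bool) -> str:
    """Illustrative triage sketch -- NOT a clinical tool. Mirrors the two
    axes highlighted in the abstract: temporal features and hearing status."""
    if not episodic:
        # A single prolonged episode
        return "consider acute labyrinthitis/neuritis work-up" if hearing_loss \
            else "consider vestibular neuritis / somatoform causes"
    # Recurrent episodes
    if hearing_loss:
        return "consider Meniere's disease work-up"
    if duration_minutes < 5:
        return "consider benign paroxysmal vertigo"
    return "consider vestibular migraine"
```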

  3. The Lawless Frontier of Deep Space: Code as Law in EVE Online

    Directory of Open Access Journals (Sweden)

    Melissa de Zwart

    2014-03-01

    This article explores the concepts of player agency with respect to governance and regulation of online games. It considers the unique example of the Council of Stellar Management in EVE Online, and explores the multifaceted role performed by players involved in that Council. In particular, it considers the interaction between code, rules, contracts, and play with respect to EVE Online. This is used as a means to better understand the relations of power generated in game spaces.

  4. Ex-Vivo Uterine Environment (EVE) Therapy Induced Limited Fetal Inflammation in a Premature Lamb Model.

    Directory of Open Access Journals (Sweden)

    Yuichiro Miura

    Ex-vivo uterine environment (EVE) therapy uses an artificial placenta to provide gas exchange and nutrient delivery to a fetus submerged in an amniotic fluid bath. Development of EVE may allow us to treat very premature neonates without mechanical ventilation. Meanwhile, elevations in fetal inflammation are associated with adverse neonatal outcomes. In the present study, we analysed fetal survival, inflammation and pulmonary maturation in preterm lambs maintained on EVE therapy using a parallelised umbilical circuit system with a low priming volume. Ewes underwent surgical delivery at 115 days of gestation (term is 150 days), and fetuses were transferred to EVE therapy (EVE group; n = 5). Physiological parameters were continuously monitored; fetal blood samples were intermittently obtained to assess wellbeing and targeted to reference range values for 2 days. Age-matched animals (Control group; n = 6) were surgically delivered at 117 days of gestation. Fetal blood and tissue samples were analysed and compared between the two groups. Fetal survival time in the EVE group was 27.0 ± 15.5 (group mean ± SD) hours. Only one fetus completed the pre-determined study period with optimal physiological parameters, while the other 4 animals demonstrated physiological deterioration or death prior to the pre-determined study end point. Significant elevations (p < 0.05) in markers of fetal inflammation were observed, whereas there was no significant difference (p > 0.05) in surfactant protein mRNA expression level between the two groups. In this study, we achieved limited fetal survival using EVE therapy. Despite this, EVE therapy only induced a modest fetal inflammatory response and did not promote lung maturation. These data provide additional insight into markers of treatment efficacy for the assessment of future studies.

  5. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    Directory of Open Access Journals (Sweden)

    Soodabeh Darzi

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, namely searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses them as the agents' positions in the searching process. In this way, the best trajectories found so far are retained and the search restarts from them, which allows the algorithm to avoid local optima. The agents can also move faster in the search space, giving better exploration during the first stage of the search process, and they converge rapidly to the optimal solution at the final stage by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of several well-known heuristic methods, verifying the proposed method both in reaching optimal solutions and in robustness.
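    The two modifications can be sketched in a toy gravitational-search loop: an elite memory of the best evaluations re-seeds the agents each iteration ("experience"), and a damping coefficient that grows over time shifts the search from exploration to exploitation. All parameter values here are illustrative, not those of the paper:

```python
import math
import random

def ecgsa(f, dim, n_agents=20, iters=40, lo=-5.0, hi=5.0, seed=1):
    """Toy ECGSA-style minimizer: gravitational search with an elite
    memory of best-so-far positions and a dynamic damping coefficient."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    memory = []  # best (fitness, position) pairs found so far
    for t in range(iters):
        fit = [f(p) for p in pos]
        ranked = sorted(zip(fit, pos), key=lambda z: z[0])
        # elite memory keeps copies so later moves cannot corrupt it
        memory = sorted(memory + [(fi, list(p)) for fi, p in ranked[:3]],
                        key=lambda z: z[0])[:n_agents]
        # restart agents from remembered best trajectories ("experience")
        pos = [list(p) for _, p in (memory + ranked)[:n_agents]]
        fit = [f(p) for p in pos]
        best, worst = min(fit), max(fit)
        m = [(worst - fi) / (worst - best + 1e-12) for fi in fit]
        s = sum(m) + 1e-12
        m = [mi / s for mi in m]  # normalized masses (minimization)
        alpha = 5.0 + 15.0 * t / iters          # dynamic damping coefficient
        g = 100.0 * math.exp(-alpha * t / iters)  # gravitational "constant"
        for i in range(n_agents):
            for d in range(dim):
                force = sum(rng.random() * g * m[j] *
                            (pos[j][d] - pos[i][d]) /
                            (abs(pos[j][d] - pos[i][d]) + 1e-9)
                            for j in range(n_agents) if j != i)
                vel[i][d] = rng.random() * vel[i][d] + force
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
    return min(memory, key=lambda z: z[0])  # (best fitness, best position)
```

    Because the elite memory is never discarded, the best fitness returned is monotonically non-increasing over iterations, which is the point of the "experience" modification.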

  6. Achievement of Sustained Net Plasma Heating in a Fusion Experiment with the Optometrist Algorithm.

    Science.gov (United States)

    Baltz, E A; Trask, E; Binderbauer, M; Dikovsky, M; Gota, H; Mendoza, R; Platt, J C; Riley, P F

    2017-07-25

    Many fields of basic and applied science require efficiently exploring complex systems with high dimensionality. An example of such a challenge is optimising the performance of plasma fusion experiments. The highly-nonlinear and temporally-varying interaction between the plasma, its environment and external controls presents a considerable complexity in these experiments. A further difficulty arises from the fact that there is no single objective metric that fully captures both plasma quality and equipment constraints. To efficiently optimise the system, we develop the Optometrist Algorithm, a stochastic perturbation method combined with human choice. Analogous to getting an eyeglass prescription, the Optometrist Algorithm confronts a human operator with two alternative experimental settings and associated outcomes. A human operator then chooses which experiment produces subjectively better results. This innovative technique led to the discovery of an unexpected record confinement regime with positive net heating power in a field-reversed configuration plasma, characterised by a >50% reduction in the energy loss rate and concomitant increase in ion temperature and total plasma energy.
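    The outer loop of the Optometrist Algorithm is simple to sketch: repeatedly perturb the current machine settings and let an operator pick between "option 1 and option 2", exactly like an eyeglass fitting. In this hedged sketch the human judgment is stood in for by a `choose(a, b)` callback; everything here is illustrative, not the authors' implementation:

```python
import random

def optometrist(settings, perturb, choose, rounds=50, seed=0):
    """Stochastic perturbation combined with pairwise (human) choice.
    `perturb(current, rng)` proposes new settings; `choose(a, b)` returns
    whichever of the two the operator subjectively prefers."""
    rng = random.Random(seed)
    current = dict(settings)
    for _ in range(rounds):
        candidate = perturb(current, rng)
        current = choose(current, candidate)  # operator picks the better one
    return current
```

    With a proxy chooser that scores settings numerically, the loop behaves as a simple hill-climber; the novelty in the paper is that the "score" is a human judgment for which no single objective metric exists.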

  7. THERMODYNAMIC SPECTRUM OF SOLAR FLARES BASED ON SDO/EVE OBSERVATIONS: TECHNIQUES AND FIRST RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuming; Zhou, Zhenjun; Liu, Kai; Liu, Rui; Shen, Chenglong [CAS Key Laboratory of Geospace Environment, Department of Geophysics and Planetary Sciences, University of Science and Technology of China, Hefei, Anhui 230026 (China); Zhang, Jie [School of Physics, Astronomy and Computational Sciences, George Mason University, 4400 University Drive, MSN 6A2, Fairfax, VA 22030 (United States); Chamberlin, Phillip C., E-mail: ymwang@ustc.edu.cn [Solar Physics Laboratory, Heliophysics Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2016-03-15

    The Solar Dynamics Observatory (SDO)/EUV Variability Experiment (EVE) provides rich information on the thermodynamic processes of solar activities, particularly on solar flares. Here, we develop a method to construct thermodynamic spectrum (TDS) charts based on the EVE spectral lines. This tool could potentially be useful for extreme ultraviolet (EUV) astronomy to learn about the eruptive activities on distant astronomical objects. Through several cases, we illustrate what we can learn from the TDS charts. Furthermore, we apply the TDS method to 74 flares equal to or greater than the M5.0 class, and reach the following statistical results. First, EUV peaks are always behind the soft X-ray (SXR) peaks and stronger flares tend to have faster cooling rates. There is a power-law correlation between the peak delay times and the cooling rates, suggesting a coherent cooling process of flares from SXR to EUV emissions. Second, there are two distinct temperature drift patterns, called Type I and Type II. For Type I flares, the enhanced emission drifts from high to low temperature like a quadrilateral, whereas for Type II flares the drift pattern looks like a triangle. Statistical analysis suggests that Type II flares are more impulsive than Type I flares. Third, for late-phase flares, the peak intensity ratio of the late phase to the main phase is roughly correlated with the flare class, and the flares with a strong late phase are all confined. We believe that the re-deposition of the energy carried by a flux rope, which unsuccessfully erupts out, into thermal emissions is responsible for the strong late phase found in a confined flare. Furthermore, we show the signatures of the flare thermodynamic process in the chromosphere and transition region in the TDS charts. These results provide new clues to advance our understanding of the thermodynamic processes of solar flares and associated solar eruptions, e.g., coronal mass ejections.

  8. SU-E-T-344: Validation and Clinical Experience of Eclipse Electron Monte Carlo Algorithm (EMC)

    Energy Technology Data Exchange (ETDEWEB)

    Pokharel, S [21st Century Oncology, Fort Myers, FL (United States); Rana, S [Procure Proton Therapy Center, Oklahoma City, OK (United States)

    2014-06-01

    Purpose: The purpose of this study is to validate the Eclipse Electron Monte Carlo (EMC) algorithm for routine clinical use. Methods: The PTW inhomogeneity phantom (T40037), with different combinations of heterogeneous slabs, was CT-scanned with a Philips Brilliance 16-slice scanner. The phantom contains blocks of Rando Alderson materials mimicking lung, polystyrene (tissue), PTFE (bone) and PMMA. The phantom has a 30×30×2.5 cm base plate with 2 cm recesses to insert inhomogeneities. The detector systems used in this study were diodes, TLDs and Gafchromic EBT2 film. The diodes and TLDs were included in the CT scans. The CT sets were transferred to the Eclipse treatment planning system, and several plans were created with the Eclipse Monte Carlo (EMC) algorithm 11.0.21. Measurements were carried out on a Varian TrueBeam machine for energies from 6 to 22 MeV. Results: The measured and calculated doses agreed very well for tissue-like media. The agreement was reasonable in the presence of lung inhomogeneity: the point dose agreement was within 3.5%, and the gamma passing rate at 3%/3mm was greater than 93%, except for 6 MeV (85%). The disagreement can reach as high as 10% in the presence of bone inhomogeneity. This is because Eclipse reports dose to medium, as opposed to dose to water as in conventional calculation engines. Conclusion: Care must be taken when using the Varian Eclipse EMC algorithm for routine clinical dose calculation. The algorithm does not report dose to water, on which most clinical experience is based; rather, it reports dose to medium directly. In the presence of an inhomogeneity such as bone, the dose discrepancy can be as high as 10% or more, depending on the location of the normalization point or volume. As radiation oncology is an empirical science, care must be taken before using EMC-reported monitor units for clinical use.
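    The gamma passing rate quoted in the Results can be illustrated with a toy one-dimensional gamma computation (global normalization, 3%/3 mm defaults). This is a didactic sketch of the standard gamma-index definition, not the evaluation software used in the study:

```python
def gamma_1d(ref, meas, dx, dose_tol=0.03, dist_tol=3.0):
    """Toy 1-D gamma index: for each reference point, the minimum over
    measured points of sqrt((Δdose/dose_tol)^2 + (Δdist/dist_tol)^2).
    Dose tolerance is relative to the reference maximum (global gamma);
    `dx` is the grid spacing in mm. A point passes if gamma <= 1."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_tol * d_max)  # dose-difference term
            dist = (j - i) * dx / dist_tol       # distance-to-agreement term
            best = min(best, (dd * dd + dist * dist) ** 0.5)
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

    A measured profile shifted by 1 mm still passes everywhere under 3%/3 mm, which is the kind of spatial tolerance the criterion is designed to grant.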

  9. Birds flee en masse from New Year's Eve fireworks.

    Science.gov (United States)

    Shamoun-Baranes, Judy; Dokter, Adriaan M; van Gasteren, Hans; van Loon, E Emiel; Leijnse, Hidde; Bouten, Willem

    2011-11-01

    Anthropogenic disturbances of wildlife, such as noise, human presence, hunting activity, and motor vehicles, are becoming an increasing concern in conservation biology. Fireworks are an important part of celebrations worldwide, and although humans often find fireworks spectacular, fireworks are probably perceived quite differently by wild animals. Behavioral responses to fireworks are difficult to study at night, and little is known about the negative effects fireworks may have on wildlife. Every year, thousands of tons of fireworks are lit by civilians on New Year's Eve in the Netherlands. Using an operational weather radar, we quantified the reaction of birds to fireworks in 3 consecutive years. Thousands of birds took flight shortly after midnight, with high aerial movements lasting at least 45 min and peak densities measured at 500 m altitude. The highest densities were observed over grasslands and wetlands, including nature conservation sites, where thousands of waterfowl rest and feed. The Netherlands is the most important winter staging area for several species of waterfowl in Europe. We estimate that hundreds of thousands of birds in the Netherlands take flight due to fireworks. The spatial and temporal extent of disturbance is substantial, and potential consequences are discussed. Weather radar provides a unique opportunity to study the reaction of birds to fireworks, which has otherwise remained elusive.

  10. Registrations for EVE and School and Summer Camp

    CERN Multimedia

    Staff Association

    2017-01-01

    In the wake of the Open Day, held on Saturday, 4 March 2017 (see Echo No. 264), EVE and School launched into an enrolment campaign on 6, 7 and 8 March. Once again, this year, we registered a great number of applications, and most of the groups are now full. The Nursery is already full, including the groups for babies (4 months to 1 year old), walkers (1 to 2 years old), and 2- to 3-year-olds. Regarding the Kindergarten, which welcomes 2- to 3 year-old children enrolled for mornings, as well as 3- to 4-year-olds enrolled either for mornings or for full days, there are still places available in the morning groups. Finally, the School for children aged 4 to 6 (Primary 1 and 2) enrolled for mornings or for full days, will be composed of three classes of around twenty children in 2017–2018 (one class of P1 and two classes of P2). All of these classes are currently full. If you are interested in a place in the morning groups of the Kindergarten (2- to 4-year-olds), please contact us to enroll your ...

  11. Xerxes in Mikhail Bulgakov’s Play Adam and Eve

    Directory of Open Access Journals (Sweden)

    Souren A. Takhtajan

    2015-08-01

    In his Histories (8.118), Herodotus tells the dramatic story of Xerxes' return to Asia from Greece by sea. The overcrowded ship was caught in a storm, and the captain advised the king to get rid of most of the passengers. Xerxes called on the Persians to prove their loyalty to the king, and they showed their obeisance by leaping into the sea. After his return, Xerxes awarded the captain of the ship a golden crown for saving the king's life, and then cut off his head for causing the deaths of so many Persians. In the play Adam and Eve, Bulgakov depicts the outbreak of war between the Soviet Union and the Western world. Nearly everyone in Leningrad dies from a gas attack. But Efrosimov, a chemistry professor and a man of genius, has managed to save the lives of the other characters in the play, the Soviet fighter pilot Daragan included. Nevertheless, Daragan hates Efrosimov for his efforts to prevent the war and then to stop it. The fighter pilot imagines for the professor both an award for his merits and subsequent capital punishment for his alleged crime. In this article, I suggest that Bulgakov borrowed the motif of award and execution from Herodotus.

  12. Sounding Rocket Observations of Active Region Soft X-Ray Spectra Between 0.5 and 2.5 nm Using a Modified SDO/EVE Instrument

    Science.gov (United States)

    Wieman, Seth; Didkovsky, Leonid; Woods, Thomas; Jones, Andrew; Moore, Christopher

    2016-12-01

    Spectrally resolved measurements of individual solar active regions (ARs) in the soft X-ray (SXR) range are important for studying dynamic processes in the solar corona and their associated effects on the Earth's upper atmosphere. They are also a means of evaluating atomic data and elemental abundances used in physics-based solar spectral models. However, very few such measurements are available. We present spectral measurements of two individual ARs in the 0.5 to 2.5 nm range obtained on the NASA 36.290 sounding rocket flight of 21 October 2013 (at about 18:30 UT) using the Solar Aspect Monitor (SAM), a channel of the Extreme Ultraviolet Variability Experiment (EVE) payload designed for underflight calibrations of the orbital EVE on the Solar Dynamics Observatory (SDO). The EVE rocket instrument is a duplicate of the EVE on SDO, except that the SAM channel on the rocket version was modified in 2012 to include a freestanding transmission grating to provide spectrally resolved images of the solar disk with the best signal-to-noise ratio for the brightest features, such as ARs. Calibrations of the EVE sounding rocket instrument at the National Institute of Standards and Technology Synchrotron Ultraviolet Radiation Facility (NIST/SURF) have provided a measurement of the SAM absolute spectral response function and a mapping of wavelength separation in the grating diffraction pattern. We discuss techniques (incorporating the NIST/SURF data) for determining SXR spectra from the dispersed AR images, as well as the resulting spectra for NOAA ARs 11877 and 11875 observed on the 2013 rocket flight. In comparisons with physics-based spectral models using the CHIANTI v8 atomic database, we find that both AR spectra are in good agreement with isothermal spectra (4 MK), as well as spectra based on an AR differential emission measure (DEM) included with the CHIANTI distribution, with the exception of the relative intensities of strong Fe XVII lines associated with the 2p6-2p53s and 2p6-2p

  13. An artificial neural network based $b$ jet identification algorithm at the CDF Experiment

    CERN Document Server

    Freeman, J; Ketchum, W; Poprocki, S; Pronko, A; Rusu, V; Wittich, P

    2011-01-01

    We present the development and validation of a new multivariate $b$ jet identification algorithm ("$b$ tagger") used at the CDF experiment at the Fermilab Tevatron. At collider experiments, $b$ taggers allow one to distinguish particle jets containing $B$ hadrons from other jets. Employing feed-forward neural network architectures, this tagger is unique in its emphasis on using information from individual tracks. This tagger not only contains the usual advantages of a multivariate technique such as maximal use of information in a jet and tunable purity/efficiency operating points, but is also capable of evaluating jets with only a single track. To demonstrate the effectiveness of the tagger, we employ a novel method wherein we calculate the false tag rate and tag efficiency as a function of the placement of a lower threshold on a jet's neural network output value in $Z+1$ jet and $t\\bar{t}$ candidate samples, rich in light flavor and $b$ jets, respectively.
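    The threshold scan described in the last sentence reduces to counting jets above a cut on the network output: tag efficiency is the fraction of b jets passing the threshold, and the false-tag (mistag) rate is the fraction of light-flavor jets passing it. A hedged sketch with illustrative data, not the CDF tagger itself:

```python
def roc_points(nn_outputs_b, nn_outputs_light, thresholds):
    """For each lower threshold on the neural-network output, return
    (threshold, tag efficiency on b jets, false-tag rate on light jets)."""
    points = []
    for t in thresholds:
        eff = sum(o >= t for o in nn_outputs_b) / len(nn_outputs_b)
        mistag = sum(o >= t for o in nn_outputs_light) / len(nn_outputs_light)
        points.append((t, eff, mistag))
    return points
```

    Scanning a grid of thresholds traces out the tunable purity/efficiency operating points the abstract mentions.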

  14. Head and neck paragangliomas: A two-decade institutional experience and algorithm for management.

    Science.gov (United States)

    Smith, Joshua D; Harvey, Rachel N; Darr, Owen A; Prince, Mark E; Bradford, Carol R; Wolf, Gregory T; Else, Tobias; Basura, Gregory J

    2017-12-01

    Paragangliomas of the head and neck and cranial base are typically benign, slow-growing tumors arising within the jugular foramen, middle ear, carotid bifurcation, or vagus nerve proper. The objective of this study was to provide a comprehensive characterization of our institutional experience with clinical management of these tumors and posit an algorithm for diagnostic evaluation and treatment. This was a retrospective cohort study of patients undergoing treatment for paragangliomas of the head and neck and cranial base at our institution from 2000 to 2017. Data on tumor location, catecholamine levels, specific imaging modalities employed in diagnostic work-up, pre-treatment cranial nerve palsy, treatment modality, utilization of preoperative angiographic embolization, complications of treatment, tumor control and recurrence, and hereditary status (i.e., succinate dehydrogenase mutations) were collected and summarized. The mean (SD) age of our cohort was 51.8 (±16.1) years, with 123 (63.4%) female patients and 71 (36.6%) male patients. Catecholamine-secreting lesions were found in nine (4.6%) patients. Fifty-one patients underwent genetic testing, with mutations identified in 43 (20 SDHD, 13 SDHB, 7 SDHC, and 1 each of SDHA, SDHAF2, and NF1). Observation with serial imaging, surgical extirpation, radiation, and stereotactic radiosurgery were variably employed as treatment approaches across anatomic subsites. An algorithmic approach to clinical management of these tumors, derived from our longitudinal institutional experience and current empiric evidence, may assist otolaryngologists, radiation oncologists, and geneticists in the care of these complex neoplasms. Level of evidence: 4.

  15. The Slowly Varying Corona. I. Daily Differential Emission Measure Distributions Derived from EVE Spectra

    Science.gov (United States)

    Schonfeld, S. J.; White, S. M.; Hock-Mysliwiec, R. A.; McAteer, R. T. J.

    2017-08-01

    Daily differential emission measure (DEM) distributions of the solar corona are derived from spectra obtained by the Extreme-ultraviolet Variability Experiment (EVE) over a 4 yr period starting in 2010 near solar minimum and continuing through the maximum of solar cycle 24. The DEMs are calculated using six strong emission features dominated by Fe lines of charge states VIII, IX, XI, XII, XIV, and XVI that sample the nonflaring coronal temperature range 0.3-5 MK. A proxy for the non-Fe XVIII emission in the wavelength band around the 93.9 Å line is demonstrated. There is little variability in the cool component of the corona (T < 2.0 MK), while the hot component (T > 2.0 MK) varies by more than an order of magnitude. A discontinuity in the behavior of coronal diagnostics in 2011 February-March, around the time of the first X-class flare of cycle 24, suggests fundamentally different behavior in the corona under solar minimum and maximum conditions. This global state transition occurs over a period of several months. The DEMs are used to estimate the thermal energy of the visible solar corona (of order 10^31 erg), its radiative energy loss rate ((2.5-8) × 10^27 erg s^-1), and the corresponding energy turnover timescale (about an hour). The uncertainties associated with the DEMs and these derived values are mostly due to the coronal Fe abundance and density and the CHIANTI atomic line database.
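    The quoted turnover timescale follows directly from dividing the thermal energy by the radiative loss rate. A quick arithmetic check with the abstract's numbers (10^31 erg and a loss rate of (2.5-8) × 10^27 erg/s):

```python
def turnover_time_hours(thermal_energy_erg, loss_rate_erg_per_s):
    """Energy turnover timescale = thermal energy / radiative loss rate."""
    return thermal_energy_erg / loss_rate_erg_per_s / 3600.0

# Using the abstract's round numbers:
slow = turnover_time_hours(1e31, 2.5e27)  # slowest loss rate, ~1.1 h
fast = turnover_time_hours(1e31, 8e27)    # fastest loss rate, ~0.35 h
```

    Both ends of the range bracket the "about an hour" figure in the abstract.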

  16. The Soil Moisture Active Passive Experiments (SMAPEx) for SMAP Algorithm Development (Invited)

    Science.gov (United States)

    Panciera, R.; Walker, J. P.; Ryu, D.; Gray, D.; Jackson, T. J.; Yardley, H.

    2010-12-01

    The availability of global L-band observations from passive (the recently launched SMOS) and active (such as the PALSAR) microwave sensors has boosted interest in making joint use of the two techniques to improve the retrieval of global near-surface soil moisture at unprecedented resolutions. The Soil Moisture Active Passive (SMAP) mission (scheduled launch, 2014) will fully exploit this synergy by providing concurrent active (radar) and passive (radiometer) microwave observations, resulting in passive-only, active-only and merged active-passive soil moisture products at spatial resolutions of 40 km, 3 km and 9 km, respectively. The Soil Moisture Active Passive Experiments (SMAPEx) are a series of airborne field experiments specifically designed for algorithm development for SMAP and currently ongoing in the context of the SMAP pre-launch cal/val activities for Australia. Four SMAPEx campaigns are scheduled across the 2010-2011 seasonal cycle, with the first campaign (SMAPEx-1) successfully conducted in moderately wet winter conditions (July 5-10, 2010) and the second campaign (SMAPEx-2) scheduled for the summer (December 4-8, 2010). SMAPEx is making use of a novel SMAP airborne simulator, including an L-band radar and radiometer, to collect SMAP-like data over a well-monitored semi-arid agricultural area in the Murrumbidgee catchment in south-eastern Australia. High resolution radar and radiometer observations collected during SMAPEx are supported by extensive ground sampling of soil moisture and ancillary data, allowing for testing of a variety of algorithms over semi-arid agricultural areas, typical of the Australian environment but similar to large areas of the central continental USA, including radiometer-only, radar-only, merged active-passive, downscaling and radar change-detection algorithms. In this paper a preliminary assessment of the performance of the radar-only and radiometer-only retrieval algorithms proposed as baseline for SMAP is presented.

  17. OPTIMOS-EVE: a fibre-fed optical-near-infrared multi-object spectrograph for the E-ELT

    NARCIS (Netherlands)

    Hammer, F.; Kaper, L.; Dalton, G.

    2010-01-01

    OPTIMOS-EVE is a fibre-fed, optical-to-infrared multi-object spectrograph designed to explore the largest field of view provided by the E-ELT at seeing or GLAO-limited conditions. OPTIMOS-EVE can detect planets in nearby galaxies, explore stellar populations beyond the Local Group, and probe the

  18. Optimal design of a smart post-buckled beam actuator using bat algorithm: simulations and experiments

    Science.gov (United States)

    Mallick, Rajnish; Ganguli, Ranjan; Kumar, Ravi

    2017-05-01

    The optimized design of a smart post-buckled beam actuator (PBA) is performed in this study. A smart material based piezoceramic stack actuator is used as a prime-mover to drive the buckled beam actuator. Piezoceramic actuators are high-force, small-displacement devices; they possess high energy density and have high bandwidth. In this study, bench top experiments are conducted to investigate the angular tip deflections due to the PBA. A new design of a linear-to-linear motion amplification device (LX-4) is developed to circumvent the small-displacement handicap of piezoceramic stack actuators. LX-4 enhances the piezoceramic actuator mechanical leverage by a factor of four. The PBA model is based on dynamic elastic stability and is analyzed using the Mathieu-Hill equation. A formal optimization is carried out using a newly developed meta-heuristic nature-inspired algorithm, named the bat algorithm (BA). The BA utilizes the echolocation capability of bats. An optimized PBA in conjunction with LX-4 generates end rotations of the order of 15° at the output end. The optimized PBA design incurs less weight and induces large end rotations, which will be useful in the development of various mechanical and aerospace devices, such as helicopter trailing edge flaps, micro and nano aerial vehicles and other robotic systems.
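    For readers unfamiliar with the optimizer, here is a minimal, generic bat-algorithm sketch in the usual Yang style: each bat carries a frequency-tuned velocity, a loudness A and a pulse-emission rate r, with local random walks around the current best. All parameter values are illustrative, not those used for the PBA design:

```python
import math
import random

def bat_algorithm(f, dim, n=15, iters=100, lo=-5.0, hi=5.0, seed=3):
    """Toy bat-algorithm minimizer (Yang-style echolocation heuristic)."""
    rng = random.Random(seed)
    clamp = lambda v: min(hi, max(lo, v))
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    fit = [f(p) for p in x]
    best = min(range(n), key=lambda i: fit[i])
    bx, bf = list(x[best]), fit[best]
    A, r0 = 0.9, 0.5  # loudness and base pulse rate
    for t in range(iters):
        r = r0 * (1 - math.exp(-0.05 * t))  # pulse rate rises over time
        for i in range(n):
            freq = rng.uniform(0.0, 2.0)    # random echolocation frequency
            cand = []
            for d in range(dim):
                v[i][d] += (x[i][d] - bx[d]) * freq
                cand.append(clamp(x[i][d] + v[i][d]))
            if rng.random() > r:
                # local random walk around the current best, scaled by loudness
                cand = [clamp(bd + 0.01 * A * rng.gauss(0, 1)) for bd in bx]
            fc = f(cand)
            if fc <= fit[i] and rng.random() < A:
                x[i], fit[i] = cand, fc
            if fc < bf:
                bx, bf = list(cand), fc
        A *= 0.99  # loudness decays as the bats home in
    return bx, bf
```

    The decaying loudness and rising pulse rate give the characteristic shift from global exploration to local refinement.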

  19. Vitamins in Spanish food patterns: the eVe Study.

    Science.gov (United States)

    Aranceta, J; Serra-Majem, L; Pérez-Rodrigo, C; Llopis, J; Mataix, J; Ribas, L; Tojo, R; Tur, J A

    2001-12-01

    To describe vitamin intakes in Spanish food patterns, identify groups at risk for inadequacy and determine conditioning factors that may influence this situation. Pooled-analysis of eight cross-sectional regional nutrition surveys. Ten thousand two hundred and eight free-living subjects (4728 men, 5480 women) aged 25-60 years. Respondents of population nutritional surveys carried out in eight Spanish regions (Alicante, Andalucia, Balearic Islands, Canary Islands, Catalunya, Galicia, Madrid and Basque Country) from 1990 to 1998. The samples were pooled together and weighted to build a national random sample. Dietary assessment by means of repeated 24-hour recall using photograph models to estimate portion size. Adjusted data for intra-individual variation were used to estimate the prevalence of inadequate intake. A Diet Quality Score (DQS) was computed considering the risk for inadequate intake for folate, vitamin C, vitamin A and vitamin E. DQS scores vary between 0 (good) and 4 (very poor). Influence of lifestyle (smoking, alcohol consumption and physical activity) was considered as well. Inadequate intakes (<2/3 Recommended Dietary Intake) were estimated in more than 10% of the sample for riboflavin (in men), folate (in women), vitamin C, vitamin A, vitamin D and vitamin E. More than 35% of the sample had diets classified as poor quality or very poor quality. Factors identified to have an influence on a poor-quality diet were old age, low education level and low socio-economical level. A sedentary lifestyle, smoking, usual consumption of alcohol and being overweight were conditioning factors for a poor-quality diet as well. Results from The eVe Study suggest that a high proportion of the Spanish population has inadequate intakes for at least one nutrient and nearly 50% should adjust their usual food pattern towards a more nutrient-dense, healthier diet.

  20. Optimization of identity operation in NMR spectroscopy via genetic algorithm: Application to the TEDOR experiment

    Science.gov (United States)

    Manu, V. S.; Veglia, Gianluigi

    2016-12-01

    Identity operation in the form of π pulses is widely used in NMR spectroscopy. For an isolated single-spin system, a sequence of an even number of π pulses performs an identity operation, leaving the spin state essentially unaltered. For multi-spin systems, trains of π pulses with appropriate phases and time delays modulate the spin Hamiltonian to perform operations such as decoupling and recoupling. However, experimental imperfections often jeopardize the outcome, leading to severe losses in sensitivity. Here, we demonstrate that a newly designed Genetic Algorithm (GA) is able to optimize a train of π pulses, resulting in a robust identity operation. As proof of concept, we optimized the recoupling sequence in the transferred-echo double-resonance (TEDOR) pulse sequence, a key experiment in biological magic angle spinning (MAS) solid-state NMR for measuring multiple carbon-nitrogen distances. The GA-modified TEDOR (GMO-TEDOR) experiment with improved recoupling efficiency results in a net sensitivity gain of up to 28%, as tested on a uniformly 13C, 15N labeled microcrystalline ubiquitin sample. The robust identity operation achieved via the GA paves the way for the optimization of several other pulse sequences used in both solid- and liquid-state NMR for decoupling, recoupling, and relaxation experiments.
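
    The idea of searching pulse phases for error robustness can be illustrated with a toy genetic algorithm on a single spin-1/2 with a systematic flip-angle error (a minimal sketch, not the authors' GA: the fidelity measure, the four-pulse train, the quadrature phase alphabet, and all GA settings are assumptions):

```python
import cmath
import math
import random

def pulse(theta, phi):
    """2x2 unitary for a rotation by angle theta about an axis with
    phase phi in the transverse plane."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * cmath.exp(-1j * phi) * s],
            [-1j * cmath.exp(1j * phi) * s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def identity_fidelity(phases, eps):
    """|Tr(U)|/2: closeness of the pulse train to identity, up to a
    global phase, when every pi pulse carries a flip-angle error eps."""
    u = [[1.0, 0.0], [0.0, 1.0]]
    for phi in phases:
        u = matmul(pulse(math.pi * (1 + eps), phi), u)
    return abs(u[0][0] + u[1][1]) / 2

def fitness(phases):
    # Robustness: average identity fidelity over a spread of errors.
    return sum(identity_fidelity(phases, e)
               for e in (-0.1, -0.05, 0.05, 0.1)) / 4

def optimize(n_pulses=4, pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    quad = [0.0, math.pi / 2, math.pi, 1.5 * math.pi]  # quadrature phases
    pop = [[rng.choice(quad) for _ in range(n_pulses)]
           for _ in range(pop_size - 1)]
    pop.append([0.0] * n_pulses)  # seed the naive all-x train for reference
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:5]  # elitism: the best trains survive unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_pulses)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.3:
                child[rng.randrange(n_pulses)] = rng.choice(quad)  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = optimize()
```

    With the naive all-x train seeded into the initial population and elitism in place, the optimized train can only match or beat four identical π pulses under the assumed error model.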

  1. Experiment for validation of fluid-structure interaction models and algorithms.

    Science.gov (United States)

    Hessenthaler, A; Gaddum, N R; Holub, O; Sinkus, R; Röhrle, O; Nordsletten, D

    2017-09-01

    In this paper a fluid-structure interaction (FSI) experiment is presented. The aim of this experiment is to provide a challenging yet easy-to-set-up FSI test case that addresses the need for rigorous testing of FSI algorithms and modeling frameworks. Steady-state and periodic steady-state test cases with constant and periodic inflow were established. The focus of the experiment is on biomedical engineering applications, with flow in the laminar regime at Reynolds numbers of 1283 and 651. Flow and solid domains were defined using computer-aided design (CAD) tools. The experimental design aimed at providing a straightforward boundary condition definition. Material parameters and the mechanical response of a moderately viscous Newtonian fluid and a nonlinear incompressible solid were experimentally determined. A comprehensive data set was acquired by using magnetic resonance imaging to record the interaction between the fluid and the solid, quantifying flow and solid motion. Copyright © 2016 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons Ltd.

  3. Evaluation of clustering algorithms at the electromagnetic calorimeter of the PADME experiment

    Science.gov (United States)

    Leonardi, E.; Piperno, G.; Raggi, M.

    2017-10-01

    A possible solution to the Dark Matter problem postulates that it interacts with Standard Model particles through a new force mediated by a “portal”. If the new force has a U(1) gauge structure, the “portal” is a massive photon-like vector particle, called the dark photon or A′. The PADME experiment at the DAΦNE Beam-Test Facility (BTF) in Frascati is designed to detect dark photons produced in positron-on-fixed-target annihilations decaying to dark matter (e+e-→γA′) by measuring the final-state missing mass. One of the key roles in the experiment will be played by the electromagnetic calorimeter, which will be used to measure the properties of the final-state recoil γ. The calorimeter will be composed of 616 21×21×230 mm3 BGO crystals oriented with the long axis parallel to the beam direction and arranged in a roughly circular shape with a central hole to avoid the pile-up due to the large number of low-angle Bremsstrahlung photons. The total energy and position of the electromagnetic shower generated by a photon impacting on the calorimeter can be reconstructed by collecting the energy deposits in the cluster of crystals involved in the shower. In PADME we are testing two different clustering algorithms, PADME-Radius and PADME-Island, based on two complementary strategies. In this paper we describe the two algorithms, with their respective implementations, and report on the results obtained with them at the PADME energy scale (< 1 GeV), both with a GEANT4-based simulation and with an existing 5×5 matrix of BGO crystals tested at the DAΦNE BTF.
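
    A seed-and-aggregate ("island"-style) clustering pass over calorimeter cells can be sketched as follows. This is a generic illustration of the technique, not the actual PADME-Island implementation; the thresholds, units, and grid layout are invented:

```python
def island_clusters(deposits, seed_thr=50.0, cell_thr=10.0):
    """Seed-and-aggregate clustering on a 2D crystal grid.

    deposits: {(ix, iy): energy}. Cells above cell_thr are kept;
    local maxima above seed_thr start clusters that flood-fill over
    the 8-connected neighbourhood of accepted cells.
    """
    cells = {p: e for p, e in deposits.items() if e >= cell_thr}
    neigh = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    seeds = [p for p, e in cells.items()
             if e >= seed_thr
             and all(e >= cells.get((p[0] + dx, p[1] + dy), 0.0)
                     for dx, dy in neigh)]
    clusters, used = [], set()
    for s in sorted(seeds, key=lambda p: -cells[p]):
        if s in used:
            continue
        frontier, members = [s], set()
        while frontier:
            p = frontier.pop()
            if p in members or p in used or p not in cells:
                continue
            members.add(p)
            frontier.extend((p[0] + dx, p[1] + dy) for dx, dy in neigh)
        used |= members
        e_tot = sum(cells[p] for p in members)
        # Energy-weighted centroid estimates the photon impact point.
        cx = sum(p[0] * cells[p] for p in members) / e_tot
        cy = sum(p[1] * cells[p] for p in members) / e_tot
        clusters.append((e_tot, (cx, cy)))
    return clusters
```

    Running it on a toy hit map with two well-separated energy deposits yields two clusters, each with its summed energy and energy-weighted centroid.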

  4. EVE TEASING, TEARS OF THE GIRLS: Bangladesh Open University towards Women Empowerment

    OpenAIRE

    Zobaida AKHTER

    2013-01-01

    Many young girls in Bangladesh are cut off from education, which is their basic right, due to eve teasing. Parents fear for their daughter's honour and for family and social prestige, so to ensure their daughters' safety they sometimes decide to withdraw them from schools and colleges. Most of the time, incidents of eve teasing happen while girls are on their way to educational institutions. In our country most of the people are devoid of their basic rights ...

  5. Exhibition from 6 to 17 March 2017: EVE and School covered in colours.

    CERN Multimedia

    Staff Association

    2017-01-01

    The children of the EVE and School exhibited their artwork in the Main Building from 6 to 17 March. They worked on the theme of “colours” expressing themselves through various drawing, painting, collage and arts-and-crafts techniques. The result was a beautiful explosion of bright and shimmering colours!

  6. Multi-object spectroscopy with the European ELT: scientific synergies between EAGLE and EVE

    NARCIS (Netherlands)

    Evans, C.J.; Barbuy, B.; Bonifacio, P.; Chemla, F.; Cuby, J.G.; Dalton, G.B.; Davies, B.; Disseau, K.; Dohlen, K.; Flores, H.; Gendron, E.; Guinouard, I.; Hammer, F.; Hastings, P.; Horville, D.; Jagourel, P.; Kaper, L.; Laporte, P.; Lee, D.; Morris, S.L.; Morris, T.; Myers, R.; Navarro, R.; Parr-Burman, P.; Petitjean, P.; Puech, M.; Rollinde, E.; Rousset, G.; Schnetler, H.; Welikala, N.; Wells, M.; Yang, Y.

    2012-01-01

    The EAGLE and EVE Phase A studies for instruments for the European Extremely Large Telescope (E-ELT) originated from related top-level scientific questions, but employed different (yet complementary) methods to deliver the required observations. We re-examine the motivations for a multi-object

  7. Dancer Eve Mutso caught the eye in London / Stuart Sweeney

    Index Scriptorium Estoniae

    Sweeney, Stuart

    2006-01-01

    Scottish Ballet gave guest performances in London in March, with the Estonian Eve Mutso, who has danced with the company for years, appearing as a soloist. She performed in Ashley Page's ballet "Cinderella", and as one of the leading soloists in George Balanchine's "Episodes" and William Forsythe's "Artifact Suite"

  8. Performance tests of signature extension algorithms. [for large area crop inventory experiment

    Science.gov (United States)

    Abotteen, R.; Levy, S.; Mendlowitz, M.; Moritz, T.; Potter, J.; Thadani, S.; Wehmanen, O.

    1977-01-01

    Comparative tests were performed on seven signature extension algorithms to evaluate their effectiveness in correcting for changes in atmospheric haze and sun angle in a Landsat scene. Four of the algorithms were cluster-matching algorithms, and two were maximum likelihood algorithms. The seventh algorithm determined the haze level in both training and recognition segments and used a set of tables calculated from an atmospheric model to determine the affine transformation that corrects the training signatures for changes in sun angle and haze level. Three of the algorithms were tested on a simulated data set, and all of the algorithms were tested on consecutive-day data. The classification performance on the data sets using the algorithms is presented, along with results of statistical tests on the accuracy and proportion estimates. The three algorithms tested on the simulated data produced significant improvements over the results obtained using untransformed signatures. For the consecutive-day data, the tested algorithms produced improvements in most but not all cases. The tests also indicated that no statistically significant differences were found among the algorithms.

  9. LHCb: Optimization and Calibration of Flavour Tagging Algorithms for the LHCb experiment

    CERN Multimedia

    Falabella, A

    2013-01-01

    The purpose of LHCb is to make precise measurements of $B$ and $D$ meson decays. In particular, in time-dependent CP violation studies the determination of the $B$ flavour at production is fundamental. This is known as "flavour tagging", and at LHCb it is performed with several algorithms. The performance and calibration of the flavour tagging algorithms with the 2011 data collected by LHCb are reported, as is their performance in the relevant CP violation and asymmetry studies.

  10. Biology, the way it should have been, experiments with a Lamarckian algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Brown, F.M.; Snider, J. [Univ. of Kansas, Lawrence, KS (United States)

    1996-12-31

    This paper investigates the case where some information can be extracted directly from the fitness function of a genetic algorithm so that mutation may be achieved essentially on the Lamarckian principle of acquired characteristics. The basic rationale is that such additional information will provide better mutations, thus speeding up the search process. Comparisons are made between a pure Neo-Darwinian genetic algorithm and this Lamarckian algorithm on a number of problems, including a problem of interest to the US Army.
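
    The Lamarckian principle described above — extracting an improving move directly from the fitness function and writing it back into the heritable genome — can be sketched on a toy bit-matching problem (illustrative only; the paper's actual test problems and operators are not specified here, and the target string, population size, and rates are invented):

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # hypothetical optimum, visible only via fitness

def fitness(genome):
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def lamarckian_step(genome):
    """Extract a locally improving move from the fitness function and
    keep it in the genome itself: inheritance of acquired characteristics."""
    for i in range(len(genome)):
        cand = genome[:]
        cand[i] ^= 1
        if fitness(cand) > fitness(genome):
            return cand  # the first improving single-bit flip is inherited
    return genome

def evolve(pop_size=20, generations=15, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:5]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(len(TARGET))] ^= 1  # random mutation
            children.append(child)
        # Lamarckian twist: every elite is locally improved before breeding.
        pop = [lamarckian_step(ind) for ind in elite] + children
    return max(pop, key=fitness)

best = evolve()
```

    Because every elite individual inherits one improving flip per generation, the toy run reaches the optimum quickly; a pure Neo-Darwinian variant would simply omit the `lamarckian_step` call.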

  11. Feature selection using genetic algorithm for breast cancer diagnosis: experiment on three different datasets

    Directory of Open Access Journals (Sweden)

    Shokoufeh Aalaei

    2016-05-01

    Full Text Available Objective(s): This study addresses feature selection for breast cancer diagnosis. The proposed process uses a wrapper approach with GA-based feature selection and a PS-classifier. The results of the experiment show that the proposed model is comparable to the other models on the Wisconsin breast cancer datasets. Materials and Methods: To evaluate the effectiveness of the proposed feature selection method, we employed three different classifiers, an artificial neural network (ANN), a PS-classifier and a genetic algorithm based classifier (GA-classifier), on the Wisconsin breast cancer datasets, which include the Wisconsin breast cancer dataset (WBC), the Wisconsin diagnosis breast cancer dataset (WDBC), and the Wisconsin prognosis breast cancer dataset (WPBC). Results: For the WBC dataset, feature selection improved the accuracy of all classifiers except the ANN, and the best accuracy with feature selection was achieved by the PS-classifier. For WDBC and WPBC, feature selection improved the accuracy of all three classifiers, and the best accuracy with feature selection was achieved by the ANN. Specificity and sensitivity also improved after feature selection. Conclusion: The results show that feature selection can improve the accuracy, specificity and sensitivity of classifiers. The results of this study are comparable with those of other studies on the Wisconsin breast cancer datasets.

  12. Application of an Image Tracking Algorithm in Fire Ant Motion Experiment

    Directory of Open Access Journals (Sweden)

    Lichuan Gui

    2009-04-01

    Full Text Available An image tracking algorithm, which was originally used with particle image velocimetry (PIV) to determine velocities of buoyant solid particles in water, is modified and applied in the present work to detect the motion of fire ants on a planar surface. A group of fire ant workers is placed at the bottom of a tub and excited with vibration of selected frequency and intensity. The moving fire ants are captured with an imaging system that successively acquires frames of high digital resolution. The background noise in the image recordings is extracted by averaging hundreds of frames and removed from each frame. The individual fire ant images are identified with a recursive digital filter, and they are then tracked between frames according to the size, brightness, shape, and orientation angle of the ant image. The speed of an individual ant is determined from the displacement of its images and the time interval between frames. The trail of each fire ant is determined from the image tracking results, and a statistical analysis is conducted for all the fire ants in the group. The purpose of the experiment is to investigate the response of fire ants to substrate vibration. Test results indicate that the fire ants move faster after being excited, but the number of active ones is not increased even after a strong excitation.
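
    The processing chain described — average many frames to estimate the background, subtract it, threshold, and track between frames — can be sketched for a single bright object (a simplified stand-in for the multi-ant tracker; the synthetic 8×8 image, the threshold value, and the frame interval are all assumptions):

```python
import math

def mean_background(frames):
    """Pixelwise average over many frames; a moving object washes out."""
    n, h, w = len(frames), len(frames[0]), len(frames[0][0])
    return [[sum(f[i][j] for f in frames) / n for j in range(w)]
            for i in range(h)]

def bright_centroid(frame, background, thresh=50.0):
    """Centroid of pixels well above the background; None if none found."""
    pts = [(i, j) for i, row in enumerate(frame) for j, v in enumerate(row)
           if v - background[i][j] > thresh]
    if not pts:
        return None
    return (sum(i for i, _ in pts) / len(pts),
            sum(j for _, j in pts) / len(pts))

def speed(c0, c1, dt):
    """Displacement between two centroids divided by the elapsed time."""
    return math.hypot(c1[0] - c0[0], c1[1] - c0[1]) / dt

# Synthetic demo: one bright "ant" drifting along row 2 of an 8x8 image.
def _frame(i, j):
    f = [[10.0] * 8 for _ in range(8)]
    f[i][j] = 255.0
    return f

frames = [_frame(2, c) for c in (2, 3, 4, 5)]
bg = mean_background(frames)
trail = [bright_centroid(f, bg) for f in frames]
ant_speed = speed(trail[0], trail[-1], dt=3.0)  # pixels per frame interval
```

    The moving bright pixel contributes only a fraction of the background average, so the subtraction leaves it well above threshold in each frame while static pixels cancel out.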

  13. An Object-Oriented Collection of Minimum Degree Algorithms: Design, Implementation, and Experiences

    Science.gov (United States)

    Kumfert, Gary; Pothen, Alex

    1999-01-01

    The multiple minimum degree (MMD) algorithm and its variants have enjoyed 20+ years of research and progress in generating fill-reducing orderings for sparse, symmetric positive definite matrices. Although conceptually simple, efficient implementations of these algorithms are deceptively complex and highly specialized. In this case study, we present an object-oriented library that implements several recent minimum degree-like algorithms. We discuss how object-oriented design forces us to decompose these algorithms in a different manner than earlier codes and demonstrate how this impacts the flexibility and efficiency of our C++ implementation. We compare the performance of our code against other implementations in C or Fortran.

  14. Patenting mathematical algorithms : What's the harm? A thought experiment in algebra

    NARCIS (Netherlands)

    de Laat, P.B.

    The patenting of software-related inventions is on the increase, especially in the United States. Mathematical formulas and algorithms, though, are still sacrosanct. Only under special conditions may algorithms qualify as statutory matter: if they are not solely a mathematical exercise, but if they

  15. Greedy Algorithms for Finding a Small Set of Primers Satisfying Cover and Length Resolution Conditions in PCR Experiments.

    Science.gov (United States)

    Doi; Imai

    1997-01-01

    Selecting a good collection of primers is very important for polymerase chain reaction (PCR) experiments. Most existing algorithms for primer selection are concerned with computing a primer pair for each DNA sequence. In generalizing the arbitrarily primed PCR, etc., to the case where all DNA sequences of the target objects are already known, like the roughly 6000 ORFs of yeast, we may design a small set of primers so that all the targets are PCR-amplified and resolved electrophoretically in a series of experiments. This is quite useful because decreasing the number of primers greatly reduces the cost of the experiments. Pearson et al. (ISMB 1995: 285-291, 1995; Discrete Appl. Math. 71: 231-246, 1996) consider finding a minimum set of primers covering all given DNA sequences, but their method does not meet necessary biological conditions such as primer amplification and electrophoresis resolution. In this paper, based on the modeling and computational complexity analysis by Doi, we propose algorithms for this primer selection problem. These algorithms do not necessarily minimize the number of primers, but, since basic versions of these problems are shown to be computationally intractable, especially even for approximability with the length resolution condition, this is inevitable. In the algorithms, the amplification condition by a primer pair and the length resolution condition by electrophoresis are incorporated. These algorithms are based on the theoretically well-founded greedy algorithm for set cover in computer science. Preliminary computational results are presented to show the validity of this approach. The number of computed primers is much less than half the number of targets, and hence is less than one fourth of the number needed in multiplex PCR.
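
    The greedy set-cover core of such algorithms can be sketched as follows, here deliberately ignoring the amplification and length-resolution conditions the paper incorporates; the primer names and coverage sets below are invented:

```python
def greedy_primer_cover(targets, coverage):
    """Greedy set cover: repeatedly pick the primer that amplifies the
    most still-uncovered targets.

    coverage: {primer_name: set of targets it PCR-amplifies}.
    Returns (chosen primers, targets left uncoverable).
    """
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda p: len(coverage[p] & uncovered))
        if not coverage[best] & uncovered:
            break  # no primer amplifies any remaining target
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen, uncovered

# Toy instance: six targets, four candidate primers.
cov = {"p1": {1, 2, 3}, "p2": {3, 4}, "p3": {4, 5, 6}, "p4": {1, 6}}
picked, missed = greedy_primer_cover(range(1, 7), cov)
```

    The classical analysis guarantees the greedy choice is within a logarithmic factor of the minimum cover, which is the theoretical footing the abstract alludes to.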

  16. An algorithmic approach to perineal reconstruction after cancer resection--experience from two international centers.

    Science.gov (United States)

    John, Hannah Eliza; Jessop, Zita Maria; Di Candia, Michele; Simcock, Jeremy; Durrani, Amer J; Malata, Charles M

    2013-07-01

    This paper aims to simplify the approach to reconstruction of the perineum after resection of malignancies of the anal canal, lower rectum, vulva, and vagina. The data were collected from 2 centers, namely, Addenbrooke's Hospital, University of Cambridge, United Kingdom and Christchurch Hospital, University of Otago, New Zealand. All patients who underwent perineal reconstruction from 1997 to 2009 at Christchurch Hospital (13 years) and 2001 to 2009 at Addenbrooke's Hospital (9 years) were included. The diagnosis (indication), primary surgery, reconstructive surgery, complications, tumor outcomes (recurrence and survival), and follow-up were entered into a database (Microsoft Excel; Redmond, Wash). The incidence of previous radiotherapy, requirement for adjuvant radiotherapy, and length of inpatient stay were also recorded. Forty-six patients were identified for this study--13 in New Zealand and 33 in Cambridge. Indications for perineal reconstruction included resection of anal and rectal malignancies (24), vulval and vaginal malignancy (19), perineal sarcoma (1), and perineal squamous cell carcinoma arising in an enterocutaneous fistula (Table 1). The reconstructive strategies adopted included rectus abdominis myocutaneous flaps (26), gluteal fold flaps (9), gracilis V-Y or advancement flaps (7) and others (4), gluteal rotation flaps (1), local flap (2), and free latissimus dorsi flaps (1). Although various surgeons performed the reconstructive surgeries at 2 different centers, the essential approach remained the same. Smaller defects were best treated by local flaps, whereas the rectus abdominis flap remained the standard option for larger defects that additionally required closure of dead space. On the basis of our 2 center experience, we propose a simple algorithm to facilitate the planning of reconstructive surgery for the perineum.

  18. Study of a reconstruction algorithm for electrons in the ATLAS experiment in LHC; Etude d'un algorithme de reconstruction des electrons dans l'experience Atlas aupres du LHC

    Energy Technology Data Exchange (ETDEWEB)

    Kerschen, N

    2006-09-15

    The ATLAS experiment is a general purpose particle physics experiment mainly aimed at discovering the origin of mass through the search for the Higgs boson. In order to achieve this, the Large Hadron Collider at CERN will accelerate two proton beams and make them collide at the centre of the experiment. ATLAS will discover new particles through the measurement of their decay products. Electrons are such decay products: they produce an electromagnetic shower in the calorimeter in which they lose all their energy. The calorimeter is divided into cells, and the deposited energy is reconstructed using an algorithm that assembles the cells into clusters. The purpose of this thesis is to study a new kind of algorithm that adapts the cluster to the shower topology. In order to reconstruct the energy of the initially created electron, the cluster has to be calibrated by taking into account the energy lost in the dead material in front of the calorimeter. Therefore, a Monte-Carlo simulation of the ATLAS detector has been used to correct for effects of response modulation in position and in energy and to optimise the energy resolution as well as the linearity. An analysis of test beam data has been performed to study the behaviour of the algorithm in a more realistic environment. We show that the requirements of the experiment can be met for the linearity and resolution. The improvement of this new algorithm, compared to a fixed-size cluster, is the better recovery of Bremsstrahlung photons emitted by the electron in the material in front of the calorimeter. A Monte-Carlo analysis of the Higgs boson decay into four electrons confirms this result. (author)

  19. The donor management algorithm in transplantation of a composite facial tissue allograft. First experience in Russia

    Directory of Open Access Journals (Sweden)

    V. V. Uyba

    2016-01-01

    Full Text Available In the period from 2005 to December 2015, 37 transplantations of vascularized composite facial tissue allografts (VCAs) were performed in the world. Vascularized composite tissue allotransplantation has been recognized as a solid organ transplantation rather than a special kind of tissue transplantation. The recent classification of composite tissue allografts into the category of donor organs gave rise to a number of organizational, ethical, legal, technical, and economic problems. In May 2015, the first successful transplantation of a composite facial tissue allograft was performed in Russia. The article describes our experience of multiple team interactions at the donor management stage, covering the identification, conditioning, harvesting, and delivery of donor organs to various hospitals. A man aged 51 years, diagnosed with traumatic brain injury, became a donor after the diagnosis of brain death had been made, his death had been ascertained, and the requested consent for organ donation had been obtained from relatives. At the donor management stage, a tracheostomy was performed and a posthumous facial mask was molded. The "face first, concurrent completion" algorithm was chosen for organ harvesting and facial VCA procurement; meanwhile, the facial allograft was procured as the "full face" category. The total surgery duration from the incision to completion of the procurement (including that of the solid organs) was 8 hours 20 minutes. Immediately after the procurement, the facial VCA complex was sent to the St. Petersburg clinic by medical aircraft transportation, and was transplanted there 9 hours later. The donor kidneys were transported to Moscow by civil aviation and transplanted 17 and 20 hours later. The authors believe that this clinical case report demonstrates the feasibility and safety of multiple harvesting of solid organs and a vascularized composite facial tissue allograft. However, this kind of surgery requires an essential

  20. Treatment of diabetes mellitus-associated neuropathy with vitamin E and Eve primrose.

    Science.gov (United States)

    Ogbera, Anthonia Okeoghene; Ezeobi, Emmanuel; Unachukwu, Chioma; Oshinaike, Olajumoke

    2014-11-01

    The aim of this report was to assess the efficacy and safety of a combination of vitamin E, an antioxidant, and Eve Primrose in the management of painful diabetes mellitus (DM) neuropathy. This was an interventional study that evaluated the efficacy and safety of a combination of vitamin E and Eve Primrose in the management of DM neuropathy. The study was conducted at the Diabetic Centre of the Lagos State University Teaching Hospital, Ikeja. Eighty individuals with type 2 DM who had painful neuropathy were recruited for this study, which took place for a duration of 1 year. The study subjects underwent clinical and biochemical assessment at baseline and were given vitamin E in a dose of 400 mg in combination with Eve Primrose in doses ranging 500-1000 mg/day. They were afterward assessed for relief of symptoms and possible untoward effects after 2 weeks and, thereafter, monthly for 3 months. The main outcome measure was amelioration of symptoms of neuropathy. The mean age and age range of the study subjects were 58.2 years and 37-70 years, respectively. A total of 70 patients (88%) of the study population reported relief from neuropathic pains. Clinical parameters were comparable between the responders and non-responders. One characteristic feature of the non-responders was that they all had vibration perception threshold of ≥25 mV, which was indicative of severe neuropathy. The combination of vitamin E and Eve Primrose is beneficial in the management of mild to moderate diabetic neuropathy.

  1. Structural genomics reveals EVE as a new ASCH/PUA-related domain.

    Science.gov (United States)

    Bertonati, Claudia; Punta, Marco; Fischer, Markus; Yachdav, Guy; Forouhar, Farhad; Zhou, Weihong; Kuzin, Alexander P; Seetharaman, Jayaraman; Abashidze, Mariam; Ramelot, Theresa A; Kennedy, Michael A; Cort, John R; Belachew, Adam; Hunt, John F; Tong, Liang; Montelione, Gaetano T; Rost, Burkhard

    2009-05-15

    We report on several proteins recently solved by structural genomics consortia, in particular by the Northeast Structural Genomics consortium (NESG). The proteins considered in this study differ substantially in their sequences but they share a similar structural core, characterized by a pseudobarrel five-stranded beta sheet. This core corresponds to the PUA domain-like architecture in the SCOP database. By connecting sequence information with structural knowledge, we characterize a new subgroup of these proteins that we propose to be distinctly different from previously described PUA domain-like domains such as PUA proper or ASCH. We refer to these newly defined domains as EVE. Although EVE may have retained the ability of PUA domains to bind RNA, the available experimental and computational data suggests that both the details of its molecular function and its cellular function differ from those of other PUA domain-like domains. This study of EVE and its relatives illustrates how the combination of structure and genomics creates new insights by connecting a cornucopia of structures that map to the same evolutionary potential. Primary sequence information alone would have not been sufficient to reveal these evolutionary links.

  3. A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments

    KAUST Repository

    Harman, Radoslav

    2018-01-17

    We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.
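
    For intuition about weight-update methods in this family, here is the classical multiplicative algorithm for D-optimality (not REX itself, and not the paper's exchange formulas) on a two-parameter linear model, where the optimal design is known to place weight 1/2 on each endpoint of the candidate interval:

```python
def d_optimal_weights(xs, iters=300):
    """Multiplicative algorithm for a D-optimal design of the linear
    model y = b0 + b1*x on candidate points xs:

        w_i <- w_i * d_i / m,

    where d_i = f(x_i)^T M(w)^{-1} f(x_i) is the variance function for
    f(x) = (1, x), M(w) is the 2x2 information matrix, and m = 2 is the
    number of parameters. The weight vector stays normalized because
    sum_i w_i d_i = m holds identically."""
    w = [1.0 / len(xs)] * len(xs)
    for _ in range(iters):
        m00 = sum(w)                                   # entries of M(w)
        m01 = sum(wi * x for wi, x in zip(w, xs))
        m11 = sum(wi * x * x for wi, x in zip(w, xs))
        det = m00 * m11 - m01 * m01
        # d(x) = f^T M^{-1} f expanded for the 2x2 inverse.
        d = [(m11 - 2.0 * m01 * x + m00 * x * x) / det for x in xs]
        w = [wi * di / 2.0 for wi, di in zip(w, d)]
    return w

candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
weights = d_optimal_weights(candidates)
```

    On these candidates the weights concentrate on ±1, recovering the textbook optimum; exchange methods such as REX reach the same fixed point far faster by moving weight directly between a low-variance support point and the highest-variance candidate.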

  4. Improving the Fine-Tuning of Metaheuristics: An Approach Combining Design of Experiments and Racing Algorithms

    Directory of Open Access Journals (Sweden)

    Eduardo Batista de Moraes Barbosa

    2017-01-01

    Full Text Available Usually, metaheuristic algorithms are adapted to a large set of problems by applying a few modifications of parameters for each specific case. However, this flexibility demands a huge effort to correctly tune such parameters. Therefore, the tuning of metaheuristics arises as one of the most important challenges in research on these algorithms. Thus, this paper aims to present a methodology combining Statistical and Artificial Intelligence methods for the fine-tuning of metaheuristics. The key idea is a heuristic method, called Heuristic Oriented Racing Algorithm (HORA), which explores a search space of parameters looking for candidate configurations close to a promising alternative. To confirm the validity of this approach, we present a case study of fine-tuning two distinct metaheuristics, Simulated Annealing (SA) and Genetic Algorithm (GA), in order to solve the classical traveling salesman problem. The results are compared with the same metaheuristics tuned through a racing method. Broadly, the proposed approach proved to be effective in terms of the overall time of the tuning process. Our results reveal that metaheuristics tuned by means of HORA achieve, with much less computational effort, similar results compared to the case when they are tuned by the other fine-tuning approach.
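
    The racing component can be sketched as an instance-by-instance elimination loop. This is a simplified stand-in: the elimination rule here is a plain mean-gap margin rather than the statistical tests used in proper racing methods, and the configurations and cost function are synthetic:

```python
import random
import statistics

def race(configs, evaluate, instances, margin=0.5, warmup=5):
    """Keep only configurations whose running mean cost stays within
    `margin` of the best mean, checking after each instance once
    `warmup` instances have been seen."""
    alive = list(configs)
    costs = {c: [] for c in configs}
    for k, inst in enumerate(instances, start=1):
        for c in alive:
            costs[c].append(evaluate(c, inst))
        if k >= warmup and len(alive) > 1:
            means = {c: statistics.mean(costs[c]) for c in alive}
            best = min(means.values())
            alive = [c for c in alive if means[c] <= best + margin]
    return alive

def noisy_cost(config, instance):
    # Synthetic benchmark: the true cost equals the config id plus
    # bounded, instance-dependent noise (seeded for reproducibility).
    rng = random.Random(1000 * instance + config)
    return config + rng.uniform(-0.5, 0.5)

survivors = race([1, 3, 5, 9], noisy_cost, instances=range(20))
```

    Poor configurations are dropped early, so most of the evaluation budget is spent on the contenders, which is exactly the saving that makes racing attractive for tuning.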

  5. Algorithmic improvements and calibration measurements for flavour tagging at the ATLAS experiment

    CERN Document Server

    Battaglia, Marco; The ATLAS collaboration

    2017-01-01

    Improvements and innovations in physics taggers, new approaches to multivariate analysis and training samples have brought optimised and more performant flavour-tagging algorithms for the analysis of the 2017 LHC collision data with ATLAS. This contribution summarises these recent developments.

  6. Design and implementation of universal mathematical library supporting algorithm development for FPGA based systems in high energy physics experiments

    Energy Technology Data Exchange (ETDEWEB)

    Jalmuzna, W.

    2006-02-15

    The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences. It could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a project for a fast, low-latency digital controller dedicated to the LLRF system of the VUV-FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems, Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule in the experiment. The device can also be used as a simulator of the cavity and as a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components such as an IQ demodulator, a division block, and a library for complex and floating-point operations, and it can significantly shorten the implementation time of many complicated algorithms. The library has already been tested using real accelerator signals and the performance achieved is satisfactory. (orig.)
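
As an illustration of one library component mentioned above, here is a minimal software model of digital IQ demodulation, the operation an LLRF controller uses to extract a cavity field's amplitude and phase from the sampled intermediate-frequency (IF) signal. It is a floating-point sketch, not the fixed-point FPGA implementation; the IF and sampling frequencies are assumed values.

```python
import math

def iq_demodulate(samples, f_if, f_s, window):
    """Mix the sampled IF signal with a digital local oscillator
    (multiply by cos / -sin) and average over `window` samples as a
    crude low-pass filter; return (amplitude, phase) of the IF line."""
    n = len(samples)
    i_acc = q_acc = 0.0
    for k in range(n - window, n):
        phase = 2.0 * math.pi * f_if * k / f_s
        i_acc += samples[k] * math.cos(phase)
        q_acc -= samples[k] * math.sin(phase)   # e^{-j w k} convention
    i_val = 2.0 * i_acc / window
    q_val = 2.0 * q_acc / window
    return math.hypot(i_val, q_val), math.atan2(q_val, i_val)

# synthetic cavity-like signal: 250 kHz IF sampled at 1 MHz (4 samples/period)
f_s, f_if = 1.0e6, 0.25e6
amp, phi = 0.7, 0.3
samples = [amp * math.cos(2.0 * math.pi * f_if * k / f_s + phi)
           for k in range(400)]
est_amp, est_phi = iq_demodulate(samples, f_if, f_s, window=400)
```

Averaging over an integer number of IF periods cancels the 2f component exactly, so for this noiseless test signal the estimator recovers amplitude 0.7 and phase 0.3 to machine precision.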

  7. Compression algorithm for data analysis in a radio link (preparation of PACEM 2 experiment)

    Science.gov (United States)

    Leroux, G.; Sylvain, M.

    1982-11-01

    The Hadamard transformation for image compression is applied to a radio data transmission system. The programs used and the performance obtained are described. The algorithms are expressed in PASCAL, and the listed programs are written in FORTRAN 77. The experimental results, for 62 images of 64 lines, show a standard deviation of 1.5% at a compression rate of 18.5, in accordance with the proposed goals.
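
The core of a Hadamard-based codec can be sketched briefly. The snippet below shows only the generic idea (2-D Hadamard transform of a block, keeping the largest-magnitude coefficients), not the PACEM flight code; the block size, the synthetic image and the number of retained coefficients are assumptions.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def compress(block, keep):
    """Keep only the `keep` largest-magnitude Hadamard coefficients."""
    n = block.shape[0]
    H = hadamard(n)
    coeffs = H @ block @ H.T / n            # normalised 2-D Hadamard transform
    flat = np.abs(coeffs).ravel()
    thresh = np.sort(flat)[::-1][keep - 1]
    mask = np.abs(coeffs) >= thresh
    return coeffs * mask, mask

def decompress(coeffs):
    n = coeffs.shape[0]
    H = hadamard(n)
    # with the 1/n normalisation, applying the transform twice is the identity
    return H @ coeffs @ H.T / n

rng = np.random.default_rng(1)
# smooth synthetic "image" block: a gradient plus mild noise
x = np.linspace(0, 1, 8)
img = np.add.outer(x, x) + 0.01 * rng.standard_normal((8, 8))
coeffs, mask = compress(img, keep=8)
rec = decompress(coeffs)
rms = np.sqrt(np.mean((rec - img) ** 2))
```

A smooth block concentrates its energy in a handful of Walsh-Hadamard coefficients, so keeping 8 of the 64 (an 8:1 ratio) reconstructs it with a small residual; the transform itself needs only additions and subtractions, which is why it suited early low-power transmission hardware.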

  8. Machine Learning Algorithms for $b$-Jet Tagging at the ATLAS Experiment

    CERN Document Server

    Paganini, Michela; The ATLAS collaboration

    2017-01-01

    The separation of $b$-quark initiated jets from those coming from lighter quark flavors ($b$-tagging) is a fundamental tool for the ATLAS physics program at the CERN Large Hadron Collider. The most powerful $b$-tagging algorithms combine information from low-level taggers, exploiting reconstructed track and vertex information, into machine learning classifiers. The potential of modern deep learning techniques is explored using simulated events, and compared to that achievable from more traditional classifiers such as boosted decision trees.

  9. Experiences with Implementing a Distributed and Self-Organizing Scheduling Algorithm for Energy-Efficient Data Gathering on a Real-Life Sensor Network Platform

    NARCIS (Netherlands)

    Zhang, Y.; Chatterjea, Supriyo; Havinga, Paul J.M.

    2007-01-01

    We report our experiences with implementing a distributed and self-organizing scheduling algorithm designed for energy-efficient data gathering on a 25-node multihop wireless sensor network (WSN). The algorithm takes advantage of spatial correlations that exist in readings of adjacent sensor nodes

  10. Machine Learning Algorithms for $b$-Jet Tagging at the ATLAS Experiment

    CERN Document Server

    Paganini, Michela; The ATLAS collaboration

    2017-01-01

    The separation of b-quark initiated jets from those coming from lighter quark flavours (b-tagging) is a fundamental tool for the ATLAS physics program at the CERN Large Hadron Collider. The most powerful b-tagging algorithms combine information from low-level taggers exploiting reconstructed track and vertex information using a multivariate classifier. The potential of modern Machine Learning techniques such as Recurrent Neural Networks and Deep Learning is explored using simulated events, and compared to that achievable from more traditional classifiers such as boosted decision trees.

  11. Enhanced temporal resolution at cardiac CT with a novel CT image reconstruction algorithm: Initial patient experience

    Energy Technology Data Exchange (ETDEWEB)

    Apfaltrer, Paul, E-mail: paul.apfaltrer@medma.uni-heidelberg.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Institute of Clinical Radiology and Nuclear Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim (Germany); Schoendube, Harald, E-mail: harald.schoendube@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Allmendinger, Thomas, E-mail: thomas.allmendinger@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Tricarico, Francesco, E-mail: francescotricarico82@gmail.com [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Department of Bioimaging and Radiological Sciences, Catholic University of the Sacred Heart, “A. Gemelli” Hospital, Largo A. Gemelli 8, Rome (Italy); Schindler, Andreas, E-mail: andreas.schindler@campus.lmu.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Vogt, Sebastian, E-mail: sebastian.vogt@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Sunnegårdh, Johan, E-mail: johan.sunnegardh@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); and others

    2013-02-15

    Objective: To evaluate the effect of a temporal resolution improvement method (TRIM) for cardiac CT on diagnostic image quality for coronary artery assessment. Materials and methods: The TRIM algorithm employs an iterative approach to reconstruct images from less than 180° of projections and uses a histogram constraint to prevent the occurrence of limited-angle artifacts. This algorithm was applied in 11 obese patients (7 men, 67.2 ± 9.8 years) who had undergone second-generation dual-source cardiac CT with 120 kV, 175–426 mAs, and 500 ms gantry rotation. All data were reconstructed with a temporal resolution of 250 ms using traditional filtered back projection (FBP) and of 200 ms using the TRIM algorithm. Contrast attenuation and contrast-to-noise ratio (CNR) were measured in the ascending aorta. The presence and severity of coronary motion artifacts were rated on a 4-point Likert scale. Results: All scans were considered of diagnostic quality. Mean BMI was 36 ± 3.6 kg/m². Average heart rate was 60 ± 9 bpm. Mean effective dose was 13.5 ± 4.6 mSv. When comparing FBP- and TRIM-reconstructed series, the attenuation within the ascending aorta (392 ± 70.7 vs. 396.8 ± 70.1 HU, p > 0.05) and CNR (13.2 ± 3.2 vs. 11.7 ± 3.1, p > 0.05) were not significantly different. A total of 110 coronary segments were evaluated. All studies were deemed diagnostic; however, there was a significant (p < 0.05) difference in the severity score distribution of coronary motion artifacts between FBP (median = 2.5) and TRIM (median = 2.0) reconstructions. Conclusion: The algorithm evaluated here delivers diagnostic imaging quality of the coronary arteries despite 500 ms gantry rotation. Possible applications include improvement of cardiac imaging on slower gantry rotation systems or mitigation of the trade-off between temporal resolution and CNR in obese patients.

  12. Optimized multilayered graded fractal FSS: microgenetic algorithm and comparison with experiment

    Science.gov (United States)

    Sha, Yanan; Vinoy, K. J.; Jose, K. A.; Neo, C.; Varadan, Vijay K.

    2002-07-01

    Radar absorbing material is a very effective means of RCS reduction in the context of stealth technology. In this paper, we present our design of a multi-layer microwave absorber with an embedded fractal frequency selective surface. A micro-genetic algorithm is employed to optimize the design. Two designs are presented, targeting different frequency bands. The experimental results shown here indicate more than 15 dB reduction at X-band in the reflection of a flat surface through the use of this configuration with lossy dielectrics, in good agreement with the simulation results.

  13. The p-EVES study design and methodology: a randomised controlled trial to compare portable electronic vision enhancement systems (p-EVES) to optical magnifiers for near vision activities in visual impairment.

    Science.gov (United States)

    Taylor, John; Bambrick, Rachel; Dutton, Michelle; Harper, Robert; Ryan, Barbara; Tudor-Edwards, Rhiannon; Waterman, Heather; Whitaker, Chris; Dickinson, Chris

    2014-09-01

    To describe the study design and methodology for the p-EVES study, a trial designed to determine the effectiveness, cost-effectiveness and acceptability of portable Electronic Vision Enhancement System (p-EVES) devices and conventional optical low vision aids (LVAs) for near tasks in people with low vision. The p-EVES study is a prospective two-arm randomised cross-over trial to test the hypothesis that, in comparison to optical LVAs, p-EVES can be: used for longer duration; used for a wider range of tasks than a single optical LVA and/or enable users to do tasks that they were not able to do with optical LVAs; allow faster performance of instrumental activities of daily living; and allow faster reading. A total of 100 adult participants with visual impairment are currently being recruited from Manchester Royal Eye Hospital and randomised into either Group 1 (receiving the two interventions A and B in the order AB), or Group 2 (receiving the two interventions in the order BA). Intervention A is a 2-month period with conventional optical LVAs and a p-EVES device, and intervention B is a 2-month period with conventional optical LVAs only. The study adopts a mixed methods approach encompassing a broad range of outcome measures. The results will be obtained from the following primary outcome measures: Manchester Low Vision Questionnaire, capturing device 'usage' data (which devices are used, number of times, for what purposes, and for how long) and the MNRead test, measuring threshold print size, critical print size, and acuity reserve in addition to reading speed at high (≈90%) contrast. Results will also be obtained from a series of secondary outcome measures which include: assessment of timed instrumental activities of daily living and a 'near vision' visual functioning questionnaire. A companion qualitative study will permit comparison of results on how, where, and under what circumstances, p-EVES devices and LVAs are used in daily life. A health economic

  14. Virtual reality visualization algorithms for the ALICE high energy physics experiment on the LHC at CERN

    Science.gov (United States)

    Myrcha, Julian; Trzciński, Tomasz; Rokita, Przemysław

    2017-08-01

    Analyzing the massive amounts of data gathered during many high energy physics experiments, including but not limited to the LHC ALICE detector experiment, requires efficient and intuitive methods of visualisation. One of the possible approaches to that problem is stereoscopic 3D data visualisation. In this paper, we propose several methods that provide high-quality data visualisation and explain how those methods can be applied in virtual reality headsets. The outcome of this work is easily applicable to many real-life applications needed in high energy physics and can be seen as a first step towards using fully immersive virtual reality technologies within the framework of the ALICE experiment.

  15. Thermal weapon sights with integrated fire control computers: algorithms and experiences

    Science.gov (United States)

    Rothe, Hendrik; Graswald, Markus; Breiter, Rainer

    2008-04-01

    The HuntIR long-range thermal weapon sight of AIM has been deployed in various out-of-area missions since 2004 as part of the German Future Infantryman system (IdZ). In 2007 AIM fielded RangIR as an upgrade with an integrated laser range finder (LRF), digital magnetic compass (DMC) and fire control unit (FCU). RangIR fills the capability gaps of day/night fire control for grenade machine guns (GMG) and the enhanced system of the IdZ. Due to proven expertise and proprietary methods in fire control, fast access to military trials for optimisation loops and similar hardware platforms, AIM and the University of the Federal Armed Forces Hamburg (HSU) decided to team up for the development of suitable fire control algorithms. The pronounced ballistic trajectory of the 40 mm GMG requires highly accurate FCU solutions, specifically for air burst ammunition (ABM), and is most sensitive to faint effects like levelling or firing up/downhill. This weapon was therefore selected to validate the quality of the FCU hardware and software under relevant military conditions. For exterior ballistics the modified point mass model according to STANAG 4355 is used. The differential equations of motion are solved numerically, and the two-point boundary value problem is solved iteratively. Computing time varies according to the precision needed and is typically in the range of 0.1–0.5 seconds. RangIR provided outstanding hit accuracy, including ABM fuze timing, in various trials of the German Army and allied partners in 2007 and is now ready for series production. This paper deals mainly with the fundamentals of the fire control algorithms and shows how to implement them in combination with any DSP-equipped thermal weapon sight (TWS) in a variety of light supporting weapon systems.
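
The iterative two-point boundary value problem mentioned above can be illustrated with a drastically simplified model: a 2-D point mass with a lumped quadratic-drag term, integrated numerically, and bisection on the elevation angle to place the impact point on the target. This is not the STANAG 4355 modified point mass model, and all numerical values are assumed.

```python
import math

G = 9.81       # gravity, m/s^2
DRAG = 0.0005  # lumped quadratic-drag coefficient, 1/m (assumed value)

def simulate_range(v0, elev_rad, dt=0.001):
    """Integrate a 2-D point mass with quadratic drag until impact."""
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(elev_rad), v0 * math.sin(elev_rad)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax, ay = -DRAG * v * vx, -G - DRAG * v * vy
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

def solve_elevation(v0, target_range, lo=0.01, hi=math.radians(45)):
    """Two-point boundary value problem: find the elevation that puts the
    impact point on the target, solved iteratively by bisection."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if simulate_range(v0, mid) < target_range:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v0 = 240.0                    # assumed muzzle velocity, m/s
elev = solve_elevation(v0, target_range=1000.0)
achieved = simulate_range(v0, elev)
```

A production FCU solves a richer model (wind, Coriolis, spin, air density) and must also output a fuze time for ABM, but the structure is the same: an ODE integrator inside an iterative boundary-value solver.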

  16. On The Eve Of IYA2009 In Canada

    Science.gov (United States)

    Hesser, James E.; Breland, K.; Hay, K.; Lane, D.; Lacasse, R.; Lemay, D.; Langill, P.; Percy, J.; Welch, D.; Woodsworth, A.

    2009-01-01

    Local events organized by astronomy clubs, colleges and universities across Canada will softly launch IYA on Saturday, 10 January and begin building awareness of opportunities for every Canadian to experience a ‘Galileo Moment’ in 2009. The launch typifies our ‘grass roots’ philosophy based upon our strong partnership of amateurs and professionals, which already represents an IYA legacy. In this poster we anticipate the activities of the first half of 2009 and exhibit the educational and public outreach materials and programs we have produced in both official languages, e.g., Astronomy Trading Cards, Mary Lou's New Telescope, Star Finder, etc. Some of these play central roles in our tracking of participation, including allowing people to register to have their name launched into space in 2010. Several contests for youth are underway, with the prize in one being an hour of Gemini telescope observing. In the first half of 2009 some 30,000 grade 6 students will experience ‘Music of the Spheres’ astronomical orchestral programming conducted by Galileo (a.k.a. Tania Miller, Victoria Symphony). Audiences in Canada and the US will experience Tafelmusik's marvelous new soundscape of music and words exploring the deep connections between astronomy and Baroque-era music. An Astronomy Kit featuring the Galileoscope for classroom and astronomy club EPO will be tested. Canada Post will issue two stamps during 100 Hours of Astronomy. A new production, Galileo Live!, by Canadian planetaria involving live actors will premiere, as will the national Galileo Legacy Lectures, in which top astronomers familiarize the public with forefront research being done in Canada. Image exhibits drawing upon material generated by Canadian astronomers and artists, as well as from the IAU Cornerstones, FETTU and TWAN, are opening in malls and airports early in 2009. We will present the latest information about these and other events.

  17. Algorithms and Algorithmic Languages.

    Science.gov (United States)

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  18. Feature selection using genetic algorithm for breast cancer diagnosis: experiment on three different datasets

    NARCIS (Netherlands)

    Aalaei, Shokoufeh; Shahraki, Hadi; Rowhanimanesh, Alireza; Eslami, Saeid

    2016-01-01

    This study addresses feature selection for breast cancer diagnosis. The proposed process uses a wrapper approach with GA-based feature selection and a PS-classifier. The results of the experiment show that the proposed model is comparable to the other models on the Wisconsin breast cancer datasets. To

  19. EXPERIMENT BASED FAULT DIAGNOSIS ON BOTTLE FILLING PLANT WITH LVQ ARTIFICIAL NEURAL NETWORK ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mustafa DEMETGÜL

    2008-01-01

    In this study, an artificial neural network (ANN) is developed to rapidly find faults in a pneumatic system and to protect the system against failure. Faults in the experimental bottle filling plant can be identified without any interference, using analog values taken from pressure sensors and linear potentiometers placed at different points of the plant. The neural network diagnoses the following plant faults: no bottle present, cap-closing cylinder B not working, cap-closing cylinder C not working, insufficient air pressure, water not filling, and low air pressure. The faults are diagnosed by an artificial neural network with the LVQ algorithm. It would be possible to detect a failure using conventional programming or a PLC; the reason for using an artificial neural network is to indicate where the fault is located, and the approach can be transferred to other systems. The aim is to find the fault with the ANN in real time. The errors occurring in the pneumatic system are collected by a data acquisition card. It is observed that the algorithm is a very capable program for many industrial plants with mechatronic systems.
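
A minimal LVQ1 sketch of the classification idea, with hypothetical (pressure, potentiometer) readings and fault classes standing in for the plant's real sensor signals:

```python
import random

def train_lvq(data, labels, classes, epochs=30, lr=0.1, seed=3):
    """LVQ1: one prototype per class; the winning prototype is pulled
    toward samples of its own class and pushed away from others."""
    rng = random.Random(seed)
    # initialise each prototype at the first sample of its class
    protos = {c: list(next(x for x, y in zip(data, labels) if y == c))
              for c in classes}
    idx = list(range(len(data)))
    for epoch in range(epochs):
        rng.shuffle(idx)
        rate = lr * (1.0 - epoch / epochs)     # decaying learning rate
        for i in idx:
            x, y = data[i], labels[i]
            # nearest prototype (squared Euclidean distance)
            c = min(protos, key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(protos[c], x)))
            sign = 1.0 if c == y else -1.0
            protos[c] = [p + sign * rate * (a - p)
                         for p, a in zip(protos[c], x)]
    return protos

def classify(protos, x):
    return min(protos, key=lambda c: sum((a - b) ** 2
                                         for a, b in zip(protos[c], x)))

# hypothetical sensor readings: (pressure, potentiometer) pairs per fault class
random.seed(0)
centres = {"ok": (5.0, 2.0), "low_pressure": (2.0, 2.0), "no_bottle": (5.0, 0.2)}
data, labels = [], []
for c, (p, q) in centres.items():
    for _ in range(40):
        data.append((p + random.gauss(0, 0.3), q + random.gauss(0, 0.3)))
        labels.append(c)
protos = train_lvq(data, labels, list(centres))
acc = sum(classify(protos, x) == y for x, y in zip(data, labels)) / len(data)
```

After training, each prototype sits near its class centre, so classifying a new sensor vector reduces to a nearest-prototype lookup, which is what makes LVQ attractive for real-time fault indication.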

  20. How are women living with HIV in France coping with their perceived side effects of antiretroviral therapy? Results from the EVE study.

    Directory of Open Access Journals (Sweden)

    Guillemette Quatremère

    Side effects of antiretroviral therapy (ART) can have a negative impact on health-related quality of life, threatening long-term retention in HIV care and adherence to ART. The aim of the French community-based survey EVE was to document personal experiences with side effects, the related physician-patient communication, and solutions found to deal with them. A cross-sectional study of women was conducted between September 2013 and September 2014. An anonymous online questionnaire included the HIV Symptom Distress Module, which explores 20 symptoms. In all, 301 women on ART participated in the study (median age: 49 years; median duration of ART: 14 years). They reported having experienced a median of 12 symptoms (Q1-Q3: 9-15) during the previous 12 months. Overall, 56% of them reported having found at least a partial solution to dealing with their symptoms. Women reporting financial difficulties were half as likely to have found solutions for coping with their side effects (AOR: 0.5; 95% CI: 0.3-0.8). Feeling supported by the health-care provider (AOR: 2.1; 95% CI: 1.1-3.9) and being in contact with HIV/AIDS organisations (AOR: 1.9; 95% CI: 1.2-3.2) were positively associated with coping. Seventeen percent reported having modified their ART regimen to improve tolerance, with only 2 in 3 informing their physician afterwards. Reporting financial difficulties and living with more bothersome symptoms increased the risk of ART regimen modification without health-care provider consultation. The EVE study has called attention to the large number of side effects experienced by women living with HIV, only half of whom have found self-care strategies to manage their symptoms. Modification of the ART regimen by the women themselves was not uncommon.

  1. X-ray digital intra-oral tomosynthesis for quasi-three-dimensional imaging: system, reconstruction algorithm, and experiments

    Science.gov (United States)

    Li, Liang; Chen, Zhiqiang; Zhao, Ziran; Wu, Dufan

    2013-01-01

    At present, there are mainly three x-ray imaging modalities for dental clinical diagnosis: radiography, panorama and computed tomography (CT). We develop a new x-ray digital intra-oral tomosynthesis (IDT) system for quasi-three-dimensional dental imaging, which can be seen as an intermediate modality between traditional radiography and CT. In addition to the normal x-ray tube and digital sensor used in intra-oral radiography, IDT has a specially designed mechanical device to complete the tomosynthesis data acquisition. During the scan, the measurement geometry is such that the sensor is stationary inside the patient's mouth and the x-ray tube moves along an arc trajectory with respect to the intra-oral sensor. Therefore, the projection geometry can be obtained without any other reference objects, which makes it easily accepted in clinical applications. We also present a compressed sensing-based iterative reconstruction algorithm for this kind of intra-oral tomosynthesis. Finally, simulations and experiments were both carried out to evaluate this intra-oral imaging modality and algorithm. The results show that IDT has the potential to become a new tool for dental clinical diagnosis.
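
The flavour of a compressed sensing-based iterative reconstruction can be conveyed with a generic sparse-recovery toy: iterative soft thresholding (ISTA) on a random sensing matrix. This is not the authors' IDT algorithm; the problem sizes, sparsity level and regularisation weight are assumptions chosen for a quick demonstration.

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1,
    the basic building block of compressed-sensing reconstructions."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
n, m = 64, 32                              # 64 unknowns, 32 "projection" samples
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -0.7, 0.5]     # sparse object
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true
x_rec = ista(A, b)
err = np.linalg.norm(x_rec - x_true)
```

The point mirrored from tomosynthesis: although only half as many measurements as unknowns are available (an underdetermined, limited-data problem), a sparsity prior lets the iteration recover the object up to a small shrinkage bias.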

  2. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    CERN Document Server

    AUTHOR|(CDS)2090481

    2016-01-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new "track trigger" will be to build L1 tracks from the tracker information. The architecture that will be implemented in the future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and is currently being demonstrated in hardware, using the "MP7", a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough tran...
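
The vote-accumulation idea behind the Hough transform can be shown with the textbook (rho, theta) form for straight lines; the CMS track trigger bins in track parameters such as curvature and phi instead, and runs in FPGA logic rather than software. The hits and binning below are assumptions for illustration.

```python
import math
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=10.0):
    """Classic (rho, theta) Hough accumulator for straight-line finding.

    Each hit (x, y) votes for every line x*cos(theta) + y*sin(theta) = rho
    passing through it; collinear hits pile up in one accumulator cell."""
    acc = np.zeros((n_theta, n_rho), dtype=int)
    thetas = np.linspace(0.0, math.pi, n_theta, endpoint=False)
    for x, y in points:
        for ti, th in enumerate(thetas):
            rho = x * math.cos(th) + y * math.sin(th)
            ri = int((rho + rho_max) / (2 * rho_max) * n_rho)
            if 0 <= ri < n_rho:
                acc[ti, ri] += 1
    return acc, thetas

# hits from a straight "track" y = 0.5*x + 1, plus random noise hits
rng = np.random.default_rng(7)
track = [(x, 0.5 * x + 1.0) for x in np.linspace(0.0, 8.0, 12)]
noise = [tuple(rng.uniform(0.0, 8.0, size=2)) for _ in range(20)]
acc, thetas = hough_lines(track + noise)
ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
# slope 0.5 -> line direction ~26.6 deg -> normal angle theta ~116.6 deg
theta_deg = math.degrees(thetas[ti])
```

The 12 collinear hits concentrate in a single accumulator cell while the noise votes stay diffuse, which is why the peak search is robust and why the accumulator maps naturally onto parallel FPGA logic.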

  3. Programs for the work with ENSDF format files: Evaluator's editor EVE, Viewer for the nuclear level schemes

    CERN Document Server

    Shulyak, G I

    2010-01-01

    Tools for the regular work of the nuclear data evaluator are presented: the context-dependent editor EVE and a viewer for the level schemes of nuclei from ENSDF datasets. These programs may be used by anybody who works with the Evaluated Nuclear Structure Data File, as well as for educational purposes.

  4. What makes the coming school year special for you? / Valmar Pantšenko, Martin Adamson, Maire Tamm, Eve Reisalu

    Index Scriptorium Estoniae

    2012-01-01

    The question is answered by Valmar Pantšenko, a final-year student at Tartu Adult Gymnasium; Martin Adamson, Estonian motocross champion in the Quad 100 class; Maire Tamm, director of studies at Tamsalu Gymnasium; and Eve Reisalu, teacher-methodologist at Ristiku Basic School.

  5. Project overview of OPTIMOS-EVE: the fibre-fed multi-object spectrograph for the E-ELT

    NARCIS (Netherlands)

    Navarro, R.; Chemla, F.; Bonifacio, P.; Flores, H.; Guinouard, I.; Huet, J.-M.; Puech, M.; Royer, F.; Pragt, J.H.; Wulterkens, G.; Sawyer, E.C.; Caldwell, M.E.; Tosh, I.A.J.; Whalley, M.S.; Woodhouse, G.F.W.; Spanò, P.; Di Marcantonio, P.; Andersen, M.I.; Dalton, G.B.; Kaper, L.; Hammer, F.

    2010-01-01

    OPTIMOS-EVE (OPTical Infrared Multi Object Spectrograph - Extreme Visual Explorer) is the fibre fed multi object spectrograph proposed for the European Extremely Large Telescope (E-ELT), planned to be operational in 2018 at Cerro Armazones (Chile). It is designed to provide a spectral resolution of

  6. Deaf-Accessibility for Spoonies: Lessons from Touring "Eve and Mary Are Having Coffee" While Chronically Ill

    Science.gov (United States)

    Barokka (Okka), Khairani

    2017-01-01

    This article presents lessons from touring a show on pain with limited resources and in chronic pain. In 2014, I toured solo deaf-accessible poetry/art show "Eve and Mary Are Having Coffee" in various forms in the UK, Austria, and India. As an Indonesian woman with then-extreme chronic pain and fatigue, herein are lessons learned from…

  7. Development of reconstruction algorithms for inelastic processes studies in the TOTEM experiment at LHC

    CERN Document Server

    Berretti, Mirko; Latino, Giuseppe

    The TOTEM experiment at the Large Hadron Collider (LHC) is designed and optimized to measure the total pp cross section at a centre-of-mass energy of 14 TeV with a precision of about 1–2%, to study the nuclear elastic pp cross section over a wide range of the squared four-momentum transfer (10⁻³ GeV² < |t| < 10 GeV²) and to perform a comprehensive physics program on diffractive dissociation processes, partially in cooperation with the CMS experiment. Based on the “luminosity independent method”, the evaluation of the total cross section with such a small error will in particular require the simultaneous measurement of the pp elastic scattering cross section dσ/dt down to |t| ≈ 10⁻³ GeV² (to be extrapolated to t = 0) as well as of the pp inelastic interaction rate, with large coverage in the forward region. The TOTEM physics programme will be accomplished by using three different types of detectors: elastically scattered protons will be detected by Roman Pot detectors (based on sili...

  8. Level 3 trigger algorithm and hardware platform for the HADES experiment

    Energy Technology Data Exchange (ETDEWEB)

    Kirschner, Daniel Georg

    2007-10-26

    One focus of the HADES experiment is the investigation of the decay of light vector mesons inside a dense medium into lepton pairs. These decays provide a conceptually ideal tool to study the invariant mass of the vector meson in-medium, since the lepton pairs of these meson decays leave the reaction without further strong interaction. Thus, no final state interaction affects the measurement. Unfortunately, the branching ratios of vector mesons into lepton pairs are very small (≈ 10⁻⁵). This calls for a high rate, high acceptance experiment. In addition, a sophisticated real time trigger system is used in HADES to enrich the interesting events in the recorded data. The focus of this thesis is the development of a next generation real time trigger method to improve the enrichment of lepton events in the HADES trigger. In addition, a flexible hardware platform (GE-MN) was developed to implement and test the trigger method. The GE-MN features two Gigabit-Ethernet interfaces for data transport, a VMEbus for slow control and configuration, and a TigerSHARC DSP for data processing. It provides the experience to discuss the challenges and benefits of using a commercial standard network technology based system in an experiment. The developed and tested trigger method correlates the ring information of the HADES RICH with the fired wires (cells) of the HADES MDC detector. This correlation method operates by calculating for each event the cells which should have seen the signal of a traversing lepton, and compares these calculated cells to all the cells that did see a signal. The cells which should have fired are calculated from the polar and azimuthal angle information of the RICH rings by assuming a straight line in space, which is starting at the target and extending into a direction given by the ring angles. The line extends through the inner MDC chambers and the traversed cells are those that should have been hit. To compensate different sources for

  9. Summer Student Project Report. Parallelization of the path reconstruction algorithm for the inner detector of the ATLAS experiment.

    CERN Document Server

    Maldonado Puente, Bryan Patricio

    2014-01-01

    The inner detector of the ATLAS experiment has two types of silicon detectors used for tracking: the Pixel Detector and the SCT (semiconductor tracker). Once a proton-proton collision occurs, the resulting particles pass through these detectors and are recorded as hits on the detector surfaces. A medium- to high-energy particle passes through seven different surfaces of the two detectors, leaving seven hits, while lower-energy particles can leave many more hits as they circle through the detector. For a typical event under the expected operational conditions, 30 000 hits on average are recorded by the sensors. Only high-energy particles are of interest for physics analysis and are taken into account for the path reconstruction; thus, a filtering process helps to discard the low-energy particles produced in the collision. The following report presents a solution for increasing the speed of the filtering process in the path reconstruction algorithm.

  10. Testing Nelder-Mead based repulsion algorithms for multiple roots of nonlinear systems via a two-level factorial design of experiments.

    Science.gov (United States)

    Ramadas, Gisela C V; Rocha, Ana Maria A C; Fernandes, Edite M G P

    2015-01-01

    This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.

  11. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    CERN Document Server

    Rescigno, R; Juliani, D; Spiriti, E; Baudot, J; Abou-Haidar, Z; Agodi, C; Alvarez, M A G; Aumann, T; Battistoni, G; Bocci, A; Böhlen, T T; Boudard, A; Brunetti, A; Carpinelli, M; Cirrone, G A P; Cortes-Giraldo, M A; Cuttone, G; De Napoli, M; Durante, M; Gallardo, M I; Golosio, B; Iarocci, E; Iazzi, F; Ickert, G; Introzzi, R; Krimmer, J; Kurz, N; Labalme, M; Leifels, Y; Le Fevre, A; Leray, S; Marchetto, F; Monaco, V; Morone, M C; Oliva, P; Paoloni, A; Patera, V; Piersanti, L; Pleskac, R; Quesada, J M; Randazzo, N; Romano, F; Rossi, D; Rousseau, M; Sacchi, R; Sala, P; Sarti, A; Scheidenberger, C; Schuy, C; Sciubba, A; Sfienti, C; Simon, H; Sipala, V; Tropea, S; Vanstalle, M; Younis, H

    2014-01-01

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumours and the surrounding healthy tissues. Nowadays, a limited set of double differential carbon fragmentation cross-section is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on the CMOS technology. The performances achieved using this device for hadrontherapy purpose are discussed. For each reconstruction step (clustering, tracking and vertexing), different...

  12. HISTOPATHOLOGICAL SCALE AND SYNOVITIS ALGORITHM – 15 YEARS OF EXPERIENCE: EVALUATION AND FOLLOWING PROGRESS

    Directory of Open Access Journals (Sweden)

    V. Krenn

    2017-01-01

      inflammatory antigens were suggested for immunohistochemical analysis (including Ki-67, CD68, CD3, CD15 and CD20). This immunohistochemical scale and the subdivision into low-grade and high-grade synovitis provided a possibility to assess the risk of development and the biological sensitivity of rheumatoid arthritis. Thus, an important histological input was made into primary rheumatology diagnostics, which previously did not consider tissue changes. Due to the formal integration of the synovitis scale into the algorithm of synovial pathology diagnostics, a comprehensive classification was developed specifically for differentiated orthopaedic diagnostics.

  13. Design, implementation and deployment of the Saclay muon reconstruction algorithms (Muonbox/y) in the Athena software framework of the ATLAS experiment

    CERN Document Server

    Formica, A

    2003-01-01

    This paper gives an overview of a reconstruction algorithm for muon events in the ATLAS experiment at CERN. After a short introduction to the ATLAS Muon Spectrometer, we describe the procedure performed by the algorithms Muonbox and Muonboy (its latest version) to carry out the reconstruction task correctly. These algorithms have been developed in Fortran and work both in the official C++ framework Athena and in stand-alone mode. A description of the interaction between Muonboy and Athena is given, together with the reconstruction performance (efficiency and momentum resolution) obtained with Monte Carlo data.

  14. On the Juno Radio Science Experiment: models, algorithms and sensitivity analysis

    CERN Document Server

    Tommei, Giacomo; Serra, Daniele; Milani, Andrea

    2014-01-01

    Juno is a NASA mission launched in 2011 with the goal of studying Jupiter. The probe will arrive at the planet in 2016 and will be placed for one year in a highly eccentric polar orbit to study the composition, gravity and magnetic field of the planet. The Italian Space Agency (ASI) provided the radio science instrument KaT (Ka-Band Translator) used for the gravity experiment, which has the goal of studying Jupiter's deep structure by mapping the planet's gravity: this instrument takes advantage of synergies with a similar tool in development for BepiColombo, the ESA cornerstone mission to Mercury. The Celestial Mechanics Group of the University of Pisa, as part of the Juno Italian team, is developing orbit determination and parameter estimation software for processing the real data independently of NASA's ODP software. This paper has a twofold goal: first, to describe the development of this software, highlighting the models used; second, to perform a sensitivity analysis on the parameters ...

  15. Remixing music using source separation algorithms to improve the musical experience of cochlear implant users.

    Science.gov (United States)

    Pons, Jordi; Janer, Jordi; Rode, Thilo; Nogueira, Waldo

    2016-12-01

    Music perception remains rather poor for many cochlear implant (CI) users due to deficient pitch perception. However, comprehensible vocals and simple musical structures are well perceived by many CI users. In previous studies, researchers re-mixed songs to make music more enjoyable for CI users, boosting the preferred musical elements (vocals or beat) while attenuating the others. However, mixing music requires the individually recorded tracks (multitracks), which are usually not accessible. To overcome this limitation, source separation (SS) techniques are proposed to estimate the multitracks. These estimated multitracks are then re-mixed to create more pleasant music for CI users. However, SS may introduce undesirable audible distortions and artifacts. Experiments conducted with CI users (N = 9) and normal hearing listeners (N = 9) show that CI users can have different mixing preferences than normal hearing listeners. Moreover, it is shown that CI users' mixing preferences are user dependent. It is also shown that SS methods can be successfully used to create preferred re-mixes even though distortions and artifacts are present. Finally, CI users' preferences are used to propose a benchmark that defines the maximum acceptable levels of SS distortion and artifacts for two different mixes proposed by CI users.

  16. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    This article reflects on the kinds of situations and spaces where people and algorithms meet. In what situations do people become aware of algorithms? How do they experience and make sense of these algorithms, given their often hidden and invisible nature? To what extent does an awareness… Examining how algorithms make people feel, then, seems crucial if we want to understand their social power…

  17. Polymer-based blood vessel models with micro-temperature sensors in EVE

    Science.gov (United States)

    Mizoshiri, Mizue; Ito, Yasuaki; Hayakawa, Takeshi; Maruyama, Hisataka; Sakurai, Junpei; Ikeda, Seiichi; Arai, Fumihito; Hata, Seiichi

    2017-04-01

    Cu-based micro-temperature sensors were directly fabricated on poly(dimethylsiloxane) (PDMS) blood vessel models in EVE using a combined process of spray coating and femtosecond laser reduction of CuO nanoparticles. A CuO nanoparticle solution coated on a PDMS blood vessel model is thermally reduced and sintered by focused femtosecond laser pulses in atmosphere to write the sensors. After removal of the non-irradiated CuO nanoparticles, Cu-based micro-temperature sensors are formed. The sensors are thermistor-type devices whose temperature-dependent resistance is used to measure the temperature inside the blood vessel model. This fabrication technique is useful for the direct writing of Cu-based microsensors and actuators on arbitrary nonplanar substrates.
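    As a rough illustration of how such a sensor is read out, a measured resistance can be mapped to temperature by inverting a calibration model. The linear temperature-coefficient-of-resistance (TCR) model and the values of R0, T0 and alpha below are generic illustrative assumptions (alpha of about 3.9e-3 per kelvin is typical for bulk copper), not calibration data from the paper.

```python
# Hedged sketch: convert resistance to temperature for a Cu micro-sensor,
# assuming the linear TCR model R(T) = R0 * (1 + alpha*(T - T0)).
# R0, T0 and alpha are illustrative, not from the paper.

def temperature_from_resistance(R, R0=100.0, T0=25.0, alpha=3.9e-3):
    """Invert R(T) = R0 * (1 + alpha*(T - T0)) for T, in deg C."""
    return T0 + (R / R0 - 1.0) / alpha

# Round trip: a sensor at 37 deg C should read back 37 deg C.
R_37 = 100.0 * (1.0 + 3.9e-3 * (37.0 - 25.0))
print(round(temperature_from_resistance(R_37), 6))  # 37.0
```

    In practice R0 and alpha would come from a two-point calibration of each written sensor, since laser sintering conditions affect the film's resistivity.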

  18. The reception of Bošković's theory of forces in Paris

    OpenAIRE

    Martinović, Ivica

    2013-01-01

    The reception of Bošković's theory of forces in Paris can be traced in Parisian scientific journals and books from 1754 to 1803, and it is marked by five names: Gerdil, Berthier, Para du Phanjas, Saury and Lalande. This reception was preceded by two missed opportunities. Half a year after Bošković's appointment as a corresponding member of the Académie Royale des Sciences, the December 1748 issue of the Parisian journal Journal des Sçavans mentioned the treatise De viribus vivis (1745) in a review of Bošković's work, but ...

  19. Salt extraction by the Solovetsky monastery on the eve of the 1764 secularization

    Directory of Open Access Journals (Sweden)

    A. BOGDANOVA

    2014-02-01

    The article deals with the circumstances of the Solovetsky monastery's salt production on the eve of the secularization that took place in 1764. The author's goal is to examine the economic value of the Solovetsky monastery's salt production at the time of its loss as a result of the secularization reform. The examination is based on the study of archival documents. The following issues were considered: (1) the structure of the salt business in Russia in the 18th century; (2) the value of Pomorian salt in the all-Russian salt market in the middle of the 18th century; (3) the value of the Solovetsky monastery's share in the all-Russian Pomorian salt delivery; (4) an estimate of the volume and profitability of the salt business for the Solovetsky monastery on the eve of 1764. The analysis shows a significant market decline of Pomorian salt in the 18th century, about half of which was produced by the Solovetsky monastery. It was caused first of all by the appearance of a significant number of competing salt manufacturers with lower salt costs coming from cheaper prime cost and cheaper delivery. In particular, the market of Vologda, formerly the largest outlet for Solovetsky salt, was fully occupied by other suppliers by the middle of the 18th century. The Solovetsky monastery's overall salt production shrank to half of its mid-17th-century value. By the time of the secularization reform, the salt production was not earning a significant profit for the Solovetsky monastery. Under the strong state monopoly, the salt business had transformed into a labour-consuming national service obligation. Deprived of the land estates together with its salt mines, the Solovetsky monastery was also set free from a hard economic obligation.

  20. Poster - Thurs Eve-31: Clinical implementation and experience with EPID-based precision isocentre localization.

    Science.gov (United States)

    Heaton, R; Smale, J; Norrlinger, B; Wang, Y; van Prooijen, M; Islam, M

    2008-07-01

    Modern linear accelerators contain multiple isocentres, defined by the mechanical motions of the gantry, collimator and table. Isocentre localization for these motions has been performed using film and manual evaluations, which have difficulty relating the individual motions. To address these limitations, we have developed an EPID-based technique to measure the isocentre position for each of the treatment unit motions. This technique uses the projected position of a radio-opaque marker at the isocentre in a series of MV images to determine the motion of the isocentre. The analytical procedure has been implemented in the clinic using a MatLab code to automatically analyze images and determine both the isocentre position and the motion about the mean for each of the gantry, collimator and table. Results of isocentre measurements for 18 machines from 2 different vendors at 2 separate clinics are reported. These measurements show that while the positions of the mean isocentres are contained within a 2 mm sphere, combinations of gantry, table and collimator rotations can be found that result in treatment isocentres more than 2 mm apart. Results for a treatment unit that underwent a recent equipment upgrade are also presented, showing a small change in the location of the gantry isocentre relative to the table isocentre. The implementation of this isocentre localization technique has provided important clinical information and can be efficiently completed in less than an hour. This information is an important consideration in monitoring changes and in assessing the treatment precision that can be obtained. © 2008 American Association of Physicists in Medicine.
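    The kind of analysis described above can be illustrated with a minimal sketch: given the projected marker positions measured at several gantry (or collimator, or table) angles, the mean position estimates the isocentre and the largest excursion about the mean quantifies the motion. The coordinates below are invented sample values, not clinical measurements.

```python
import numpy as np

# Invented sample data: projected marker positions (mm) in the imager
# plane at four gantry angles, relative to the imaging panel centre.
proj = np.array([[0.3, -0.1],
                 [-0.2, 0.4],
                 [0.1, -0.3],
                 [-0.4, 0.2]])

mean_iso = proj.mean(axis=0)                                # mean isocentre estimate
excursion = np.linalg.norm(proj - mean_iso, axis=1).max()   # worst wobble about the mean

print("mean isocentre offset (mm):", mean_iso)
print("max excursion about mean (mm):", round(float(excursion), 3))
```

    A real implementation also needs the imaging geometry (source-to-imager distance, panel offsets) to back-project the panel-plane positions into room coordinates before comparing motions against a tolerance such as the 2 mm sphere quoted above.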

  1. Industrial experience of process identification and set-point decision algorithm in a full-scale treatment plant.

    Science.gov (United States)

    Yoo, Changkyoo; Kim, Min Han

    2009-06-01

    This paper presents industrial experience of process identification, monitoring, and control in a full-scale wastewater treatment plant. The objectives of this study were (1) to apply and compare different process-identification methods of proportional-integral-derivative (PID) autotuning for stable dissolved oxygen (DO) control, (2) to implement a process monitoring method that estimates the respiration rate simultaneously during the process-identification step, and (3) to propose a simple set-point decision algorithm for determining the appropriate set point of the DO controller for optimal operation of the aeration basin. The proposed method was evaluated in the industrial wastewater treatment facility of an iron- and steel-making plant. Among the process-identification methods, using the control signal of the controller's set-point change was best for identifying low-frequency information and enhancing robustness to low-frequency disturbances. The combined automatic control and set-point decision method reduced total electricity consumption by 5% and electricity cost by 15% compared with the fixed-gain PID controller, when considering only the surface aerators. Moreover, as a result of the improved control performance, the fluctuation of the effluent quality decreased and the overall effluent water quality improved.
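    A minimal sketch of the control scheme discussed above: a discrete PID loop for dissolved oxygen plus a toy set-point decision rule. The gains, the load threshold, and the first-order aeration model coefficients are illustrative assumptions, not the plant's identified parameters.

```python
# Hedged sketch: discrete PID control of dissolved oxygen (DO) with a
# toy set-point decision rule; all numbers are illustrative assumptions.

def pid_step(error, state, Kp=2.0, Ki=0.5, Kd=0.1, dt=0.1):
    """One PID update; state carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative
    return u, (integral, error)

def decide_setpoint(load):
    # Toy set-point decision: raise the DO target under high influent load.
    return 2.0 if load < 1.0 else 3.0

# Simulate a crude first-order aeration response: dDO/dt = -a*DO + b*u
a, b, dt = 0.2, 0.2, 0.1
do, state = 0.0, (0.0, 0.0)
setpoint = decide_setpoint(load=1.5)        # high load -> target 3.0 mg/L
for _ in range(400):
    u, state = pid_step(setpoint - do, state)
    do += (-a * do + b * max(u, 0.0)) * dt  # aeration cannot be negative
print(round(do, 2))                         # settles near the set point
```

    The point of the set-point decision layer, as in the abstract, is that the DO target itself becomes a tuning knob: lowering it when the load allows is what saves aeration energy, while the PID loop merely tracks whatever target is chosen.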

  2. Kick off of the 2017-2018 school year at the EVE and School of the CERN Staff Association

    CERN Multimedia

    Staff Association

    2017-01-01

    The Children’s Day-Care Centre (“Espace de Vie Enfantine” - EVE) and School of the CERN Staff Association opened its doors once again to welcome the children, along with the teaching and administrative staff of the structure. The start of the school year was carried out gradually and in small groups to allow quality interaction between children, professionals and parents. At the EVE (Nursery and Kindergarten) and School, the children have the opportunity to thrive in a privileged environment, rich in cultural diversity, since the families (parents and children) come from many different nationalities. The teaching staff do their utmost to ensure that the children can become more autonomous and develop their social skills, all the while taking care of their well-being. This year, several new features are being introduced, for instance, first steps towards English language awareness. Indeed, the children will get to discover the English language in creative classes together with tr...

  3. Open Day at EVE and School of CERN Staff Association: an opportunity for many parents to discover the structure.

    CERN Document Server

    Staff Association

    2017-01-01

    On Saturday, 4 March 2017, the Children’s Day-Care Centre EVE and School of the CERN Staff Association opened its doors to allow interested parents to visit the structure. (Photo caption: Carole Dargagnon presents the EVE and School during the open day.) This event was a great success and brought together many families. The Open Day was held in two sessions (the first at 10 am, the second at 11 am), each consisting of two parts: a general presentation of the structure by the Headmistress, Carole Dargagnon, and a tour of the installations with Marie-Luz Cavagna and Stéphanie Palluel, the administrative assistants. The management team was delighted to offer parents the opportunity to participate in this pleasant event, where everyone could express themselves, ask questions and find answers in a friendly atmosphere.

  4. At the sources of inspiration in Amsterdam: what to strive for and what to avoid... / Eve Koha

    Index Scriptorium Estoniae

    Koha, Eve

    2002-01-01

    On the 32nd annual conference of the worldwide glass art association, the Glass Art Society, in Amsterdam, 30 May - 2 June. Estonia was represented by Viivi-Ann Keerdo, Eve Koha, Kai Koppel, Eeva Käsper-Lennuk, Ivo Lill and Tiina Sarapu. At the exhibition "Young and Hot, European Emerging Artists", Estonia was represented by E. Käsper-Lennuk and T. Sarapu. Lifetime achievement awards went to the American Fritz Dreisbach (b. 1941) and the Dane Finn Lynggaard (b. 1939).

  5. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we would like to emphasize another side to the algorithmic everyday life. We argue that algorithms can instigate and facilitate imagination, creativity, and frivolity, while saying something that is simultaneously old and new, always almost repeating what was before but never quite returning. We show this by threading together stimulating quotes and screenshots from Google’s autocomplete algorithms. In doing so, we invite the reader to re-explore Google’s autocomplete algorithms in a creative, playful, and reflexive way, thereby rendering more visible some of the excitement and frivolity that comes from being...

  6. Clinical experience of the use of a pharmacological treatment algorithm for major depressive disorder in patients with advanced cancer.

    Science.gov (United States)

    Okamura, Masako; Akizuki, Nobuya; Nakano, Tomohito; Shimizu, Ken; Ito, Tatsuhiko; Akechi, Tatsuo; Uchitomi, Yosuke

    2008-02-01

    The objective of this study was to describe the applicability of, and dropout from, a pharmacological treatment algorithm for major depressive disorder in patients with advanced cancer. Psychiatrists treated major depressive disorder in advanced cancer patients on the basis of the algorithm. To discuss the problems related to the algorithm, we reviewed the reasons for its non-application and the reasons why patients dropped out within a week of initiation of treatment. The algorithm was applied in 54 of 59 cases (applicability rate, 92%). The reasons for non-application were the need to add a benzodiazepine to an antidepressant in 4 cases and the need to choose alprazolam, despite the depression being moderate in severity, in order to obtain a rapid onset of action and reduce anxiety in a patient with a short prognosis. Nineteen of the 55 patients dropped out within a week of initiation of treatment based on the algorithm; delirium was the most frequent reason for dropout. The applicability rate was high, but several problems were identified, including those related to the combination of antidepressants and benzodiazepines, the pharmacological treatment of depression in patients with a short prognosis, and delirium due to antidepressants.

  7. Clinical experience of a new rate drop response algorithm in the treatment of vasovagal and carotid sinus syncope.

    Science.gov (United States)

    Johansen, J B; Bexton, R S; Simonsen, E H; Markowitz, T; Erickson, M K

    2000-07-01

    Dual chamber pacing has proven beneficial in patients with sudden drops in heart rate, as seen in vasovagal syncope and carotid sinus syndrome. Newer algorithms for faster detection of an insidious drop in heart rate and short-lasting intervention pacing at a high rate, such as the rate drop response algorithm in the Medtronic Kappa series of pacemakers, might improve the effect of pacing. Two case reports that demonstrate the use of these rate drop response algorithms are presented. A 24-year-old woman with recurrent episodes of syncope and repeated tilt-table tests with vasovagal cardioinhibitory outcomes had a Medtronic Kappa 400 pacemaker implanted. Syncope was abolished during repeat tilt-table testing following pacemaker implantation and proper functioning of the rate drop response algorithm. The patient has been free of syncope during follow-up, apart from a single episode that occurred due to neglect of vasovagal warning symptoms. A 52-year-old man with coronary artery disease developed recurrent blackouts. Carotid sinus massage resulted in 5.5 s of asystole and presyncope. A Medtronic Kappa 700 pacemaker with a rate drop response algorithm was implanted and the patient became asymptomatic. The rate drop response algorithm is discussed in detail on the basis of the case reports, and recommendations are given for the use of this algorithm in patients with vasovagal syncope and carotid sinus syndrome.

  8. The Northern Lights Experience - Negotiation strategies

    OpenAIRE

    Smedseng, Nina

    2014-01-01

    With rapidly increasing tourist numbers, the potential in commercialising the Northern Lights has grown immensely over the last few years, and one can only imagine what possibilities the future holds for professional Northern Lights experience providers. One of the biggest challenges of the Northern Lights experiences is how to deal with the natural conditions and constraints of this ever shifting phenomenon. The experience providers cannot guarantee sightings of the Northern Lights eve...

  9. Algorithms Introduction to Algorithms

    Indian Academy of Sciences (India)

    Algorithms: Introduction to Algorithms, by R. K. Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 1, Issue 1, January 1996, pp. 20-27. Full text: http://www.ias.ac.in/article/fulltext/reso/001/01/0020-0027

  10. Start of enrolment for the Champs-Fréchets crèche (EVE)

    CERN Document Server

    HR Department

    2008-01-01

    As announced in Bulletin 43/2007, CERN signed an agreement with the commune of Meyrin on 17 October 2007 under which 20 places will be reserved for the children of CERN personnel in the Champs-Fréchets day care centre (EVE), which will open on Monday, 25 August, and CERN will contribute to the funding. This agreement allows members of the CERN personnel (employees and associates) access to the crèche, for children aged between 4 months and 4 years, irrespective of where they are living. Applications for the school year starting autumn 2008 will be accepted from Monday 17 March until Monday 30 June 2008. Members of the personnel must complete the enrolment formalities with the Meyrin infant education service themselves: Mairie de Meyrin Service de la Petite Enfance 2 rue des Boudines Case postale 367 - 1217 Meyrin 1 - Tel. + 41 (0)22 782 21 21 mailto:meyrin@meyrin.ch http://www.meyrin.ch/petiteenfance Application forms (in PDF) can be downloaded from the website of the com...

  11. Birds flee en mass from New Year’s Eve fireworks

    Science.gov (United States)

    Dokter, Adriaan M.; van Gasteren, Hans; van Loon, E. Emiel; Leijnse, Hidde; Bouten, Willem

    2011-01-01

    Anthropogenic disturbances of wildlife, such as noise, human presence, hunting activity, and motor vehicles, are becoming an increasing concern in conservation biology. Fireworks are an important part of celebrations worldwide, and although humans often find fireworks spectacular, fireworks are probably perceived quite differently by wild animals. Behavioral responses to fireworks are difficult to study at night, and little is known about the negative effects fireworks may have on wildlife. Every year, thousands of tons of fireworks are lit by civilians on New Year’s Eve in the Netherlands. Using an operational weather radar, we quantified the reaction of birds to fireworks in 3 consecutive years. Thousands of birds took flight shortly after midnight, with high aerial movements lasting at least 45 min and peak densities measured at 500 m altitude. The highest densities were observed over grasslands and wetlands, including nature conservation sites, where thousands of waterfowl rest and feed. The Netherlands is the most important winter staging area for several species of waterfowl in Europe. We estimate that hundreds of thousands of birds in the Netherlands take flight due to fireworks. The spatial and temporal extent of disturbance is substantial, and potential consequences are discussed. Weather radar provides a unique opportunity to study the reaction of birds to fireworks, which has otherwise remained elusive. PMID:22476363

  12. The Eroticism of Artificial Flesh in Villiers de L'Isle Adam's L'Eve Future

    Directory of Open Access Journals (Sweden)

    Patricia Pulham

    2008-10-01

    Villiers de L'Isle Adam's 'L'Eve Future', published in 1886, features a fictional version of the inventor Thomas Edison who constructs a complex, custom-made android for the Englishman Lord Ewald as a substitute for his unsatisfactory lover. Hadaly, the android, has a number of literary and cultural precursors and successors. Her most commonly accepted ancestor is Olympia in E. T. A. Hoffmann's 'The Sandman' (1816), and among her fascinating descendants are Oskar Kokoschka's 'Silent Woman'; Model Borghild, a sex doll designed by German technicians during World War II; 'Caracas' in Tommaso Landolfi's short story 'Gogol's Wife' (1954); a variety of gynoids and golems from the realms of science fiction, including Ira Levin's 'Stepford Wives' (1972); and, most recently, that silicon masterpiece, the Real Doll. All, arguably, have their genesis in the classical myth of Pygmalion. This essay considers the tension between animation and stasis in relation to this myth, and explores the necrophiliac aesthetic implicit in Villiers's novel.

  13. The Alpine marmot from the cave Matjaževe kamre

    Directory of Open Access Journals (Sweden)

    1994-12-01

    The excavations carried out in the Pleistocene sediments of a minor chamber belonging to a major cave complex, referred to as Matjaževe kamre, disclosed a paleolithic station with two cultural horizons of different ages and a fauna rather abundant in quantity but frugal in the number of species involved. In the present treatise the two authors deal merely with the paleontological elaboration of the Alpine marmot fossil remains assembled in layer 2 of the paleolithic station as well as in the adjacent side gallery of the same cave complex. In the paleolithic station the bones had been crushed and burnt on purpose, while in the side gallery they have persisted nearly intact. Comparisons of the measured dimensions of the teeth and of the cranial and postcranial skeleton with findings of Alpine marmots of the same or greater age on the territory of Slovenia show that the measures differ only slightly, agreeing usually with the size of recent animals. Due to the fact that the fauna unearthed from layer 2 is essentially represented by the fossil remains of the Alpine marmot, whilst representatives of the tundra are missing, the authors attributed this layer to the late glacial. The statement is furthermore confirmed by the classification of the stone tools discovered in the same layer, attributed by Osole (1974, 29) to the Epigravettian. The marmot remains from the side gallery are most probably of the same age.

  14. Allowances of officers of the Russian and Austro-Hungarian armies on the eve of the First World War

    Directory of Open Access Journals (Sweden)

    Alexander P. Abramov

    2016-09-01

    On the basis of historical material, the article provides information on measures taken by the state and military administration on the eve of the First World War to improve the welfare of Russian and Austro-Hungarian officers through various forms of material incentives, reflected in cash payments, promotions, awards and social guarantees. On the basis of archival materials of the period under study, open scientific publications and Internet resources, the article discloses the features of the assignment of salaries and of various allowances and compensations in the Russian army in comparison with the Austro-Hungarian army, Russia's opponent in the First World War. The author notes that the existing system of monetary allowances in the Russian army was more advantageous than that in the Austro-Hungarian army. However, neither one nor the other could fully meet the needs of the majority of officers of the two armies that entered the First World War as opponents. One of the major shortcomings, both in Russia and in the Austro-Hungarian Empire, was the wide gap in the amounts of all kinds of monetary allowances between chief officers, staff officers and generals.

  15. New Year's Eve injuries caused by celebratory gunfire--Puerto Rico, 2003.

    Science.gov (United States)

    2004-12-24

    Bullets fired into the air during celebrations fall with sufficient force to cause injury and death. However, few data exist regarding the epidemiology of injuries related to celebratory gunfire. In Puerto Rico, where such celebratory actions are common, news media reports have indicated that approximately two persons die and an estimated 25 more are injured each year from celebratory gunfire on New Year's Eve. The Puerto Rico Department of Health (PRDOH) invited CDC and local law enforcement agencies to assist in the investigation of injuries resulting from celebratory gunfire that occurred during December 31, 2003-January 1, 2004. This report summarizes the findings of that investigation, which determined that 1) bullets from probable celebratory gunfire caused 19 injuries, including one death and 2) such injuries affected a higher percentage of women and children aged <15 years than injuries from noncelebratory gunfire, with the majority occurring in certain public housing areas in densely populated, metropolitan San Juan. Education and enforcement of existing laws are needed to prevent these injuries.

  16. Effect of radiologists' experience with an adaptive statistical iterative reconstruction algorithm on detection of hypervascular liver lesions and perception of image quality.

    Science.gov (United States)

    Marin, Daniele; Mileto, Achille; Gupta, Rajan T; Ho, Lisa M; Allen, Brian C; Choudhury, Kingshuk Roy; Nelson, Rendon C

    2015-10-01

    To prospectively evaluate whether clinical experience with an adaptive statistical iterative reconstruction algorithm (ASiR) has an effect on radiologists' diagnostic performance and confidence in the diagnosis of hypervascular liver tumors, as well as on their subjective perception of image quality. Forty patients with 65 hypervascular liver tumors underwent contrast-enhanced MDCT during the hepatic arterial phase. Image datasets were reconstructed with a filtered back projection algorithm and with ASiR (20%, 40%, 60%, and 80% blending). During two reading sessions, performed before and after a three-year period of clinical experience with ASiR, three readers assessed the datasets for lesion detection, likelihood of malignancy, and image quality. For all reconstruction algorithms, there was no significant change in the readers' diagnostic accuracy and sensitivity for the detection of liver lesions between the two reading sessions. However, the 60% ASiR dataset yielded a significant improvement in specificity, lesion conspicuity, and confidence in the lesion likelihood of malignancy during the second reading session. An ASiR dataset also resulted in a significant improvement in the readers' perception of image quality during the second reading session. Clinical experience with the ASiR algorithm may improve radiologists' diagnostic performance for the diagnosis of hypervascular liver tumors, as well as their perception of image quality.
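    The percentage blending referred to above (20% to 80% ASiR) is, in general terms, a weighted average of a filtered back projection image and an iterative reconstruction. A hedged sketch of that weighting, using synthetic stand-in arrays rather than CT data:

```python
import numpy as np

# Illustrative blending of an FBP image with an iterative reconstruction;
# the arrays are synthetic stand-ins, not CT data.
def blend(fbp, ir, percent):
    w = percent / 100.0
    return (1.0 - w) * fbp + w * ir

fbp = np.array([10.0, 20.0, 30.0])
ir = np.array([8.0, 22.0, 28.0])
print(blend(fbp, ir, 60))   # 60% ASiR-style weighting
```

    Higher blending percentages weight the iterative reconstruction more heavily, which is why the noise texture (and readers' perception of image quality) changes across the 20-80% datasets compared in the study.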

  17. How calibration and reference spectra affect the accuracy of absolute soft X-ray solar irradiance measured by the SDO/EVE/ESP during high solar activity

    Science.gov (United States)

    Didkovsky, Leonid; Wieman, Seth; Woods, Thomas

    2016-10-01

The Extreme ultraviolet Spectrophotometer (ESP), one of the channels of SDO's Extreme ultraviolet Variability Experiment (EVE), measures solar irradiance in several EUV and soft X-ray (SXR) bands isolated using thin-film filters and a transmission diffraction grating, and includes a quad-diode detector positioned at the grating zeroth order to observe in a wavelength band from about 0.1 to 7.0 nm. The quad-diode signal also includes some contribution from shorter wavelengths in the grating's first order, and the ratio of zeroth-order to first-order signal depends on both source geometry and spectral distribution. For example, radiometric calibration of the ESP zeroth order at the NIST SURF BL-2 with a near-parallel beam provides a different zeroth-to-first-order ratio than modeled for solar observations. The relative influence of "uncalibrated" first-order irradiance during solar observations is a function of the solar spectral irradiance and the locations of large active regions or solar flares. We discuss how the "uncalibrated" first-order "solar" component and the use of variable solar reference spectra affect determination of absolute SXR irradiance, which currently may be significantly overestimated during high solar activity.

  18. An effective and optimal quality control approach for green energy manufacturing using design of experiments framework and evolutionary algorithm

    Science.gov (United States)

    Saavedra, Juan Alejandro

Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to the decision-making process of reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology for including rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered to optimize these stations are cost, cycle time, reworkability, and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation does not need to be performed every time the process yield changes. This cost estimation model is then used for the QC strategy optimization process. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors and identified three significant factors; it also showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which allows the evaluation of several solutions in order to obtain feasible optimal solutions.
The GA

  19. Combination therapy Eve and Pac to induce apoptosis in cervical cancer cells by targeting PI3K/AKT/mTOR pathways.

    Science.gov (United States)

    Dong, Pingping; Hao, Fengmei; Dai, Shufeng; Tian, Lin

    2018-02-01

This study aimed to investigate the anti-cervical cancer effects of everolimus (Eve) and paclitaxel (Pac) used alone or in combination. Human cervical cancer HeLa and SiHa cells were divided into four groups: a blank control group (control), an everolimus group (Eve), a paclitaxel group (Pac), and a combined therapy group (Eve + Pac). Cell viability was detected by CCK-8 assay and cell cloning ability by clonogenic assay. Flow cytometry was used to detect cell apoptosis. Meanwhile, the expression of phosphatidylinositol 3-kinase (PI3K), protein kinase B (AKT), mammalian target of rapamycin (mTOR), and their phosphorylated forms was studied by western blot. HeLa and SiHa cell proliferation and cloning ability were significantly inhibited in the drug treatment groups compared with the control group (p Pac combinatorial therapy showed better results than single treatment with Eve or Pac. The combination of Eve and Pac has a synergistic effect on the induction of apoptosis in cervical cancer cells. In addition, the protein ratios in HeLa and SiHa cells treated with the Eve + Pac combination were significantly lower than those in cells treated with either Eve or Pac alone. Our study suggests that Eve + Pac provides a novel therapeutic strategy for cervical cancer.

  20. THE FOREIGN POLICY OF THE BOLSHEVIKS ON THE EVE OF THE PARIS PEACE CONFERENCE OF 1919

    Directory of Open Access Journals (Sweden)

    Elena Nikolaevna Emelyanova

    2017-11-01

Purpose. The article examines the foreign policy activities of the Bolshevik leadership on the eve of the opening of the Paris Peace Conference. The strategy and tactics of the RCP (B) in the autumn-winter of 1918–1919 are analyzed, as well as the attempt to establish relations with the great powers hostile to the RSFSR and the striving of Soviet Russia to take its place in the new Versailles system. The ways of achieving this goal are explored. The methodological basis of the article is formed by the principles of objectivity and historicism, a critical approach to the sources used, and a comprehensive analysis of the problem posed. Results: It is argued that the international situation, the growth of the revolutionary movement in Europe in 1918–1919, and the unification of all left-wing forces around the Soviet state forced the leaders of Britain and the US to send their representative for talks with the Bolsheviks. On the other hand, the Bolshevik leadership sought to reach agreement with the world powers on recognition of the Soviet government, even at the price of temporarily abandoning international goals; the implementation of those tasks was delegated to the Communist International, created in March 1919. The preservation of the Soviet state was placed by the Bolsheviks above the idea of a "world revolution". Scope of application of the results. The results of the work can be used for further research in history and political science, as well as in the teaching of these disciplines at university.

  1. Management of Hypertensive Patients With Multiple Drug Intolerances: A Single-Center Experience of a Novel Treatment Algorithm.

    Science.gov (United States)

    Antoniou, Sotiris; Saxena, Manish; Hamedi, Nadya; de Cates, Catherine; Moghul, Sakib; Lidder, Satnam; Kapil, Vikas; Lobo, Melvin D

    2016-02-01

    Multiple drug intolerance to antihypertensive medications (MDI-HTN) is an overlooked cause of nonadherence. In this study, 55 patients with MDI-HTN were managed with a novel treatment algorithm utilizing sequentially initiated monotherapies or combinations of maximally tolerated doses of fractional tablet doses, liquid formulations, transdermal preparations, and off-label tablet medications. A total of 10% of referred patients had MDI-HTN, resulting in insufficient pharmacotherapy and baseline office blood pressure (OBP) of 178±24/94±15 mm Hg. At baseline, patients were intolerant to 7.6±3.6 antihypertensives; they were receiving 1.4±1.1 medications. After 6 months on the novel MDI-HTN treatment algorithm, both OBP and home blood pressure (HBP) were significantly reduced, with patients receiving 2.0±1.2 medications. At 12 months, OBP was reduced from baseline by 17±5/9±3 mm Hg (P<.01, P<.05) and HBP was reduced by 11±5/12±3 mm Hg (P<.01 for both) while patients were receiving 1.9±1.1 medications. Application of a stratified medicine approach allowed patients to tolerate increased numbers of medications and achieved significant long-term lowering of blood pressure. © 2015 The Authors. The Journal of Clinical Hypertension published by Wiley Periodicals, Inc.

  2. A Novel Flavour Tagging Algorithm using Machine Learning Techniques and a Precision Measurement of the $B^0 - \\overline{B^0}$ Oscillation Frequency at the LHCb Experiment

    CERN Document Server

    Kreplin, Katharina

    This thesis presents a novel flavour tagging algorithm using machine learning techniques and a precision measurement of the $B^0 -\\overline{B^0}$ oscillation frequency $\\Delta m_d$ using semileptonic $B^0$ decays. The LHC Run I data set is used which corresponds to $3 \\textrm{fb}^{-1}$ of data taken by the LHCb experiment at a center-of-mass energy of 7 TeV and 8 TeV. The performance of flavour tagging algorithms, exploiting the $b\\bar{b}$ pair production and the $b$ quark hadronization, is relatively low at the LHC due to the large amount of soft QCD background in inelastic proton-proton collisions. The standard approach is a cut-based selection of particles, whose charges are correlated to the production flavour of the $B$ meson. The novel tagging algorithm classifies the particles using an artificial neural network (ANN). It assigns higher weights to particles, which are likely to be correlated to the $b$ flavour. A second ANN combines the particles with the highest weights to derive the tagging decision. ...

  3. Implementation of trigger algorithms and studies for the measurement of the Higgs boson self-coupling in the ATLAS experiment at the LHC

    CERN Document Server

    Dahlhoff, Andrea

    2006-01-01

The ATLAS experiment at the LHC in Geneva will start in 2007. The first part of the present work describes the implementation of trigger algorithms for the Jet/Energy Processor (JEP), as well as all other required features such as controlling, diagnostics, and read-out. The JEP is one of three processing units of the ATLAS Level-1 Calorimeter Trigger. It identifies and locates jets, and sums total and missing transverse energy information from the trigger data. The Jet/Energy Module (JEM) is the main module of the JEP. The JEM prototype is designed to be functionally identical to the final production module for ATLAS. The thesis presents a description of the architecture, required functionality, and jet and energy summation algorithms of the JEM. Various input test vector patterns were used to check the performance of the complete energy summation algorithm. The test results using two JEM prototypes are presented and discussed. The subject of the second part is a Monte-Carlo study which determines the ...

  4. Algorithm design

    CERN Document Server

    Kleinberg, Jon

    2006-01-01

    Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.

  5. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
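The basic GA concepts the abstract introduces (selection, crossover, mutation) can be sketched in a few lines. Below is a minimal generational GA on bit strings; the one-max fitness function and all parameter values are illustrative choices, not taken from the record.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Minimal generational GA over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:          # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                         # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < mutation_rate:
                        c[i] ^= 1
                children.append(c)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)
    return best

# "One-max" toy problem: fitness is simply the number of 1 bits.
best = genetic_algorithm(sum)
```

Tracking the best individual ever seen acts as a simple form of elitism, so the returned solution never regresses between generations.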

  6. Moralized Hygiene and Nationalized Body: Anti-Cigarette Campaigns in China on the Eve of the 1911 Revolution

    Directory of Open Access Journals (Sweden)

    Wennan Liu

    2013-03-01

Western knowledge about the injurious effects of cigarette smoking on smokers’ health appeared in the late nineteenth century and was shaped by both the Christian temperance movement and scientific developments in chemistry and physiology. Along with the increasing import of cigarettes into China, this new knowledge entered China through translations published at the turn of the twentieth century. It was reinterpreted and modified to dissuade the Chinese people from smoking cigarettes in two anti-cigarette campaigns: one launched by a former American missionary, Edward Thwing, in Tianjin, and a second by progressive social elites in Shanghai on the eve of the 1911 Revolution. By examining the rhetoric and practice of the campaigns, I argue that the discourse of hygiene they deployed moralized the individual habit of cigarette smoking as undermining national strength and endangering the future of the Chinese nation, thus helping to construct the idea of a nationalized body at this highly politically charged moment.

  7. On the eve of Copenhagen: Obama and the environment; A la veille de Copenhague: Obama et l'environnement

    Energy Technology Data Exchange (ETDEWEB)

    Pereon, Y.M.

    2009-07-01

The author proposes a rather detailed overview of the United States' posture with respect to climate change challenges on the eve of the Copenhagen conference. First, he shows how public opinion in the United States is ambivalent and changing. Then, he notes that the arrival of President Obama brought an important change in United States environmental policy: a new team was set up and environmental protection became a priority. The author then reports on the Congressional process for this policy, first through the House of Representatives and then the Senate. He highlights the differences between the agendas of firms and of environmental protection organisations, and between that of the Copenhagen conference and that of the US government

  8. Evaluation of the effects of transmission impairments on perceived video quality by exploiting ReTRiEVED dataset

    Science.gov (United States)

    Paudyal, Pradip; Battisti, Federica; Carli, Marco

    2017-03-01

    The robust design and adaptation of multimedia networks relies on the study of the influence of potential network impairments on the perceived quality. Video quality may be affected by network impairments, such as delay, jitter, packet loss, and bandwidth, and the perceptual impact of these impairments may vary according to the video content. The effects of packet loss and encoding artifacts on the perceived quality have been widely addressed in the literature. However, the relationship between video content and network impairments on the perceived video quality has not been deeply investigated. A detailed analysis of ReTRiEVED test video dataset, designed by considering a set of potential network impairments, is presented, and the effects of transmission impairments on perceived quality are analyzed. Furthermore, the impact on the perceived quality of the video content in the presence of transmission impairments is studied by using video content descriptors. Finally, the performances of well-known quality metrics are tested on the proposed dataset.
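As a minimal example of the kind of full-reference quality metric whose performance such datasets are used to test, a PSNR computation over two frames looks like the following sketch; PSNR is chosen here as a generic illustration, and the four-sample "frames" are invented toy data.

```python
import math

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio (dB) between reference and distorted frames."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, dist)) / len(ref)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [100, 120, 130, 140]    # toy reference pixel values
dist = [101, 118, 133, 139]   # toy distorted pixel values
score = psnr(ref, dist)
```

A practical evaluation would correlate such per-video scores against the subjective ratings collected for the dataset.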

  9. Ambrosio Alberto Fabio, Feuillebois Eve, Zarcone Thierry, Les derviches tourneurs. Doctrine, histoire et pratiques, Cerf, 2006, 210 p.

    Directory of Open Access Journals (Sweden)

    Catherine Mayeur-Jaouen

    2009-05-01

This exemplary little book is more than a vade mecum for the neophyte in the labyrinth of a major current of mystical thought, in the history of a brotherhood as renowned as it is, in the end, poorly known, and in the interpretation of the famous dance of the whirling dervishes. Three authors have combined their varied specialties and talents to produce the first synthesis in French on Rûmî, the Mevleviyye, and the samâ‘ (mystical concert and dance). Eve Feuillebois, a specialist in Persian literature, ...

  10. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  11. The Stuff of Christmas Homemaking: Transforming the House and Church on Christmas Eve in the Bay of Kotor, Montenegro

    Directory of Open Access Journals (Sweden)

    Vesna Vučinić-Nešković

    2016-03-01

The domestic burning of Yule logs on Christmas Eve is an archaic tradition characteristic of the Christian population in the central Balkans. In the fifty years following World War Two, the socialist state suppressed these and other popular religious practices. However, ethnographic research in Serbia and Montenegro in the late 1980s showed that many village households nevertheless preserved their traditional Christmas rituals at home, in contrast to the larger towns, in which they were practically eradicated. Even within micro-regions, such as the Bay of Kotor, there were observable differences between more secluded rural communities, in which the open hearth is still the ritual center of the house (on which the Yule logs are burned as many as seven times during the Christmas season), and the towns, in which only a few households continued the rite (burning small logs in the wood-stove). In the early 1990s, however, a revival of domestic religious celebrations, as well as their extension into the public realm, occurred. This study shows how, on Christmas Eve, houses and churchyards (as well as town squares) are transformed into sacred places. By analyzing the temporal and spatial aspects of this ritual event, the roles that the key actors play, the actions they undertake, and the artifacts they use, I attempt to demonstrate how the space of everyday life is transformed into a sacred home. In the end, the meanings and functions of homemaking are discussed in a way that confronts the classic distinction between private and public ritual environs.

  12. A Markov Chain Monte Carlo Algorithm for Infrasound Atmospheric Sounding: Application to the Humming Roadrunner experiment in New Mexico

    Science.gov (United States)

    Lalande, Jean-Marie; Waxler, Roger; Velea, Doru

    2016-04-01

As infrasonic waves propagate over long ranges through atmospheric ducts, it has been suggested that observations of such waves can be used as a remote sensing technique to update atmospheric properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov Chain Monte Carlo methods. This approach has the advantage of searching for the full probability density function in the parameter space at a lower computational cost than the extensive parameter searches performed by the standard Monte Carlo approach. We apply this inverse method to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization, and yield estimation.
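The MCMC machinery the abstract refers to can be sketched with a random-walk Metropolis sampler. The Gaussian toy "posterior" over an effective sound-speed value below is an invented stand-in for the real infrasound propagation model; only the sampler structure is the point.

```python
import math
import random

def metropolis(log_post, x0, steps=20000, prop_sd=3.0, seed=7):
    """Random-walk Metropolis sampler: returns the chain of visited states."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, prop_sd)   # symmetric proposal
        lpp = log_post(xp)
        # Accept with probability min(1, post(xp)/post(x)).
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy Gaussian posterior over an effective sound speed (m/s); a real
# application would evaluate an infrasound propagation model here.
mu, sigma = 340.0, 5.0
chain = metropolis(lambda c: -0.5 * ((c - mu) / sigma) ** 2, x0=320.0)
burned = chain[5000:]                      # discard burn-in
mean = sum(burned) / len(burned)
```

The retained samples approximate the full probability density function of the parameter, which is exactly what distinguishes this approach from a single best-fit search.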

  13. SU-F-P-45: Clinical Experience with Radiation Dose Reduction of CT Examinations Using Iterative Reconstruction Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Weir, V [Baylor Scott and White Healthcare System, Dallas, TX (United States); Zhang, J [University of Kentucky, Lexington, KY (United States)

    2016-06-15

Purpose: Iterative reconstruction (IR) algorithms have been adopted by medical centers in the past several years. IR has the potential to substantially reduce patient dose while maintaining or improving image quality. This study characterizes dose reductions in clinical settings for CT examinations using IR. Methods: We retrospectively analyzed dose information from patients who underwent abdomen/pelvis CT examinations with and without contrast media in multiple locations of our healthcare system. A total of 743 patients scanned with ASIR on 64-slice GE LightSpeed VCTs at three sites, and 30 patients scanned with SAFIRE on a Siemens 128-slice Definition Flash at one site, were retrieved. For comparison, patient data (n=291) from a GE scanner and patient data (n=61) from two Siemens scanners where filtered back-projection (FBP) was used were collected retrospectively. 30% and 10% ASIR, and SAFIRE Level 2, were used. CTDIvol, dose-length product (DLP), weight, and height were recorded for all patients. Body mass index (BMI) was calculated accordingly. To convert CTDIvol to SSDE, the AP and lateral dimensions at the mid-liver level were measured for each patient. Results: Compared with FBP, 30% ASIR reduced dose by 44.1% (SSDE: 12.19 mGy vs. 21.83 mGy), while 10% ASIR reduced dose by 20.6% (SSDE: 17.32 mGy vs. 21.83 mGy). Use of SAFIRE reduced dose by 61.4% (SSDE: 8.77 mGy vs. 22.7 mGy). The geometric mean for patients scanned with ASIR was larger than for patients scanned with FBP (297.48 mm vs. 284.76 mm). The same trend was observed for the Siemens scanner where SAFIRE was used (geometric mean: 316 mm with SAFIRE vs. 239 mm with FBP). Patient size differences suggest that further dose reduction is possible. Conclusion: Our data confirm that in clinical practice IR can significantly reduce dose to patients undergoing CT examinations while meeting diagnostic requirements for image quality.
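The CTDIvol-to-SSDE conversion mentioned in the Methods can be sketched as follows, assuming the exponential fit of conversion factor versus effective diameter published in AAPM Report 204 for the 32 cm body phantom; the coefficient values are quoted from that report to the best of my recollection, and the patient dimensions in the example are invented.

```python
import math

# Fit coefficients for the 32 cm body phantom (assumption: AAPM Report 204
# values; verify against the report before clinical use).
A, B = 3.704369, 0.03671937

def effective_diameter(ap_mm, lat_mm):
    """Effective diameter (cm): geometric mean of AP and lateral dimensions."""
    return math.sqrt(ap_mm * lat_mm) / 10.0

def ssde(ctdi_vol_mgy, ap_mm, lat_mm):
    """Size-specific dose estimate: CTDIvol scaled by a size-dependent factor."""
    d_eff = effective_diameter(ap_mm, lat_mm)
    return ctdi_vol_mgy * A * math.exp(-B * d_eff)

# Example: mid-liver dimensions 250 mm AP x 320 mm lateral, CTDIvol 15 mGy.
dose = ssde(15.0, 250.0, 320.0)
```

For this average-sized example the conversion factor is above 1, i.e. the SSDE exceeds the phantom-referenced CTDIvol, which is why SSDE rather than CTDIvol is the fairer basis for the comparisons in the abstract.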

  14. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

    Directory of Open Access Journals (Sweden)

    Vassal Aurélien

    2008-01-01

Background: The huge amount of data generated by DNA chips is a powerful basis for classifying various pathologies. However, the constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction on limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new method for class prediction based solely on the detection call from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results: After a call-based data reduction step to filter out non-class-discriminative probe sets, the gene list obtained was reduced to a predictor, with correction for multiple testing, by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group that classified gender with no error in all patient groups except male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group, and (iii) predict sex and light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion: This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with
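The call-based data reduction step described above, keeping only probe sets whose absent/present calls differ between classes, might be sketched like this; the probe names, toy calls, and the `min_diff` threshold are illustrative assumptions, not values from the paper.

```python
def call_discriminative(calls, labels, min_diff=0.8):
    """Keep probe sets whose 'present' rate differs strongly between classes.

    calls  : dict probe_id -> list of 'P'/'A' detection calls (one per sample)
    labels : list of class labels, same order as the call lists
    """
    keep = []
    classes = sorted(set(labels))
    for probe, cs in calls.items():
        rates = []
        for cl in classes:
            sub = [c for c, l in zip(cs, labels) if l == cl]
            rates.append(sum(c == 'P' for c in sub) / len(sub))
        if max(rates) - min(rates) >= min_diff:   # class-discriminative call
            keep.append(probe)
    return keep

calls = {
    "g1": ["P", "P", "P", "A", "A", "A"],   # tracks the class split
    "g2": ["P", "A", "P", "P", "A", "P"],   # no class signal
}
labels = ["B", "B", "B", "MM", "MM", "MM"]
selected = call_discriminative(calls, labels)
```

Because only the binary detection call is used, no intensity normalization across chip generations is needed, which is the property the abstract emphasizes.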

  15. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
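Two of the fundamental algorithms the book names, the Euclidean algorithm and the sieve of Eratosthenes, can be sketched compactly (here in Python rather than the book's C++):

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def sieve(n):
    """Sieve of Eratosthenes: all primes <= n."""
    is_prime = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross off multiples of p, starting at p*p (smaller multiples
            # were already crossed off by smaller primes).
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [p for p, ok in enumerate(is_prime) if ok]
```

Both run in time that is easy to analyze, which is why they serve well as first examples of the design-and-analysis approach the book takes.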

  16. An Algorithm for the Numerical Solution of the Pseudo Compressible Navier-stokes Equations Based on the Experimenting Fields Approach

    KAUST Repository

    Salama, Amgad

    2015-06-01

In this work, the experimenting fields approach is applied to the numerical solution of the Navier-Stokes equations for incompressible viscous flow, where the solution is sought for both the pressure and velocity fields at the same time. The correct velocity and pressure fields satisfy the governing equations and the boundary conditions. In this technique, a set of predefined fields is introduced into the governing equations and the residues are calculated. The flow according to these fields will not satisfy the governing equations and the boundary conditions; however, the residues are used to construct the matrix of coefficients. Although constructing the global matrix of coefficients seems trivial in this setup, in other setups it can be quite involved. This technique separates the solver routine from the physics routines and therefore simplifies the coding and debugging procedures. We compare with a few examples that demonstrate the capability of this technique.
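The assembly idea described here, probing a black-box residual routine with predefined fields to recover the matrix of coefficients, can be sketched on a 1D Poisson problem standing in for the Navier-Stokes system; the discretization and test problem are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def physics_residual(u, h, f):
    """Black-box physics routine: residual of -u'' = f with u(0)=u(1)=0
    on a uniform grid (interior nodes only, central differences)."""
    n = len(u)
    r = np.empty(n)
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = (-left + 2.0 * u[i] - right) / h**2 - f[i]
    return r

def assemble_by_experimenting_fields(residual, n):
    """Probe the residual with predefined fields to recover A and b
    such that residual(u) = A @ u - b, without knowing the physics."""
    b = -residual(np.zeros(n))       # the zero field isolates -b
    A = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0                   # a unit field probes column j
        A[:, j] = residual(e) + b
    return A, b

n = 31
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)     # exact solution is u = sin(pi x)
A, b = assemble_by_experimenting_fields(lambda u: physics_residual(u, h, f), n)
u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The solver side (`assemble_by_experimenting_fields` plus `np.linalg.solve`) never inspects the physics routine, which is exactly the separation of solver and physics code the abstract highlights.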

  17. Experiences With an Optimal Estimation Algorithm for Surface and Atmospheric Parameter Retrieval From Passive Microwave Data in the Arctic

    DEFF Research Database (Denmark)

    Scarlat, Raul Cristian; Heygster, Georg; Pedersen, Leif Toudal

    2017-01-01

We present experiences in using an integrated retrieval method for atmospheric and surface parameters in the Arctic using passive microwave data from the AMSR-E radiometer. The core of the method is a forward model which can ingest bulk data for seven geophysical parameters to reproduce the brightness temperatures observed by a passive microwave radiometer. The retrieval method inverts the forward model and produces ensembles of the seven parameters: wind speed, integrated water vapor, liquid water path, sea and ice temperature, sea ice concentration, and multiyear ice fraction. The method is compared with the Arctic Systems Reanalysis model data, as well as columnar water vapor retrieved from satellite microwave sounders and the Remote Sensing Systems AMSR-E ocean retrieval product, in order to determine the feasibility of using the same setup over pure surfaces with 100% and 0% sea ice cover...

  18. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  19. The Soil Moisture Active Passive Mission (SMAP) Science Data Products: Results of Testing with Field Experiment and Algorithm Testbed Simulation Environment Data

    Science.gov (United States)

    Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.

    2010-01-01

Talk outline: 1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications; 2. Data products and latencies; 3. Algorithm highlights; 4. SMAP Algorithm Testbed; 5. SMAP Working Groups and community engagement.

  20. Poster - Thurs Eve-06: Maximizing eclipse IMRT dose accuracy by adjusting the dosimetric leaf gap parameter.

    Science.gov (United States)

    Poffenbarger, B; Audet, C

    2008-07-01

The dosimetric leaf gap (DLG) is a parameter used by Eclipse to model the rounded leaf ends of Varian MLCs. The DLGs were determined for the Millennium (M120) and High-Definition (HD120) model MLCs and taken as the difference between measured (0.6 mm diode, IBA) and nominal MLC-defined profile FWHM values. Configuring the Eclipse pencil beam algorithm with the measured DLG gave poor agreement between measured and calculated IMRT dose distributions for the HD120 but not the M120. Agreement was optimized by adjusting the DLG for the HD120; 0.3 mm changes in DLG were enough to cause significant variations in field dose agreement. Optimal DLG values of 0.04 cm and 0.05 cm were found for the 6 MV HD120 and 10 MV HD120, respectively, and 0.135 cm and 0.175 cm for the 6 MV M120 and 18 MV M120, respectively. Agreement between measured and calculated dose distributions worsened for the AAA algorithm, indicating separate DLG values may be required. A leaf calibration software upgrade also reduced agreement by changing the physical leaf position for a given location value. The change was detected using film and the picket-fence MLC pattern, which places the two banks of opposing leaves at the same position but at different times. The DLG value can be adjusted from its measured physical value to improve the dosimetric accuracy of Eclipse IMRT plans and compensate for the effects of the treatment planning algorithm and varying leaf calibrations. Since leaf calibrations are variable, it is important to define the dosimetric leaf gap for each accelerator and clinic. © 2008 American Association of Physicists in Medicine.

  1. “Manufactured By The Sun”: Eve Langley’s The Pea-Pickers on The Move

    Directory of Open Access Journals (Sweden)

    Nicholas Birns

    2016-06-01

http://dx.doi.org/10.5007/2175-8026.2016v69n2p85 Eve Langley's The Pea-Pickers is often seen as a quaint artifact of a now-vanished Australia. This paper seeks to rescue the contemporary relevance of this novel of two young women who go into the rural areas of Gippsland to pick peas, showing its pioneering attention to transgender concerns, the polyphonic panoply of its style and soundscape, and its portrayal of a settler culture not anchored in a perilous identity but dynamically on the move. As so often in settler-colony literature, though, rigidities on the issue of race, particularly the portrayal of the Muslim migrant Akbarah Khan, mar the canvas and make Langley's novel as emblematic of the constitutive problems of Australian literary history as of its artistic achievements. Just as Langley's gender variance and personal nonconformity made her an outlier in the Australia and New Zealand she lived in, so is her contribution to Australian literature an unfinished project.

  2. Children’s Day-Care Centre (EVE) and School kicked off the school year 2016-2017

    CERN Multimedia

    Staff Association

    2016-01-01

It has been 54 years, ever since the Nursery School was founded in March 1961, that the Staff Association, together with the teachers and the managerial and administrative staff, has welcomed your children at the start of the school year. On Tuesday, 30 August 2016, the Children’s Day-Care Centre (EVE) and School opened its doors again for children between four months and six years old. The start of the school year was carried out gradually and in small groups to allow quality interaction between children, professionals and parents. This year, our structure will accommodate about 130 children divided between the nursery, the kindergarten and the school. Throughout the school year, the children will work on the theme of colours, which will be the common thread linking all our activities. Our team comprises 38 people: the headmistress, the deputy headmistress, 2 secretaries, 13 educators, 4 teachers, 11 teaching assistants, 2 nursery assistants and 4 canteen workers. The team is delighted...

  3. Poster — Thur Eve — 14: Improving Tissue Segmentation for Monte Carlo Dose Calculation using DECT

    Energy Technology Data Exchange (ETDEWEB)

    Di Salvio, A.; Bedwani, S.; Carrier, J-F. [Centre hospitalier de l' Université de Montréal (Canada); Bouchard, H. [National Physics Laboratory, Teddington (United Kingdom)

    2014-08-15

    Purpose: To improve Monte Carlo dose calculation accuracy through a new tissue segmentation technique with dual energy CT (DECT). Methods: Electron density (ED) and effective atomic number (EAN) can be extracted directly from DECT data with a stoichiometric calibration method. Images are acquired with Monte Carlo CT projections using the user code egs-cbct and reconstructed using an FDK backprojection algorithm. Calibration is performed using projections of a numerical RMI phantom. A weighted parameter algorithm then uses both EAN and ED to assign materials to voxels from DECT simulated images. This new method is compared to a standard tissue characterization from single energy CT (SECT) data using a segmented calibrated Hounsfield unit (HU) to ED curve. Both methods are compared to the reference numerical head phantom. Monte Carlo simulations on uniform phantoms of different tissues using dosxyz-nrc show discrepancies in depth-dose distributions. Results: Both SECT and DECT segmentation methods show similar performance assigning soft tissues. Performance is however improved with DECT in regions with higher density, such as bones, where it assigns materials correctly 8% more often than segmentation with SECT, considering the same set of tissues and simulated clinical CT images, i.e. including noise and reconstruction artifacts. Furthermore, Monte Carlo results indicate that kV photon beam depth-dose distributions can double between two tissues of density higher than muscle. Conclusions: A direct acquisition of ED and the added information of EAN with DECT data improves tissue segmentation and increases the accuracy of Monte Carlo dose calculation in kV photon beams.
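
The weighted-parameter assignment described above can be sketched in a few lines. This is an illustrative stand-in, not the paper's calibration: the tissue reference values and the weight `w` are hypothetical, and the real method operates on calibrated DECT images rather than a lookup of three tissues.

```python
# Illustrative sketch of a weighted-parameter tissue assignment from
# DECT-derived electron density (ED) and effective atomic number (EAN).
# The tissue reference values and the weight w are hypothetical.

# Approximate reference (ED, EAN) pairs for a few tissues.
TISSUES = {
    "lung":   (0.26, 7.6),
    "muscle": (1.04, 7.6),
    "bone":   (1.33, 11.6),
}

def assign_material(ed, ean, w=0.5):
    """Pick the tissue minimizing a weighted relative distance in (ED, EAN) space."""
    def cost(ref):
        ref_ed, ref_ean = ref
        return w * abs(ed - ref_ed) / ref_ed + (1 - w) * abs(ean - ref_ean) / ref_ean
    return min(TISSUES, key=lambda name: cost(TISSUES[name]))
```

Using both quantities is what distinguishes the DECT approach from a single HU-to-ED curve: two voxels with similar ED but different EAN (e.g. soft tissue vs. low-density bone) can still be separated.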

  4. Poster — Thur Eve — 71: A 4D Multimodal Lung Phantom for Regmentation Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Markel, D [McGill University, Physics, Montreal QC (Canada); Levesque, I R [McGill University, Oncology, Montreal QC (Canada); Research Institute of the McGill University Health Centre, Montreal, QC (Canada); El Naqa, I [McGill University, Physics, Montreal QC (Canada); McGill University, Oncology, Montreal QC (Canada)

    2014-08-15

    Segmentation and registration of medical imaging data are two processes that can be integrated (a process termed regmentation) to iteratively reinforce each other, potentially improving efficiency and overall accuracy. A significant challenge is presented when attempting to validate the joint process particularly with regards to minimizing geometric uncertainties associated with the ground truth while maintaining anatomical realism. This work demonstrates a 4D MRI, PET, and CT compatible tissue phantom with a known ground truth for evaluating registration and segmentation accuracy. The phantom consists of a preserved swine lung connected to an air pump via a PVC tube for inflation. Mock tumors were constructed from sea sponges contained within two vacuum-sealed compartments with catheters running into each one for injection of radiotracer solution. The phantom was scanned using a GE Discovery-ST PET/CT scanner and a 0.23T Phillips MRI, and resulted in anatomically realistic images. A bifurcation tracking algorithm was implemented to provide a ground truth for evaluating registration accuracy. This algorithm was validated using known deformations of up to 7.8 cm using a separate CT scan of a human thorax. Using the known deformation vectors to compare against, 76 bifurcation points were selected. The tracking accuracy was found to have maximum mean errors of −0.94, 0.79 and −0.57 voxels in the left-right, anterior-posterior and inferior-superior directions, respectively. A pneumatic control system is under development to match the respiratory profile of the lungs to a breathing trace from an individual patient.
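
The reported accuracy metric, the per-axis mean error in voxels between tracked bifurcation points and the known deformation, can be sketched as follows; the point data in the test are made up for illustration.

```python
# Minimal sketch of the accuracy metric above: per-axis mean signed error
# (in voxels) between tracked bifurcation points and ground-truth positions.

def mean_axis_errors(tracked, truth):
    """Return mean signed error per axis (LR, AP, SI) over all point pairs."""
    n = len(tracked)
    sums = [0.0, 0.0, 0.0]
    for t, g in zip(tracked, truth):
        for axis in range(3):
            sums[axis] += t[axis] - g[axis]
    return tuple(s / n for s in sums)
```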

  5. Black Edens, country Eves: Listening, performance, and black queer longing in country music.

    Science.gov (United States)

    Royster, Francesca T

    2017-07-03

    This article explores Black queer country music listening, performance, and fandom as a source of pleasure, nostalgia, and longing for Black listeners. Country music can be a space for alliance and community, as well as a way of accessing sometimes repressed cultural and personal histories of violence: lynching and other forms of racial terror, gender surveillance and disciplining, and continued racial and economic segregation. For many Black country music listeners and performers, the experience of being a closeted fan also fosters an experience of ideological hailing, as well as queer world-making. Royster suggests that through Black queer country music fandom and performance, fans construct risky and soulful identities. The article uses Tina Turner's solo album, Tina Turns the Country On! (1974) as an example of country music's power as a tool for resistance to racial, sexual, and class disciplining.

  6. Poster - Thur Eve - 25: In vivo dosimetric verification of intensity-modulated radiation therapy.

    Science.gov (United States)

    Chytyk-Praznik, K; Van Uytven, E; Van Beek, T; McCurdy, Bmc

    2012-07-01

    Dosimetric verification of patient treatment plans has become increasingly important due to the widespread use of complicated delivery techniques. IMRT and VMAT treatments are typically verified prior to start of the patient's course of treatment, using a point dose and/or a film measurement. Pre-treatment verification will not detect patient or machine-related errors; therefore, in vivo dosimetric verification is the only way to determine if the patient's treatment was delivered correctly. Portal images were acquired throughout the course of five prostate and six head-and-neck patient IMRT treatments. The corresponding predicted images were calculated using a previously developed portal dose image prediction algorithm, which combines a versatile fluence model with a patient scatter and EPID dose prediction model. The prostate patient image agreement was found to vary day-to-day due to rectal gas pockets and the effect of adjustable support rails on the patient couch. The head-and-neck patient images were observed to be more consistent daily, but an increased measured dose was evident at the periphery of the patient, likely due to patient weight loss. The majority of the fields agreed within 3% and 3 mm for greater than 90% of the pixels, as established by the χ-comparison. This work demonstrates the changes in patient anatomy that are detectable with the portal dose image prediction model. Prior to clinical implementation, the effect of the couch must be incorporated into the model, the image acquisition must be automatically scheduled and routine EPID QA must be undertaken to ensure the collection of high-quality EPID images. © 2012 American Association of Physicists in Medicine.
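
The 3%/3 mm agreement criterion can be illustrated with a simplified 1-D pass-rate check. This is a plain dose-difference/distance-to-agreement test in the spirit of the χ-comparison used above, not the exact χ formulation, and the tolerances are the ones quoted in the abstract.

```python
# Simplified 1-D illustration of a 3%/3 mm agreement test between measured
# and predicted dose profiles (a pass/fail DTA check, not the exact chi index).

def pass_rate(measured, predicted, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Fraction of points whose dose agrees within 3% of the max predicted
    dose with some predicted point no farther than 3 mm away."""
    ref = max(predicted)
    reach = int(dta_mm / spacing_mm)  # neighbours within the DTA search radius
    passed = 0
    for i, m in enumerate(measured):
        lo, hi = max(0, i - reach), min(len(predicted), i + reach + 1)
        if any(abs(m - predicted[j]) <= dose_tol * ref for j in range(lo, hi)):
            passed += 1
    return passed / len(measured)
```

A field "agreeing for greater than 90% of pixels" corresponds to `pass_rate(...) > 0.9` under such a criterion.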

  7. Parallel algorithms for unconstrained optimizations by multisplitting

    Energy Technology Data Exchange (ETDEWEB)

    He, Qing [Arizona State Univ., Tempe, AZ (United States)

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than one without splitting.
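
The multisplitting idea can be caricatured as: split the variables into blocks, minimize each block with the others held fixed (these subproblems are the part that could run in parallel), then recombine the block results and iterate. The sketch below, on a separable quadratic with a known minimizer, is an illustration of that structure only; the paper's actual splitting and combination rules are not reproduced.

```python
# Toy sketch of multisplitting for unconstrained minimization of the
# separable quadratic f(x) = sum((x_i - target_i)^2), minimized at x = target.

def block_minimize(x, block, target):
    """Exactly minimize the separable quadratic over one block of indices."""
    y = list(x)
    for i in block:
        y[i] = target[i]
    return y

def multisplit_minimize(x0, blocks, target, iters=3):
    x = list(x0)
    for _ in range(iters):
        # each block solve is independent, hence parallelizable
        results = [block_minimize(x, b, target) for b in blocks]
        # combine: take each coordinate from the block solve that updated it
        for b, y in zip(blocks, results):
            for i in b:
                x[i] = y[i]
    return x
```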

  8. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristics and near-optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  9. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
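
The kinematic placement of the center of rotation can be illustrated with textbook bicycle-model geometry. This is a hedged sketch of the geometric idea only, not the paper's controller: for a front-steered (TWS) model the center lies on the rear-axle line, giving a steer angle atan(L/R) for wheelbase L and turning radius R; for a symmetric 4WS setup with opposite front/rear angles the center moves to the wheelbase midline, giving atan(L/(2R)) at each axle.

```python
import math

# Kinematic steering-angle sketch: place the vehicle's kinematic center of
# rotation at a desired turning radius R (bicycle model, low-speed geometry).

def tws_steer(wheelbase, radius):
    """Front steer angle for a front-steered vehicle (center on rear axle line)."""
    return math.atan(wheelbase / radius)

def fourws_steer(wheelbase, radius):
    """(front, rear) angles for symmetric 4WS; center on the wheelbase midline."""
    delta = math.atan(wheelbase / (2.0 * radius))
    return delta, -delta
```

At high speed the true (dynamic) center drifts away from this kinematic point, which is exactly the error the closed-loop feedback described above corrects.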

  10. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel

  11. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
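
A quote-volatility-style measure of the kind described, the tendency of best quotes to oscillate rapidly, can be sketched as below. The abstract does not give the authors' exact ratio definitions, so this is an assumed, simplified stand-in: the fraction of consecutive best-quote moves that reverse direction.

```python
# Hypothetical illustration of a quote-volatility-style measure: the share of
# consecutive best-quote moves that flip sign. High values suggest rapid
# oscillation of the best bid/ask, as described for algorithmic activity.

def oscillation_ratio(quotes):
    """Share of consecutive non-zero quote moves that reverse direction."""
    moves = [b - a for a, b in zip(quotes, quotes[1:]) if b != a]
    if len(moves) < 2:
        return 0.0
    flips = sum(1 for m1, m2 in zip(moves, moves[1:]) if m1 * m2 < 0)
    return flips / (len(moves) - 1)
```

A steadily trending quote series scores 0, while a bid that flickers up and down on every update scores near 1.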

  12. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  13. Evolutionary pattern search algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
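
The step-size adaptation idea behind EPSAs, expand the step after a successful move and contract it after a failure so that the step size shrinks toward zero near a stationary point, can be caricatured with a deterministic 1-D pattern search. This is an illustration of the adaptation rule only, assuming a simple expand-by-2/contract-by-half schedule; it is not the EPSA mutation operator itself, which is stochastic.

```python
# Deterministic 1-D pattern search with EPSA-style step adaptation on a
# smooth function: expand the step after success, halve it after failure.

def pattern_search_1d(f, x0, step=1.0, iters=60):
    x, fx = x0, f(x0)
    for _ in range(iters):
        trials = [(f(x + step), x + step), (f(x - step), x - step)]
        fbest, xbest = min(trials)
        if fbest < fx:
            x, fx = xbest, fbest
            step *= 2.0   # success: expand the step
        else:
            step *= 0.5   # failure: contract; step -> 0 near a stationary point
    return x, step
```

The vanishing step size is what makes a distance-based stopping rule of the kind described above possible.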

  14. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r² N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  15. Heuristic Algorithms for Solving Bounded Diameter Minimum Spanning Tree Problem and Its Application to Genetic Algorithm Development

    OpenAIRE

    Nghia, Nguyen Duc; Binh, Huynh Thi Thanh

    2008-01-01

    We introduce a heuristic algorithm, called CBRC, for solving the BDMST problem. Experiments show that CBRC gives better results than other known heuristic algorithms for solving the BDMST problem on Euclidean instances. The best solution found by a genetic algorithm that uses the best heuristic algorithm, or only one heuristic algorithm, to initialize the population is not better than the best solution found by the genetic algorithm which uses mixed heuristic algorithms (randomized heurist...

  16. Clinical experience with Thera DR rate-drop response pacing algorithm in carotid sinus syndrome and vasovagal syncope. The International Rate-Drop Investigators Group.

    Science.gov (United States)

    Benditt, D G; Sutton, R; Gammage, M D; Markowitz, T; Gorski, J; Nygaard, G A; Fetter, J

    1997-03-01

    This study examined the effectiveness of cardiac pacing using the Thera DR rate-drop response algorithm for prevention of recurrent symptoms in patients with carotid sinus syndrome (CSS) or vasovagal syncope. The algorithm comprises both diagnostic and treatment elements. The diagnostic element consists of a programmable "window" used to identify heart rate changes compatible with an evolving neurally mediated syncopal episode. The treatment arm consists of pacing at a selectable rate and for a programmable duration. Forty-three patients (mean age 53 +/- 20.4 years) with CSS alone (n = 8), CSS in conjunction with vasovagal syncope (n = 4), or vasovagal syncope alone (n = 31) were included. Thirty-nine had recurrent syncope, while the remaining four reported multiple presyncopal events. Prior to pacing, patients had experienced 40 +/- 152 syncopal episodes (range, 1 to approximately 1,000) over the preceding 56 +/- 84.5 months. Postpacing follow-up duration was 204 +/- 172 days. Three patients have been lost to follow-up and in one patient the algorithm was disabled. Among the remaining 39 individuals, 31 (80%) indicated absence or diminished frequency of symptoms, or less severe symptoms. Twenty-three patients (23/39, or 59%) were asymptomatic with respect to syncope or presyncope. Sixteen patients had symptom recurrences. Of these, seven experienced syncope (7/39, or 18%) and 9 (29%) had presyncope; the majority of patients with recurrences (6/7 syncope and 7/9 presyncope) were individuals with a history of vasovagal syncope. Consequently, although symptoms were observed during postpacing follow-up, they appeared to be of reduced frequency and severity. Thus, our findings suggest that a transient period of high-rate pacing triggered by the Thera DR rate-drop response algorithm was beneficial in a large proportion of highly symptomatic patients with CSS or vasovagal syncope.

  17. Development and calibration of a same side kaon tagging algorithm and measurement of the B{sup 0}{sub s}- anti B{sup 0}{sub s} oscillation frequency Δm{sub s} at the LHCb experiment

    Energy Technology Data Exchange (ETDEWEB)

    Krocker, Georg Alexander

    2013-11-20

    This thesis presents a so-called same side kaon tagging algorithm, which is used in the determination of the production flavour of B{sub s}{sup 0} mesons. The measurement of the B{sub s}{sup 0} - anti B{sub s}{sup 0} oscillation frequency Δm{sub s} in the decay B{sub s}{sup 0} → D{sub s}{sup -}π{sup +} is used to optimise and calibrate this algorithm. The presented studies are performed on a data set corresponding to an integrated luminosity of L=1.0 fb{sup -1} collected by the LHCb experiment in 2011. The same side kaon tagging algorithm, based on multivariate classifiers, is developed, calibrated and tested using a sample of about 26,000 reconstructed B{sub s}{sup 0} → D{sub s}{sup -}π{sup +} decays. An effective tagging power of ε{sub eff}=ε{sub tag}(1-2ω){sup 2}=2.42±0.39% is achieved. Combining the same side kaon tagging algorithm with additional flavour tagging algorithms results in a combined tagging performance of ε{sub eff} = ε{sub tag}(1 - 2ω){sup 2} = 5.13 ± 0.54%. With this combination, the B{sub s}{sup 0}- anti B{sub s}{sup 0} oscillation frequency is measured to be Δm{sub s}=17.745±0.022(stat.)±0.006(syst.) ps{sup -1}, which is the most precise measurement to date.

  18. An algorithm for reduct cardinality minimization

    KAUST Repository

    AbouEisha, Hassan M.

    2013-12-01

    This paper is devoted to a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.

  19. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  20. HIV point of care diagnosis: preventing misdiagnosis experience from a pilot of rapid test algorithm implementation in selected communes in Vietnam.

    Science.gov (United States)

    Nguyen, Van Thi Thuy; Best, Susan; Pham, Hong Thang; Troung, Thi Xuan Lien; Hoang, Thi Thanh Ha; Wilson, Kim; Ngo, Thi Hong Hanh; Chien, Xuan; Lai, Kim Anh; Bui, Duc Duong; Kato, Masaya

    2017-08-29

    In Vietnam, HIV testing services had been available only at provincial and district health facilities, but not at the primary health facilities. Consequently, access to HIV testing services had been limited especially in rural areas. In 2012, Vietnam piloted decentralization and integration of HIV services at commune health stations (CHSs). As a part of this pilot, a three-rapid test algorithm was introduced at CHSs. The objective of this study was to assess the performance of a three-rapid test algorithm and the implementation of quality assurance measures to prevent misdiagnosis, at primary health facilities. The three-rapid test algorithm (Determine HIV-1/2, followed by ACON HIV 1/2 and DoubleCheckGold HIV 1&2 in parallel) was piloted at CHSs from August 2012 to December 2013. Commune health staff were trained to perform HIV testing. Specimens from CHSs were sent to the provincial confirmatory laboratory (PCL) for confirmatory and validation testing. Quality assurance measures were undertaken including training, competency assessment, field technical assistance, supervision and monitoring and external quality assessment (EQA). Data on HIV testing were collected from the testing logbooks at commune and provincial facilities. Descriptive analysis was conducted. Sensitivity and specificity of the rapid testing algorithm were calculated. A total of 1,373 people received HIV testing and counselling (HTC) at CHSs. Eighty people were diagnosed with HIV infection (5.8%). The 755/1244 specimens reported as HIV negative at the CHS were sent to PCL and confirmed as negative, and all 80 specimens reported as HIV positive at CHS were confirmed as positive at the PCL. Forty-nine specimens that were reactive with Determine but negative with ACON and DoubleCheckGold at the CHSs were confirmed negative at the PCL. The results show this rapid test algorithm to be 100% sensitive and 100% specific. Of 21 CHSs that received two rounds of EQA panels, 20 CHSs submitted accurate
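
The reported 100% sensitivity and 100% specificity follow directly from the confusion-matrix counts in the abstract: all 80 CHS-positive specimens were confirmed positive (no false negatives among them), and all 755 CHS-negative specimens sent for validation plus the 49 discordant specimens resolved as negative were confirmed negative (no false positives). A worked sketch of that arithmetic:

```python
# Confusion-matrix arithmetic for the pilot results quoted above.

def sensitivity(tp, fn):
    """TP / (TP + FN): proportion of true positives correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """TN / (TN + FP): proportion of true negatives correctly detected."""
    return tn / (tn + fp)

tp, fn = 80, 0          # all CHS positives confirmed at the PCL
tn, fp = 755 + 49, 0    # CHS negatives plus discordant-resolved, all confirmed
```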

  1. Automatic Algorithm Selection for Complex Simulation Problems

    CERN Document Server

    Ewald, Roland

    2012-01-01

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge on simulation. Roland Ewald analyzes and discusses existing approaches to solve the algorithm selection problem in the context of simulation. He introduces a framework for automatic simulation algorithm selection and

  2. Disainikaart : Island / Eve Arpo

    Index Scriptorium Estoniae

    Arpo, Eve

    2009-01-01

    On Icelandic designers, design firms, and architects: Katrin Olina, Margrét Hardardóttir and Steve Christer (Studio Granda), Guđjón Samúelsson (1887-1950), Ingibjörg Hanna Bjarnadottir, Hrafnkell Birgisson, Studio Bility, and others.

  3. ISH...? ISH! / Eve Osa

    Index Scriptorium Estoniae

    Osa, Eve

    1999-01-01

    On the 20th building technology fair ISH, held in Frankfurt am Main on 23-27 March 1999, and the bathroom and toilet furnishings exhibited there. 2,243 firms from 42 countries took part. 22 illustrations.

  4. Disainikaart : Portugal / Eve Arpo

    Index Scriptorium Estoniae

    Arpo, Eve

    2009-01-01

    On Portuguese architects, designers, and design firms. The work of the architects Eduardo Souto de Moura (b. 1952) and Alvaro Joaquim de Meio Siza Vieira (b. 1933). On the firms Corque, Mytto, Pedroso & Osório, TemaHome, Mood, Munna, Mambo, and Delightfull.

  5. Disainikaart : Itaalia / Eve Arpo

    Index Scriptorium Estoniae

    Arpo, Eve

    2009-01-01

    On Italian designers, design firms, and architects: Renzo Piano (b. 1937), Aldo Rossi (1931-1997), Alessandro Mendini (b. 1931), Giorgio de Chirico (1888-1978), Federico Fellini (1920-1993), Gaetano Pesce, Ferrari, Alessi, Dolce & Gabbana, Artemide.

  6. Disainikaart : Hispaania / Eve Arpo

    Index Scriptorium Estoniae

    Arpo, Eve

    2009-01-01

    Spain's best-known architects, designers, and artists: Antoni Gaudí (1852-1926), Pablo Picasso (1881-1973), Félix Candela (1910-1997), Marti Guixe, Santiago Calatrava, Herme Ciscar, Monica Garcia, Roger Arquer, Patricia Urquiola, Jaime Hayón.

  7. Posvjashenije Eve / Mark Levin

    Index Scriptorium Estoniae

    Levin, Mark

    2000-01-01

    On two productions: the Russian Drama Theatre's solo performance of Dario Fo and F. Rame's "I'm Waiting for You, Darling" with Ljubov Agapova, directed by Irina Tomingas, and the Moscow Vakhtangov Theatre's performance of E. Schmitt's "Dedication to Eve", directed by Sergei Jashin.

  8. Chicago arhitektuuribiennaal / Eve Komp

    Index Scriptorium Estoniae

    Komp, Eve, 1982-

    2015-01-01

    On the first Chicago Architecture Biennial, titled "The State of the Art of Architecture", held from 3 October 2015 to 3 January 2016. The biennial's aim was to bring an international architecture event to the local architecture community.

  9. Trepp = Stair / Eve Arpo

    Index Scriptorium Estoniae

    Arpo, Eve

    2006-01-01

    On the Vallimäe staircase in Rakvere. A children's conversation on the stairs. Project: Kavakava. Authors: Heidi Urb, Siiri Vallner. Stair formula: Taavi Vallner. Engineer: Marika Stokkeby. Designed in 2004, completed in 2005. Ill.: drawing, 7 colour photos.

  10. Algorithm Visualization in Teaching Practice

    Science.gov (United States)

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment is also presented, which measured the efficiency of AV and its impact on abstract thinking. According to the results, students who learned with AV performed better in the experiment.

  11. Short-term volcano-tectonic earthquake forecasts based on a moving mean recurrence time algorithm: the El Hierro seismo-volcanic crisis experience

    Science.gov (United States)

    García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón

    2016-05-01

    Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
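
Two of the ingredients named above, the Gutenberg-Richter distribution parameter and a mean recurrence time for large events, can be sketched with standard formulas. This is a hedged illustration only: the b-value below uses Aki's maximum-likelihood estimator, and the recurrence time follows from the Gutenberg-Richter rate scaling; how the MRT algorithm turns fluctuations of these quantities into probability windows is not reproduced.

```python
import math

# Standard Gutenberg-Richter ingredients: Aki's maximum-likelihood b-value
# and a mean recurrence time for events at or above a target magnitude.

def b_value(mags, m_min):
    """Aki's ML estimate: b = log10(e) / (mean(M) - Mc) for magnitudes >= Mc."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

def mean_recurrence_time(n_events, duration_days, b, m_min, m_target):
    """Expected days between events >= m_target, given the rate above m_min."""
    rate_min = n_events / duration_days
    rate_target = rate_min * 10 ** (-b * (m_target - m_min))
    return 1.0 / rate_target
```

For example, with b = 1, events of magnitude 4 are expected a hundred times less often than events of magnitude 2, which is what makes a shrinking MRT a useful short-term warning signal.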

  12. The Distributed Genetic Algorithm Revisited

    OpenAIRE

    Belding, Theodore C.

    1995-01-01

    This paper extends previous work done by Tanese on the distributed genetic algorithm (DGA). Tanese found that the DGA outperformed the canonical serial genetic algorithm (CGA) on a class of difficult, randomly-generated Walsh polynomials. This left open the question of whether the DGA would have similar success on functions that were more amenable to optimization by the CGA. In this work, experiments were done to compare the DGA's performance on the Royal Road class of fitness functions to th...

  13. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  14. Wolf Pack Algorithm for Unconstrained Global Optimization

    Directory of Open Access Journals (Sweden)

    Hu-Sheng Wu

    2014-01-01

    Full Text Available The wolf pack unites and cooperates closely to hunt for prey on the Tibetan Plateau, showing wonderful skills and amazing strategies. Inspired by their prey-hunting behaviors and distribution mode, we abstracted three intelligent behaviors, scouting, calling, and besieging, and two intelligent rules, the winner-take-all generation rule for the lead wolf and the stronger-survive renewing rule for the wolf pack. We then propose a new heuristic swarm intelligence method, named the wolf pack algorithm (WPA). Experiments are conducted on a suite of benchmark functions with different characteristics, unimodal/multimodal and separable/nonseparable, and the impact of several distance measurements and parameters on WPA is discussed. Moreover, comparative simulation experiments with five other typical intelligent algorithms, the genetic algorithm, particle swarm optimization, the artificial fish swarm algorithm, the artificial bee colony algorithm, and the firefly algorithm, show that WPA has better convergence and robustness, especially for high-dimensional functions.
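
The three behaviors and two rules can be caricatured in a compact loop. This is a heavily simplified sketch on a sphere benchmark, with made-up parameter choices; it keeps the named structure (lead-wolf selection, movement toward the caller, random besieging steps, renewal of the weakest wolves) but is not the paper's WPA.

```python
import random

# Highly simplified wolf-pack-style search on the sphere function.

def sphere(x):
    return sum(v * v for v in x)

def wolf_pack(dim=2, n_wolves=20, iters=100, seed=1):
    rng = random.Random(seed)
    pack = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_wolves)]
    for _ in range(iters):
        pack.sort(key=sphere)
        lead = pack[0]                      # winner-take-all lead wolf
        for wolf in pack[1:]:
            for d in range(dim):
                # calling: move toward the lead wolf; besieging: random step
                wolf[d] += 0.5 * (lead[d] - wolf[d]) + rng.uniform(-0.1, 0.1)
        # stronger-survive renewing: replace the worst wolves with new scouts
        pack.sort(key=sphere)
        for wolf in pack[-2:]:
            for d in range(dim):
                wolf[d] = rng.uniform(-5, 5)
    return min(pack, key=sphere)
```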

  15. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    Directory of Open Access Journals (Sweden)

    Zhiwei Qiu

    Full Text Available This paper proposes a coherence-optimized SAR imaging algorithm built on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm achieves the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications.

  16. Low-tube-voltage, high-tube-current multidetector abdominal CT: improved image quality and decreased radiation dose with adaptive statistical iterative reconstruction algorithm--initial clinical experience.

    Science.gov (United States)

    Marin, Daniele; Nelson, Rendon C; Schindera, Sebastian T; Richard, Samuel; Youngblood, Richard S; Yoshizumi, Terry T; Samei, Ehsan

    2010-01-01

    To investigate whether an adaptive statistical iterative reconstruction (ASIR) algorithm improves the image quality at low-tube-voltage (80-kVp), high-tube-current (675-mA) multidetector abdominal computed tomography (CT) during the late hepatic arterial phase. This prospective, single-center HIPAA-compliant study was institutional review board approved. Informed patient consent was obtained. Ten patients (six men, four women; mean age, 63 years; age range, 51-77 years) known or suspected to have hypervascular liver tumors underwent dual-energy 64-section multidetector CT. High- and low-tube-voltage CT images were acquired sequentially during the late hepatic arterial phase of contrast enhancement. Standard convolution FBP was used to reconstruct 140-kVp (protocol A) and 80-kVp (protocol B) image sets, and ASIR (protocol C) was used to reconstruct 80-kVp image sets. The mean image noise; contrast-to-noise ratio (CNR) relative to muscle for the aorta, liver, and pancreas; and effective dose with each protocol were assessed. A figure of merit (FOM) was computed to normalize the image noise and CNR for each protocol to effective dose. Repeated-measures analysis of variance with Bonferroni adjustment for multiple comparisons was used to compare differences in mean CNR, image noise, and corresponding FOM among the three protocols. The noise power spectra generated from a custom phantom with each protocol were also compared. When image noise was normalized to effective dose, protocol C, as compared with protocols A (P = .0002) and B (P = .0001), yielded an approximately twofold reduction in noise. When the CNR was normalized to effective dose, protocol C yielded significantly higher CNRs for the aorta, liver, and pancreas than did protocol A (P = .0001 for all comparisons) and a significantly higher CNR for the liver than did protocol B (P = .003). Mean effective doses were 17.5 mSv +/- 0.6 (standard error) with protocol A and 5.1 mSv +/- 0.3 with protocols B and C
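    The dose normalization described above can be illustrated with small helpers. The abstract does not give the exact figure-of-merit formula; a common choice, assumed here, is CNR squared (or inverse noise squared) divided by effective dose, so that halving the dose at equal image quality doubles the FOM.

```python
def fom_cnr(cnr, dose_msv):
    # dose-normalized figure of merit for contrast-to-noise ratio (assumed CNR^2/dose)
    return cnr ** 2 / dose_msv

def fom_noise(noise, dose_msv):
    # dose-normalized figure of merit for image noise: lower noise at lower dose is better
    return 1.0 / (noise ** 2 * dose_msv)
```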

  17. Subaperture stitching algorithms: A comparison

    Science.gov (United States)

    Chen, Shanyong; Xue, Shuai; Wang, Guilin; Tian, Ye

    2017-05-01

    With the research focus of subaperture stitching interferometry shifting from flat wavefronts to aspheric ones, a variety of algorithms for stitching optimization have been proposed. In this paper we try to categorize and compare the algorithms by their modeling of the misalignment-induced subaperture aberrations, which are of low order. A simple way is to relate the aberrations to misalignment by linear approximation under a small-angle assumption, but this cannot exactly model the induced aberrations of aspheres. In general, the induced aberrations can be fitted to free polynomials and then removed from the subaperture measurements; however, this risks mixing up the surface error with the induced aberrations. The misalignment actually introduces different aberration terms in certain proportions, and the interrelation is determined through analytical modeling or ray tracing. The analytical model-based and ray tracing-based algorithms are both tedious, aperture shape-related, and surface type-specific. In contrast, the configuration space-based algorithm we proposed numerically calculates the surface height change under a rigid body transformation and is generally applicable to various surface types and aperture shapes. Simulations and experiments are presented to compare the stitching results when different algorithms are applied to null cylindrical subapertures measured with a computer-generated hologram. The configuration space-based algorithm shows superior flexibility and accuracy.
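    The simplest approach mentioned above, removing misalignment-induced low-order terms from the overlap difference, can be sketched in one dimension. This is an illustrative piston/tilt least-squares fit, not any of the paper's asphere-capable algorithms.

```python
def fit_piston_tilt(x, diff):
    """Least-squares fit of diff ~ piston + tilt * x over the overlap region."""
    n = len(x)
    sx, sxx = sum(x), sum(v * v for v in x)
    sd = sum(diff)
    sxd = sum(xi * di for xi, di in zip(x, diff))
    det = n * sxx - sx * sx
    piston = (sxx * sd - sx * sxd) / det
    tilt = (n * sxd - sx * sd) / det
    return piston, tilt
```

    Subtracting the fitted piston and tilt from one subaperture aligns it to its neighbor; the risk discussed in the abstract is that real surface error with the same low-order shape is removed along with the misalignment terms.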

  18. Diagnostic algorithms in Charcot-Marie-Tooth neuropathies: experiences from a German genetic laboratory on the basis of 1206 index patients.

    Science.gov (United States)

    Rudnik-Schöneborn, S; Tölle, D; Senderek, J; Eggermann, K; Elbracht, M; Kornak, U; von der Hagen, M; Kirschner, J; Leube, B; Müller-Felber, W; Schara, U; von Au, K; Wieczorek, D; Bußmann, C; Zerres, K

    2016-01-01

    We present clinical features and genetic results of 1206 index patients and 124 affected relatives who were referred for genetic testing of Charcot-Marie-Tooth (CMT) neuropathy at the laboratory in Aachen between 2001 and 2012. Genetic detection rates were 56% in demyelinating CMT (71% of autosomal dominant (AD) CMT1/CMTX), and 17% in axonal CMT (24% of AD CMT2/CMTX). Three genetic defects (PMP22 duplication/deletion, GJB1/Cx32 or MPZ/P0 mutation) were responsible for 89.3% of demyelinating CMT index patients in whom a genetic diagnosis was achieved, and the diagnostic yield of the three main genetic defects in axonal CMT (GJB1/Cx32, MFN2, MPZ/P0 mutations) was 84.2%. De novo mutations were detected in 1.3% of PMP22 duplications, 25% of MPZ/P0, and none in GJB1/Cx32. Motor nerve conduction velocity was uniformly <38 m/s in PMP22 duplication, >40 m/s in MFN2, and more variable in GJB1/Cx32 and MPZ/P0 mutations. Patients with CMT2A showed a broad clinical severity regardless of the type or position of the MFN2 mutation. Out of 75 patients with PMP22 deletions, 8 (11%) were categorized as CMT1 or CMT2. Diagnostic algorithms are still useful for cost-efficient mutation detection and for the interpretation of large-scale genetic data made available by next generation sequencing strategies. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Measurement of the top quark mass using dilepton events and a neutrino weighting algorithm with the D0 experiment at the Tevatron (Run II)

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, J.

    2007-07-01

    Several measurements of the top quark mass in dilepton final states with the D0 experiment are presented. The theoretical and experimental properties of the top quark are described, together with a brief introduction to the Standard Model of particle physics and the physics of hadron collisions. An overview of the experimental setup is given. The Tevatron at Fermilab is presently the highest-energy hadron collider in the world, with a center-of-mass energy of 1.96 TeV. There are two main experiments, called CDF and D0. A description of the components of the multipurpose D0 detector is given. The reconstruction of simulated events and data events is explained, and the criteria for the identification of electrons, muons, jets, and missing transverse energy are given. The kinematics of the dilepton final state are underconstrained; therefore, the top quark mass is extracted by the so-called neutrino weighting method. This method is introduced and several different approaches are described, compared, and enhanced. Results for the international summer conferences 2006 and winter 2007 are presented. The top quark mass measurement for the combination of all three dilepton channels with a dataset of 1.05 fb^-1 yields: m_top = 172.5 +- 5.5 (stat.) +- 5.8 (syst.) GeV. This result is presently the most precise top quark mass measurement of the D0 experiment in the dilepton channel. It entered the top quark mass world average of March 2007. (orig.)
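    The core of the neutrino weighting method is a weight scoring how well a hypothesized pair of neutrino momenta reproduces the measured missing transverse energy. A minimal sketch, with an assumed Gaussian resolution sigma and hypothetical inputs (the actual D0 analysis scans neutrino rapidities and integrates these weights over many hypotheses per event):

```python
import math

def neutrino_weight(met, nu1_pt, nu2_pt, sigma=15.0):
    """Gaussian agreement between summed neutrino pT and the measured missing ET."""
    wx = math.exp(-(nu1_pt[0] + nu2_pt[0] - met[0]) ** 2 / (2 * sigma ** 2))
    wy = math.exp(-(nu1_pt[1] + nu2_pt[1] - met[1]) ** 2 / (2 * sigma ** 2))
    return wx * wy
```

    The weight is maximal (1.0) when the two hypothesized neutrinos exactly account for the missing transverse energy, and falls off as the mismatch grows relative to the resolution.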

  20. Status of the DAMIC Direct Dark Matter Search Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar-Arevalo, A.; et al.

    2015-09-30

    The DAMIC experiment uses fully depleted, high resistivity CCDs to search for dark matter particles. With an energy threshold ~50 eV_ee, and excellent energy and spatial resolutions, the DAMIC CCDs are well-suited to identify and suppress radioactive backgrounds, having an unrivaled sensitivity to WIMPs with masses <6 GeV/c^2. Early results motivated the construction of a 100 g detector, DAMIC100, currently being installed at SNOLAB. This contribution discusses the installation progress, new calibration efforts near the threshold, a preliminary result with 2014 data, and the prospects for physics results after one year of data taking.

  1. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all issues of theoretical algorithmics and applications in various fields including graph algorithms, computational geometry, scheduling, approximation algorithms, network algorithms, data storage and manipulation, combinatorics, sorting, searching, online algorithms, optimization, etc.

  2. FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2017-03-01

    Full Text Available Modern real-world science and engineering problems can be classified as multi-objective optimization problems, which demand expedient and efficient stochastic algorithms to respond to the optimization needs. This paper presents an object-oriented software application that implements a fireworks optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm showed promising results when compared to other population-based and iterative meta-heuristic algorithms on five standard benchmark problems. The software application was implemented in Java with an interactive interface which allows for easy modification and extended experimentation. Additionally, this paper validates the effect of runtime on the algorithm's performance.
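    A minimal sketch of the explosion idea: better fireworks spawn more sparks with smaller amplitude, and an elitist selection keeps the best points. The amplitude and spark-count formulas here are invented placeholders, not the paper's; the implementation described in the abstract is in Java.

```python
import random

def fireworks_minimize(f, dim=2, n=5, iters=100, bound=5.0, seed=0):
    rng = random.Random(seed)
    clamp = lambda v: max(-bound, min(bound, v))
    pop = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        fits = [f(p) for p in pop]
        fmin, fmax = min(fits), max(fits)
        sparks = []
        for p, fit in zip(pop, fits):
            # better fireworks get more sparks with a smaller explosion amplitude
            quality = (fmax - fit) / (fmax - fmin + 1e-12)   # 1 = best, 0 = worst
            amp = 0.05 + (1.0 - quality) * 1.0
            for _ in range(2 + int(4 * quality)):
                sparks.append([clamp(v + rng.uniform(-amp, amp)) for v in p])
        pop = sorted(pop + sparks, key=f)[:n]   # elitist selection
    return min(pop, key=f)
```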

  3. Privacy preserving randomized gossip algorithms

    KAUST Repository

    Hanzely, Filip

    2017-06-23

    In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes. We give iteration complexity bounds for all methods, and perform extensive numerical experiments.
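    A minimal sketch of (non-private) randomized gossip for average consensus, the baseline problem the paper starts from: at each step a random edge is activated and both endpoints replace their values with the pairwise mean. The privacy-preserving variants in the paper modify exactly this update so that initial values are not revealed.

```python
import random

def gossip_average(values, edges, iters=2000, seed=1):
    """Randomized gossip: activate a random edge, average its two endpoints."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(iters):
        i, j = rng.choice(edges)
        m = 0.5 * (x[i] + x[j])   # the pairwise mean preserves the global sum
        x[i] = x[j] = m
    return x
```

    On a connected graph the values contract geometrically to the global average, which is why iteration complexity bounds of the kind the paper gives are natural for these methods.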

  4. The algorithm design manual

    CERN Document Server

    Skiena, Steven S

    2008-01-01

    Explaining designing algorithms, and analyzing their efficacy and efficiency, this book covers combinatorial algorithms technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms. It contains the catalog of algorithmic resources, implementations and a bibliography

  5. Stationary algorithmic probability

    National Research Council Canada - National Science Library

    Müller, Markus

    2010-01-01

    ..., since their actual values depend on the choice of the universal reference computer. In this paper, we analyze a natural approach to eliminate this machine-dependence. Our method is to assign algorithmic probabilities to the different...

  6. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  7. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    Science.gov (United States)

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we classify the context into public context and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm; we can classify the mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  8. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  9. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  10. Measurement of the Top Quark Mass using Dilepton Events and a Neutrino Weighting Algorithm with the D0 Experiment at the Tevatron (Run II)

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg [Univ. of Bonn (Germany)

    2007-01-01

    A measurement of the top quark mass by the D0 experiment at Fermilab in the dilepton final states is presented. The comparison of the measured top quark masses in different final states allows an important consistency check of the Standard Model. Inconsistent results would be a clear hint of a misinterpretation of the analyzed data set. With the exception of the Higgs boson, all particles predicted by the Standard Model have been found. The search for the Higgs boson is one of the main focuses in high energy physics. The theory section will discuss the close relationship between the physics of the Higgs boson and the top quark.

  11. Results from the GPCP algorithm intercomparison programme

    Energy Technology Data Exchange (ETDEWEB)

    Ebert, E.E.; Manton, M.J. [Bureau of Meteorology Research Center, Melbourne (Australia); Arkin, P.A. [National Centers for Environmental Prediction, Washington, DC (United States)] [and others]

    1996-12-01

    Three algorithm intercomparison experiments have recently been conducted as part of the Global Precipitation Climatology Project with the goal of (a) assessing the skill of current satellite rainfall algorithms, (b) understanding the differences between them, and (c) moving toward improved algorithms. The results of these experiments are summarized and intercompared in this paper. It was found that the skill of satellite rainfall algorithms depends on the regime being analyzed, with algorithms producing very good results in the tropical western Pacific and over Japan and its surrounding waters during summer, but relatively poor rainfall estimates over western Europe during late winter. Monthly rainfall was estimated most accurately by algorithms using geostationary infrared data, but algorithms using polar data [Advanced Very High Resolution Radiometer and Special Sensor Microwave/Imager (SSM/I)] were also able to produce good monthly rainfall estimates when data from two satellites were available. In most cases, SSM/I algorithms showed significantly greater skill than IR-based algorithms in estimating instantaneous rain rates. 28 refs., 4 figs., 3 tabs.

  12. Adaptive cockroach swarm algorithm

    Science.gov (United States)

    Obagbuwa, Ibidun C.; Abidoye, Ademola P.

    2017-07-01

    An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent any possible population collapse, maintain population diversity, and create an adaptive search in each iteration. The performance of the proposed algorithm on 16 global optimization benchmark function problems was evaluated and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization, and artificial bee colony algorithms.

  13. HIV testing experiences and their implications for patient engagement with HIV care and treatment on the eve of 'test and treat'

    DEFF Research Database (Denmark)

    Wringe, Alison; Moshabela, Mosa; Nyamukapa, Constance

    2017-01-01

    and engage in care, some delivered static, morally charged messages regarding sexual behaviours and expectations of clinic use which discouraged future care seeking. Repeat testing was commonly reported, reflecting patients’ doubts over the accuracy of prior results and beliefs that antiretroviral therapy...

  14. "As I deeply understand the importance and greatly admire the poetry of experiment..." (on the eve of P N Lebedev's anniversary)

    Science.gov (United States)

    Shcherbakov, R. N.

    2016-02-01

    Whatever we think of the eminent Russian physicist P N Lebedev, whatever our understanding of how his work was affected by circumstances in and outside of Russia, whatever value is placed on the basic elements of his twenty-year career and personal life and of his great successes and, happily, not so great failures, and whatever the stories of his happy times and his countless misfortunes, one thing remains clear — P N Lebedev's skill and talent served well to foster the development of global science and to improve the reputation of Russia as a scientific nation.

  15. Engineering a Cache-Oblivious Sorting Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard for comparison-based sorting, as well as with recent cache-aware proposals. The main result is a carefully implemented cache-oblivious sorting algorithm, which our experiments show can be faster than the best Quicksort implementation we are able to find, already for input sizes well within the limits of RAM. It is also at least as fast as the recent cache-aware implementations included in the test. On disk the difference is even more pronounced regarding Quicksort and the cache-aware algorithms, whereas the algorithm is slower than a careful implementation of multiway Mergesort such as TPIE.

  16. Ensemble algorithms in reinforcement learning.

    Science.gov (United States)

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms.
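    The majority voting (MV) combiner can be sketched directly: each RL algorithm proposes the greedy action of its own value function, and the most-voted action is selected. The Q-tables here are hypothetical nested dicts, not the paper's representations.

```python
from collections import Counter

def majority_vote_action(q_tables, state):
    """Majority voting: each RL algorithm votes for its own greedy action."""
    greedy = [max(q[state], key=q[state].get) for q in q_tables]
    return Counter(greedy).most_common(1)[0][0]
```

    The other combiners in the paper differ only in how votes are weighted, e.g. Boltzmann multiplication combines the algorithms' action probabilities instead of their single greedy choices.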

  17. Poster — Thur Eve — 27: Flattening Filter Free VMAT Quality Assurance: Dose Rate Considerations for Detector Response

    Energy Technology Data Exchange (ETDEWEB)

    Viel, Francis; Duzenli, Cheryl [Department of Physics and Astronomy, University of British Columbia (Canada); British Columbia Cancer Agency, Department of Medical Physics, Vancouver Centre (Canada); Camborde, Marie-Laure; Strgar, Vincent; Horwood, Ron; Atwal, Parmveer; Gete, Ermias [British Columbia Cancer Agency, Department of Medical Physics, Vancouver Centre (Canada); Karan, Tania [Stronach Regional Cancer Centre, Newmarket, ON (Canada)

    2014-08-15

    Introduction: Radiation detector responses can be affected by dose rate. Due to the higher dose per pulse and wider range of MU rates in FFF beams, detector responses should be characterized prior to implementation of QA protocols for FFF beams. During VMAT delivery, the MU rate may also vary dramatically within a treatment fraction. This study looks at the dose per pulse variation throughout a 3D volume for typical VMAT plans and the response characteristics for a variety of detectors, and makes recommendations on the design of QA protocols for FFF VMAT QA. Materials and Methods: Linac log file data and a simplified dose calculation algorithm are used to calculate dose per pulse for a variety of clinical VMAT plans, on a voxel by voxel basis, as a function of time in a cylindrical phantom. Diode and ion chamber array responses are characterized over the relevant range of dose per pulse and dose rate. Results: Dose per pulse ranges from <0.1 mGy/pulse to 1.5 mGy/pulse in a typical VMAT treatment delivery using the 10XFFF beam. Diode detector arrays demonstrate increased sensitivity to dose (+/− 3%) with increasing dose per pulse over this range. Ion chamber arrays demonstrate decreased sensitivity to dose (+/− 1%) with increasing dose rate over this range. Conclusions: QA protocols should be designed taking into consideration inherent changes in detector sensitivity with dose rate. Neglecting to account for changes in detector response with dose per pulse can lead to skewed QA results.

  18. Software For Genetic Algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.
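    SPLICER itself is written in Think C; as an illustration of the kind of genetic-algorithm loop such a framework supports, here is a minimal Python GA (tournament selection, one-point crossover, bit-flip mutation, elitism) on the OneMax toy problem. All parameters are illustrative, not SPLICER's.

```python
import random

def onemax_ga(length=20, pop_size=30, gens=60, seed=3):
    rng = random.Random(seed)
    fitness = sum                                     # OneMax: count the 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [max(pop, key=fitness)[:]]              # elitism: keep the best
        while len(nxt) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, length)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):                   # bit-flip mutation
                if rng.random() < 1.0 / length:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```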

  19. Modified Clipped LMS Algorithm

    National Research Council Canada - National Science Library

    Lotfizad, Mojtaba; Yazdi, Hadi Sadoghi

    2005-01-01

    A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization...

  20. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations.The proposal also included the development of a star tracker breadboard to test the algorithms performances.......Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations.The proposal also included the development of a star tracker breadboard to test the algorithms performances....

  1. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available techniques and is organized by algorithmic paradigm.
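    One of the simplest techniques such a survey covers is first-fit (greedy) colouring, which uses at most Delta+1 colours on a graph of maximum degree Delta. A minimal sketch over an adjacency-list dict:

```python
def greedy_colouring(adj):
    """First-fit: give each vertex the smallest colour unused by its neighbours."""
    colour = {}
    for v in sorted(adj):                 # fixed vertex order
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour
```

    The number of colours used depends on the vertex order; much of the algorithmic literature is about choosing better orders or entirely different paradigms with stronger guarantees.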

  2. Optimal Mixing Evolutionary Algorithms

    NARCIS (Netherlands)

    D. Thierens (Dirk); P.A.N. Bosman (Peter); N. Krasnogor

    2011-01-01

    A key search mechanism in Evolutionary Algorithms is the mixing or juxtaposing of partial solutions present in the parent solutions. In this paper we look at the efficiency of mixing in genetic algorithms (GAs) and estimation-of-distribution algorithms (EDAs). We compute the mixing...

  3. Poster - Thurs Eve-01: Comparison of clinical IMRT plan quality and delivery accuracy: Few large segments vs many small segments.

    Science.gov (United States)

    Sawchuk, S; Karnas, S; McCune, Kent; Mulligan, M; Dar, R; Chen, J

    2008-07-01

    Commercial radiation treatment planning systems for intensity modulation use optimization algorithms that can vary multi-leaf collimator (MLC) segment sizes, segment number and the minimum number of monitor units (MU) per segment. These parameters are varied according to the treatment site, size, location, and proximity to the organs at risk. This study compares the utility of optimization using (Case A) few large segments and a higher minimum MU per segment to that of (case B) using many smaller segments with a lower minimum MU per segment. For Case A, the patient benefits from a reduced treatment time associated with fewer MUs and fewer MLC movements and an increased accuracy in dose delivery. Also, shorter treatment times may lead to fewer patient movement uncertainties. The accumulated MLC leakage dose is reduced, the patient specific quality assurance (QA) is more manageable and small field modeling inaccuracies are reduced. Pinnacle-3 (v8) plans are generated with direct machine parameter optimization (DMPO) for both scenarios. Three dimensional dose distributions and dose volume histograms are used to compare plan quality. We compare plans using few large MLC segments with those using many small MLC segments for some clinical cases. Improved plan quality is demonstrated using fewer MLC segments. Dose QAs are performed and compared for each scenario using MapCheck and film. When comparing dose delivery accuracy between different MU per segment settings, a decrease in delivery errors with minimum MU size is observed. In conclusion, few large MLC segments with larger area should be used when possible. © 2008 American Association of Physicists in Medicine.

  4. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  5. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as those for linear arrays, mesh-connected computers, and cube-connected computers. The algorithms can also be applied on shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the...
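    For the linear-array model mentioned above, a classic method is odd-even transposition sort: n phases of independent compare-exchanges that a linear processor array would perform in parallel, simulated sequentially here.

```python
def odd_even_transposition_sort(a):
    """Simulate the n-phase compare-exchange schedule of a linear processor array."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = phase % 2                 # alternate odd and even phases
        # the compare-exchanges below are independent and would run in parallel
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

    A standard result is that n phases always suffice, giving O(n) parallel time with n processors on a linear array.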

  6. Digital Arithmetic: Division Algorithms

    DEFF Research Database (Denmark)

    Montuschi, Paolo; Nannarelli, Alberto

    2017-01-01

    This entry explains the basic algorithms, suitable for hardware and software, to implement division in computer systems. Two classes of algorithms implement division or square root: digit-recurrence and multiplicative (e.g., Newton-Raphson) algorithms. The first class of algorithms, the digit-recurrence type, is particularly suitable for hardware implementation as it requires modest resources and provides good performance on contemporary technology. The second class of algorithms, the multiplicative type, requires ... implement it in hardware to not compromise the overall computation performances.
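    The digit-recurrence class can be illustrated with restoring integer division, which retires one quotient bit per iteration via shift and trial subtraction. A sketch for non-negative operands, not tied to any particular hardware design:

```python
def restoring_divide(dividend, divisor, bits=32):
    """Digit-recurrence (restoring) division: one quotient bit per iteration."""
    assert dividend >= 0 and divisor > 0
    q, r = 0, 0
    for i in reversed(range(bits)):
        r = (r << 1) | ((dividend >> i) & 1)   # shift next dividend bit into remainder
        if r >= divisor:                       # trial subtraction succeeds
            r -= divisor
            q |= 1 << i                        # record the quotient bit
    return q, r
```

    Multiplicative methods such as Newton-Raphson instead refine an approximate reciprocal, converging quadratically but requiring a fast multiplier.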

  7. Weighted Automata Algorithms

    Science.gov (United States)

    Mohri, Mehryar

    Weighted automata and transducers are widely used in modern applications in bioinformatics and text, speech, and image processing. This chapter describes several fundamental weighted automata and shortest-distance algorithms including composition, determinization, minimization, and synchronization, as well as single-source and all-pairs shortest distance algorithms over general semirings. It presents the pseudocode of these algorithms, gives an analysis of their running time complexity, and illustrates their use in some simple cases. Many other complex weighted automata and transducer algorithms used in practice can be obtained by combining these core algorithms.

  8. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. It can also be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
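
    The update rule described above is simple enough to sketch. The following is an illustrative rendering (not the paper's code) of LMS system identification with three-level quantization of the input inside the weight update; the step size and threshold values are arbitrary choices for the demo:

    ```python
    import random

    def quantize(x, threshold):
        """Three-level clipping: +1, 0, or -1 depending on sign and threshold."""
        if x > threshold:
            return 1.0
        if x < -threshold:
            return -1.0
        return 0.0

    def mclms_identify(x, d, num_taps, mu, threshold):
        """Identify an FIR system from input x[n] and desired output d[n].
        The weight update multiplies the error by the quantized input,
        replacing most multiplications with sign operations."""
        w = [0.0] * num_taps
        for n in range(num_taps - 1, len(x)):
            frame = x[n - num_taps + 1 : n + 1][::-1]   # x[n], x[n-1], ...
            y = sum(wi * xi for wi, xi in zip(w, frame))
            e = d[n] - y
            w = [wi + mu * e * quantize(xi, threshold)
                 for wi, xi in zip(w, frame)]
        return w
    ```

    For zero-mean input, the expected update is still proportional to the Wiener gradient, which is why the weights converge to the optimum despite the crude quantization.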

  9. A Search of Low-Mass WIMPs with p-type Point Contact Germanium Detector in the CDEX-1 Experiment

    CERN Document Server

    Zhao, W; Kang, K J; Cheng, J P; Li, Y J; Wong, H T; Lin, S T; Chang, J P; Chen, J H; Chen, Q H; Chen, Y H; Deng, Z; Du, Q; Gong, H; Hao, X Q; He, H J; He, Q J; Huang, H X; Huang, T R; Jiang, H; Li, H B; Li, J; Li, J M; Li, X; Li, X Y; Li, Y L; Lin, F K; Liu, S K; Lü, L C; Ma, H; Ma, J L; Mao, S J; Qin, J Q; Ren, J; Ruan, X C; Sharma, V; Shen, M B; Singh, L; Singh, M K; Soma, A K; Su, J; Tang, C J; Wang, J M; Wang, L; Wang, Q; Wu, S Y; Wu, Y C; Xianyu, Z Z; Xiao, R Q; Xing, H Y; Xu, F Z; Xu, Y; Xu, X J; Xue, T; Yang, L T; Yang, S W; Yi, N; Yu, C X; Yu, H; Yu, X Z; Zeng, M; Zeng, X H; Zeng, Z; Zhang, L; Zhang, Y H; Zhao, M G; Zhou, Z Y; Zhu, J J; Zhu, W B; Zhu, X Z; Zhu, Z H

    2016-01-01

    The CDEX-1 experiment conducted a search for low-mass (< 10 GeV/c²) Weakly Interacting Massive Particle (WIMP) dark matter at the China Jinping Underground Laboratory using a p-type point-contact germanium detector with a fiducial mass of 915 g at a physics analysis threshold of 475 eVee. We report the hardware setup, detector characterization, data acquisition, and analysis procedures of this experiment. No excess of unidentified events is observed after subtraction of known background. Using 335.6 kg-days of data, exclusion constraints on the WIMP-nucleon spin-independent and spin-dependent couplings are derived.

  10. Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhang Dongyang

    2014-02-01

    Full Text Available Because localization algorithms for large-scale wireless sensor networks have shortcomings in both positioning accuracy and time complexity compared to traditional localization algorithms, this paper presents a fast multidimensional scaling (MDS) localization algorithm. Through fast MDS positioning, fast mapping initialization, fast mapping, and coordinate transformation, the algorithm obtains schematic node coordinates, initializes the MDS coordinates, produces an accurate estimate of the node coordinates, and uses Procrustes analysis to align the coordinates into the final position coordinates of the nodes. There are four steps, and the thesis gives the specific implementation steps of the algorithm. Finally, the algorithm is compared experimentally with stochastic algorithms and the classical MDS algorithm on concrete examples. Experimental results show that the proposed localization algorithm maintains multidimensional scaling positioning accuracy under certain circumstances while greatly improving the speed of operation.
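
    The classical MDS core that such localization algorithms build on can be sketched briefly. This is the standard double-centering construction (not the paper's fast variant, which adds fast mapping and Procrustes alignment stages on top); it recovers node coordinates from pairwise distances up to rotation and translation:

    ```python
    import numpy as np

    def classical_mds(dist, dim=2):
        """Classical MDS: recover dim-dimensional coordinates (up to
        rotation/translation) from a matrix of pairwise distances."""
        d2 = np.asarray(dist, dtype=float) ** 2
        n = d2.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
        B = -0.5 * J @ d2 @ J                      # double-centered Gram matrix
        vals, vecs = np.linalg.eigh(B)
        idx = np.argsort(vals)[::-1][:dim]         # largest eigenvalues first
        scale = np.sqrt(np.clip(vals[idx], 0.0, None))
        return vecs[:, idx] * scale
    ```

    With exact Euclidean distances between points in a `dim`-dimensional plane, the reconstruction reproduces all pairwise distances exactly; the expensive eigendecomposition is precisely what "fast MDS" variants try to approximate.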

  11. Intelligent perturbation algorithms for space scheduling optimization

    Science.gov (United States)

    Kurtzman, Clifford R.

    1991-01-01

    Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithms and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedules (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and an example task - communications check.

  12. Classification of ETM+ Remote Sensing Image Based on Hybrid Algorithm of Genetic Algorithm and Back Propagation Neural Network

    Directory of Open Access Journals (Sweden)

    Haisheng Song

    2013-01-01

    Full Text Available The back propagation neural network (BPNN) algorithm can be used for supervised classification in remote sensing image processing. But its defects are obvious: it falls into local minima easily, converges slowly, and the number of intermediate hidden-layer nodes is difficult to determine. The genetic algorithm (GA) has the advantages of global optimization and resistance to local minima, but it has the disadvantage of poor local search capability. This paper uses GA to generate the initial structure of the BPNN. Then, a stable, efficient, and fast BP classification network is obtained through fine adjustments with the improved BP algorithm. Finally, we use the hybrid algorithm to classify a remote sensing image and compare it with the improved BP algorithm and the traditional maximum likelihood classification (MLC) algorithm. Experimental results show that the hybrid algorithm outperforms the improved BP algorithm and the MLC algorithm.
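
    The division of labor (GA supplies a good global starting point, gradient descent stands in for backpropagation fine-tuning) can be illustrated on a toy problem. The sketch below fits a single linear neuron y = w*x + b; everything about it (population size, rates, the neuron itself) is a stand-in for the paper's full GA-BPNN, not its actual configuration:

    ```python
    import random

    def ga_seed_then_gd(xs, ys, pop=30, gens=40, seed=0):
        """Toy GA-then-gradient-descent hybrid on a linear neuron y = w*x + b.
        The GA phase evolves (w, b) pairs globally; the gradient phase
        (standing in for backprop) fine-tunes the best individual."""
        rng = random.Random(seed)

        def mse(w, b):
            return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

        # --- GA phase: truncation selection, arithmetic crossover, mutation ---
        popn = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop)]
        for _ in range(gens):
            popn.sort(key=lambda p: mse(*p))
            parents = popn[:pop // 2]
            children = []
            while len(parents) + len(children) < pop:
                (w1, b1), (w2, b2) = rng.sample(parents, 2)
                w, b = (w1 + w2) / 2, (b1 + b2) / 2
                if rng.random() < 0.2:                      # mutation
                    w += rng.gauss(0, 0.1)
                    b += rng.gauss(0, 0.1)
                children.append((w, b))
            popn = parents + children
        w, b = min(popn, key=lambda p: mse(*p))

        # --- gradient-descent phase: local fine-tuning of the GA seed ---
        for _ in range(200):
            gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
            gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
            w -= 0.05 * gw
            b -= 0.05 * gb
        return w, b
    ```

    The GA avoids bad basins; gradient descent then does the precise local work GA is poor at, which is the complementary pairing the abstract describes.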

  13. “Never break them in two. Never put one over the other. Eve is Mary’s mother. Mary is the daughter of Eve”: Toni Morrison’s Womanist Gospel of Self

    Directory of Open Access Journals (Sweden)

    Claude LE FUSTEC

    2011-03-01

    characterized as outcasts. From Pecola, the alienated victim of the WASP definition of beauty to Consolata, the “revised Reverend Mother,” Morrison’s fiction appears to weave its way through the moral complexities of African American female resistance to white male rule—theologically based on the canonical reading of the Fall as supreme calamity caused by Eve, the arch-temptress and sinner—to hand authority back to the pariah and wrongdoer: the black woman. However, far from boiling down to this deconstructive strategy, Morrison’s fiction seems to oppose religious doctrine only so as to sound the ontological depth of Christianity: while challenging the theological basis of sexist and racist assumptions, Morrison poses as an authoritative spiritual force able to craft her own Gospel of Self, based on cathartic moments of revelation where mostly female characters experience a mystic sense of connectedness to self and other, time and place. An African American female variation on the New Testament’s “Kingdom within,” Morrison’s novels echo black abolitionist Sojourner Truth’s famous speech “Ain’t I a Woman?” and its call for theological revision and gender emancipation.

  14. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back propagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we use GA-BPN for image noise filtering. First, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN recognizes noise pixels in the target image. Finally, an adaptive weighted average algorithm recovers the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  15. Cloud Particles Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Wei Li

    2015-01-01

    Full Text Available Many evolutionary algorithms have attracted the attention of researchers and have been applied to solve optimization problems. This paper presents a new optimization method called the cloud particles evolution algorithm (CPEA), which solves optimization problems based on the cloud formation process and the phase transformations of natural substances. The cloud is assumed to have three states in the proposed algorithm: the gaseous state represents global exploration, the liquid state represents the intermediate process from global exploration to local exploitation, and the solid state represents local exploitation. The cloud is composed of discrete and independent particles, which use the phase transformations among the three states to realize global exploration and local exploitation in the optimization process. Moreover, the cloud particles not only realize survival of the fittest through a competition mechanism but also ensure the diversity of the cloud particles through a reciprocity mechanism. The effectiveness of the algorithm is validated on different benchmark problems. The proposed algorithm is compared with a number of other well-known optimization algorithms, and the experimental results show that the cloud particles evolution algorithm has higher efficiency than some other algorithms.

  16. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  17. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  18. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  19. Rules Extraction with an Immune Algorithm

    Directory of Open Access Journals (Sweden)

    Deqin Yan

    2007-12-01

    Full Text Available In this paper, a method of extracting rules from information systems with immune algorithms is proposed. An immune algorithm based on a sharing mechanism is designed to extract rules. The principle of sharing and competing for resources in the sharing mechanism is consistent with the relationship of sharing and rivalry among rules. In order to extract rules efficiently, a new concept of flexible confidence and rule measurement is introduced. Experiments demonstrate that the proposed method is effective.

  20. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework, and experiments involving kernel-based face recognition. Included within are algorithms of kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning areas with advanced face recognition methods and its new

  1. Poster — Thur Eve — 18: Cherenkov Emission By High-Energy Radiation Therapy Beams: A Characterization Study

    Energy Technology Data Exchange (ETDEWEB)

    Zlateva, Y.; El Naqa, I. [Medical Physics Unit, Department of Oncology, McGill University, Montreal, QC (Canada); Quitoriano, N. [Department of Mining and Materials Engineering McGill University, Montreal, QC (Canada)

    2014-08-15

    We investigate Cherenkov emission (CE) by radiotherapy beams via radiation dose-versus-CE correlation analyses, CE detection optimization by means of a spectral shift towards the near-infrared (NIR) window of biological tissue, and comparison of CE to on-board MV imaging. Dose-CE correlation was investigated via simulation and experiment. A Monte Carlo (MC) CE simulator was designed using Geant4. Experimental phantoms include: water; tissue-simulating phantom composed of water, Intralipid®, and beef blood; plastic phantom with solid water insert. The detector system comprises an optical fiber and diffraction-grating spectrometer incorporating a front/back-illuminated CCD. The NIR shift was carried out with CdSe/ZnS quantum dots (QDs), emitting at (650±10) nm. CE and MV images were acquired with a CMOS camera and electronic portal imaging device. MC and experimental studies indicate a strong linear dose-CE correlation (Pearson coefficient > 0.99). CE by an 18-MeV beam was effectively NIR-shifted in water and a tissue-simulating phantom, exhibiting a significant increase at 650 nm for QD depths up to 10 mm. CE images exhibited relative contrast superior to MV images by a factor of 30. Our work supports the potential for application of CE in radiotherapy online imaging for patient setup and treatment verification, since CE is intrinsic to the beam and non-ionizing and QDs can be used to improve CE detectability, potentially yielding image quality superior to MV imaging for the case of low-density-variability, low-optical-attenuation materials (ex: breast/oropharynx). Ongoing work involves microenvironment functionalization of QDs and application of multi-channel spectrometry for simultaneous acquisition of dosimetric and tumor oxygenation signals.

  2. QPSO-Based Adaptive DNA Computing Algorithm

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This new approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm uses QPSO for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy relative to the DNA computing algorithm.

  3. The TROPOMI surface UV algorithm

    Directory of Open Access Journals (Sweden)

    A. V. Lindfors

    2018-02-01

    Full Text Available The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these is available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.

  4. Innovations in Lattice QCD Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Konstantinos Orginos

    2006-06-25

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today.

  5. Algorithms for finite rings

    NARCIS (Netherlands)

    Ciocanea Teodorescu I.,

    2016-01-01

    In this thesis we are interested in describing algorithms that answer questions arising in ring and module theory. Our focus is on deterministic polynomial-time algorithms and rings and modules that are finite. The first main result of this thesis is a solution to the module isomorphism problem in

  6. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education; Volume 2, Issue 8, August 1997, pp. 6–17. Algorithms – Algorithm Design Techniques. R K Shyamasundar. Series Article. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/08/0006-0017

  7. Genetic Algorithm Parameter Analysis

    OpenAIRE

    Belmont-Moreno, Ernesto; Instituto de Fisica, UNAM

    2000-01-01

    The energy minimizing problem of atomic cluster configuration and the 2D spin glass problem are used for testing our genetic algorithm. It is shown to be crucial to adjust the degree of mutation and the population size for the efficiency of the algorithm.

  8. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  9. Algorithms for protein design.

    Science.gov (United States)

    Gainza, Pablo; Nisonoff, Hunter M; Donald, Bruce R

    2016-08-01

    Computational structure-based protein design programs are becoming an increasingly important tool in molecular biology. These programs compute protein sequences that are predicted to fold to a target structure and perform a desired function. The success of a program's predictions largely relies on two components: first, the input biophysical model, and second, the algorithm that computes the best sequence(s) and structure(s) according to the biophysical model. Improving both the model and the algorithm in tandem is essential to improving the success rate of current programs, and here we review recent developments in algorithms for protein design, emphasizing how novel algorithms enable the use of more accurate biophysical models. We conclude with a list of algorithmic challenges in computational protein design that we believe will be especially important for the design of therapeutic proteins and protein assemblies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search

    Directory of Open Access Journals (Sweden)

    Xingwang Huang

    2017-01-01

    Full Text Available The binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the velocity update process of the algorithm is the same as in BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results obtained in the benchmark function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparison with several other heuristic algorithms on zero-one knapsack problems also verifies that the proposed algorithm is better able to avoid local minima.
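
    The "binary" part of BBA comes from a transfer function that maps each bat's real-valued velocity component to a bit probability. A minimal sketch of that step follows (names are mine, and the paper's IBBA additions, dynamic inertia weight and neighborhood search, are not reproduced here):

    ```python
    import math
    import random

    def transfer(v):
        """Classic S-shaped transfer function: maps a real velocity
        component to the probability of the corresponding bit being 1."""
        return 1.0 / (1.0 + math.exp(-v))

    def binary_position_update(velocity, rng=random):
        """One binary position update: bit j is set to 1 with probability
        transfer(velocity[j]), otherwise 0."""
        return [1 if rng.random() < transfer(v) else 0 for v in velocity]
    ```

    Large positive velocities push a bit toward 1 and large negative velocities toward 0, while velocities near zero leave the bit essentially random, which is where the premature-convergence discussion in the abstract comes in.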

  11. Ouroboros: A Tool for Building Generic, Hybrid, Divide & Conquer Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J R; Foster, I

    2003-05-01

    A hybrid divide and conquer algorithm is one that switches from a divide and conquer to an iterative strategy at a specified problem size. Such algorithms can provide significant performance improvements relative to alternatives that use a single strategy. However, the identification of the optimal problem size at which to switch for a particular algorithm and platform can be challenging. We describe an automated approach to this problem that first conducts experiments to explore the performance space on a particular platform and then uses the resulting performance data to construct an optimal hybrid algorithm on that platform. We implement this technique in a tool, "Ouroboros", that automatically constructs a high-performance hybrid algorithm from a set of registered algorithms. We present results obtained with this tool for several classical divide and conquer algorithms, including matrix multiply and sorting, and report speedups of up to six times achieved over non-hybrid algorithms.
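
    The cutoff idea is easy to demonstrate. Below is a hedged sketch (not Ouroboros itself) of a hybrid merge sort that switches to insertion sort below a threshold; the optimal `cutoff` is exactly the platform-dependent quantity such a tool would measure and select automatically:

    ```python
    def insertion_sort(a, lo, hi):
        """In-place insertion sort of a[lo:hi] -- the iterative base case."""
        for i in range(lo + 1, hi):
            key, j = a[i], i - 1
            while j >= lo and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    def merge(left, right):
        """Standard two-way merge of two sorted lists."""
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    def hybrid_mergesort(a, lo=0, hi=None, cutoff=32):
        """Merge sort that switches to insertion sort below `cutoff`
        elements; sorts a[lo:hi] in place and returns a."""
        if hi is None:
            hi = len(a)
        if hi - lo <= cutoff:
            insertion_sort(a, lo, hi)
            return a
        mid = (lo + hi) // 2
        hybrid_mergesort(a, lo, mid, cutoff)
        hybrid_mergesort(a, mid, hi, cutoff)
        a[lo:hi] = merge(a[lo:mid], a[mid:hi])
        return a
    ```

    With `cutoff=1` this degenerates to pure merge sort; raising it trades recursion overhead for the insertion sort's better constant factors on small inputs, which is where the measured crossover point matters.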

  12. Evaluation of an aSi-EPID with flattening filter free beams: applicability to the GLAaS algorithm for portal dosimetry and first experience for pretreatment QA of RapidArc.

    Science.gov (United States)

    Nicolini, G; Clivio, A; Vanetti, E; Krauss, H; Fenoglietto, P; Cozzi, L; Fogliata, A

    2013-11-01

    To demonstrate the feasibility of portal dosimetry with an amorphous silicon megavoltage imager for flattening filter free (FFF) photon beams by means of the GLAaS methodology, and to validate it for pretreatment quality assurance of volumetric modulated arc therapy (RapidArc). The GLAaS algorithm, developed for flattened beams, was applied to FFF beams of nominal energy 6 and 10 MV generated by a Varian TrueBeam (TB). The amorphous silicon electronic portal imager [named megavoltage imager (MVI) on TB] was used to generate integrated images that were converted into matrices of absorbed dose to water. To enable GLAaS use under the increased dose-per-pulse and dose-rate conditions of the FFF beams, a new operational source-detector distance (SDD) was identified to solve detector saturation issues. Empirical corrections were defined to account for the shape of the FFF beam profiles, expanding the original methodology of beam profile and arm backscattering correction. GLAaS for FFF beams was validated on pretreatment verification of RapidArc plans for three different TB linacs. In addition, the first pretreatment results from clinical experience on 74 arcs are reported in terms of γ analysis. The MVI saturates at 100 cm SDD for FFF beams, but this can be avoided if images are acquired at 150 cm for all nominal dose rates of FFF beams. The rotational stability of the gantry-imager system was tested and resulted in a minimal apparent imager displacement during rotation of 0.2 ± 0.2 mm at SDD = 150 cm. The accuracy of this approach was tested with three different Varian TrueBeam linacs from different institutes. Data were stratified per energy and machine and showed no dependence on beam quality or MLC model. The results from clinical pretreatment quality assurance provided a gamma agreement index (GAI) in the field area for 6 and 10 MV FFF beams of (99.8 ± 0.3)% and (99.5 ± 0.6)% with distance-to-agreement and dose-difference criteria set to 3 mm/3% with 2 mm/2

  13. A decision algorithm for patch spraying

    DEFF Research Database (Denmark)

    Christensen, Svend; Heisel, Torben; Walter, Mette

    2003-01-01

    A method that estimates an economically optimal herbicide dose according to site-specific weed composition and density is presented in this paper. The method was termed a ‘decision algorithm for patch spraying’ (DAPS) and was evaluated in a 5-year experiment in Denmark. DAPS consists of a competition model, a herbicide dose–response model, and an algorithm that estimates the economically optimal doses. The experiment was designed to compare herbicide treatments with DAPS recommendations and the Danish decision support system PC-Plant Protection. The results did not show any significant grain yield difference...

  14. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], and an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].

  15. Streaming Word Embeddings with the Space-Saving Algorithm

    OpenAIRE

    May, Chandler; Duh, Kevin; Van Durme, Benjamin; Lall, Ashwin

    2017-01-01

    We develop a streaming (one-pass, bounded-memory) word embedding algorithm based on the canonical skip-gram with negative sampling algorithm implemented in word2vec. We compare our streaming algorithm to word2vec empirically by measuring the cosine similarity between word pairs under each algorithm and by applying each algorithm in the downstream task of hashtag prediction on a two-month interval of the Twitter sample stream. We then discuss the results of these experiments, concluding they p...
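
    The Space-Saving algorithm itself is compact enough to sketch. The following is a standard textbook rendering of its counter-eviction rule (not the paper's word-embedding integration): at most k counters are kept, and a new item inherits and increments the smallest counter when the table is full, so any item's estimate is an overcount by at most N/k:

    ```python
    def space_saving(stream, k):
        """Space-Saving heavy-hitters sketch with at most k counters.
        When a new item arrives and the table is full, evict the item
        with the smallest count and let the newcomer inherit (and
        increment) that count."""
        counts = {}
        for item in stream:
            if item in counts:
                counts[item] += 1
            elif len(counts) < k:
                counts[item] = 1
            else:
                victim = min(counts, key=counts.get)
                counts[item] = counts.pop(victim) + 1
        return counts
    ```

    The bounded-memory guarantee (k counters regardless of vocabulary size) is what makes this sketch a natural fit for deciding, in one pass, which words deserve embedding vectors.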

  16. Adaptive Central Force Optimization Algorithm Based on the Stability Analysis

    Directory of Open Access Journals (Sweden)

    Weiyi Qian

    2015-01-01

    Full Text Available In order to enhance the convergence capability of the central force optimization (CFO) algorithm, an adaptive central force optimization (ACFO) algorithm is presented by introducing an adaptive weight and defining an adaptive gravitational constant. The adaptive weight and gravitational constant are selected based on the stability theory of discrete time-varying dynamic systems. The convergence capability of the ACFO algorithm is compared with another improved CFO algorithm and an evolutionary-based algorithm using 23 unimodal and multimodal benchmark functions. Experimental results show that ACFO substantially enhances the performance of CFO in terms of global optimality and solution accuracy.

  17. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    Science.gov (United States)

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p < …). The Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
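
    The Pareto front used here can be made concrete with a small sketch. Assuming detection performance is expressed as objectives to be maximized (e.g., sensitivity and negated false-positive rate, which is my framing, not the paper's), the non-dominated set is extracted as follows:

    ```python
    def pareto_front(points):
        """Return the non-dominated points, maximizing every objective.
        A point p dominates q if p >= q in all objectives and p > q in
        at least one; to minimize an objective, negate it first."""
        def dominates(p, q):
            return (all(a >= b for a, b in zip(p, q))
                    and any(a > b for a, b in zip(p, q)))
        return [p for p in points
                if not any(dominates(q, p) for q in points)]
    ```

    Comparing two algorithms by their Pareto fronts, rather than by a single operating point, is what lets the tuning experiments trade sensitivity against false positives systematically.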

  18. A new modified fast fractal image compression algorithm

    DEFF Research Database (Denmark)

    Salarian, Mehdi; Nadernejad, Ehsan; MiarNaimi, Hossein

    2013-01-01

    In this paper, a new fractal image compression algorithm is proposed, in which the time of the encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with the use of innovative predefined values for the contrast scaling factor, S, instead of searching for it. Only the domain blocks with entropy greater than a threshold are considered to belong to the domain pool. The algorithm has been tested on some well-known images and the results have been compared with those of state-of-the-art algorithms. The experiments show that our proposed algorithm has...

  19. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
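    The attraction step at the heart of the standard firefly algorithm (the step that variants such as MoFA modify) can be sketched as follows; beta0, gamma and alpha are the usual attractiveness, light-absorption and randomization parameters, with illustrative defaults:

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2, rng=random):
    """Move the firefly at xi toward a brighter firefly at xj.
    Attractiveness decays exponentially with squared distance;
    alpha adds a small random walk (illustrative parameters)."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)  # attractiveness at distance r
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(xi, xj)]
```

    With gamma = 0 and alpha = 0 the firefly jumps exactly onto the brighter one; positive gamma weakens the pull with distance.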

  20. Formal verification of algorithms for critical systems

    Science.gov (United States)

    Rushby, John M.; Von Henke, Friedrich

    1993-01-01

    We describe our experience with formal, machine-checked verification of algorithms for critical applications, concentrating on a Byzantine fault-tolerant algorithm for synchronizing the clocks in the replicated computers of a digital flight control system. First, we explain the problems encountered in unsynchronized systems and the necessity, and criticality, of fault-tolerant synchronization. We give an overview of one such algorithm, and of the arguments for its correctness. Next, we describe a verification of the algorithm that we performed using our EHDM system for formal specification and verification. We indicate the errors we found in the published analysis of the algorithm, and other benefits that we derived from the verification. Based on our experience, we derive some key requirements for a formal specification and verification system adequate to the task of verifying algorithms of the type considered. Finally, we summarize our conclusions regarding the benefits of formal verification in this domain, and the capabilities required of verification systems in order to realize those benefits.

  1. Exact colouring algorithm for weighted graphs applied to timetabling problems with lectures of different lengths

    NARCIS (Netherlands)

    Cangalovic, Mirjana; Schreuder, J.A.M.

    1991-01-01

    An exact algorithm is presented for determining the interval chromatic number of a weighted graph. The algorithm is based on enumeration and the Branch-and-Bound principle. Computational experiments with the application of the algorithm to random weighted graphs are given. The algorithm and its

  2. Põrand kannab sisustust / Eve Variksoo

    Index Scriptorium Estoniae

    Variksoo, Eve

    1998-01-01

    On old and new floor coverings, such as plank floors, board parquet, strip parquet, laminate flooring, linoleum, PVC floor coverings, stone floors, cork coverings, carpeting, sisal, coir and ceramic tiles. On materials suitable and unsuitable for underfloor heating. On the charm of wooden floors.

  3. Sotsiaalkapitali roll majandusarengus / Eve Parts

    Index Scriptorium Estoniae

    Parts, Eve, 1970-

    2006-01-01

    On the possibilities of incorporating social capital as an independent additional variable in traditional growth models, on its possible underlying mechanisms of influence, and on its effect on human capital and on achieving development goals related to institutional efficiency and social welfare. Diagrams. Tables.

  4. Kirikurenessanss kirikust eemalolijale / Eve Veigel

    Index Scriptorium Estoniae

    Veigel, Eve, 1961-

    2005-01-01

    On St. Margaret's Church in Karuse. The pulpit by Christian Ackermann dates from 1697, the ceiling chandelier from the 16th century. The 18th-century altar is adorned with a copy of a painting by Albrecht Dürer. The organ was restored in 1995 during organ-repair master courses held under the direction of the German organ builder Gerhard Schmied. Churches in Estonia. 5 colour illustrations.

  5. Onu Bella kollektsiooniaed / Eve Veigel

    Index Scriptorium Estoniae

    Veigel, Eve, 1961-

    2005-01-01

    On the home garden in Tartu of Onu Bella (Igor Maasik), singer and host of the Sunday programme "Vox Humana" on the Põlva radio station Marta FM. Onu Bella has grown roses and shaped conifers, and is currently establishing a miniature arboretum. 12 illustrations.

  6. At the Eve of Convergence?

    DEFF Research Database (Denmark)

    Henriksen, Lars Skov; Zimmer, Annette; Smith, Steven Rathgeb

    Specifically, we will investigate whether and to what extent social services and health care in these three countries are affected by current changes. With a special focus on nonprofit organizations, we will particularly address the question whether a trend towards convergence of the very different welfare...

  7. Lõhe Res Publicas / Eve Heinla

    Index Scriptorium Estoniae

    Heinla, Eve, 1966-

    2005-01-01

    Tallinn mayor Tõnis Palts opposes the appointment of Kristina Tauts, wife of Toomas Tauts, chairman of the Res Publica faction of the Tallinn city council, as administrative director of the Lääne-Tallinn Central Hospital. On T. Palts's letter to the board of Res Publica's Tallinn chapter. Sidebar: how the power struggle in the capital flared up. Comments by Maret Maripuu, Tõnis Palts and Vladimir Maslov.

  8. Pedagoogiline praktika õpetajakoolituses / Eve Eisenschmidt

    Index Scriptorium Estoniae

    Eisenschmidt, Eve, 1963-

    2003-01-01

    Solutions for making teaching practice in teacher training more effective. To assess the importance of teaching practice in shaping students' professional self-concept and to clarify the supervisor's role, third-year class-teacher students of Haapsalu College, who completed a five-week practice at the first school stage in February-March 2002, were surveyed.

  9. Aed nagu aare / Eve Veigel

    Index Scriptorium Estoniae

    Veigel, Eve, 1961-

    2013-01-01

    Monika Kannelmäe received a finalist diploma in the competition "Kodu kauniks 2012" for preserving the garden-arboretum established by her father Aksel Kurt and garden designer Irma Tungla in Rätla village, Raasiku municipality, Harju County.

  10. A comprehensive review of swarm optimization algorithms.

    Directory of Open Access Journals (Sweden)

    Mohd Nadhir Ab Wahab

    Full Text Available Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significantly better performers. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches.

  11. A comprehensive review of swarm optimization algorithms.

    Science.gov (United States)

    Ab Wahab, Mohd Nadhir; Nefti-Meziani, Samia; Atyabi, Adham

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine the significantly better performers. The results indicate the overall advantage of Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other considered approaches.
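    The top performer reported here, Differential Evolution, is compact enough to sketch; one generation of the classic DE/rand/1/bin scheme, with common default parameters (illustrative, not the survey's implementation):

```python
import random

def de_step(pop, fitness, f=0.5, cr=0.9, rng=random):
    """One generation of DE/rand/1/bin for minimization:
    mutate with a scaled difference of two random individuals,
    binomially cross over with the target, keep the fitter vector."""
    dim = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        others = [p for k, p in enumerate(pop) if k != i]
        a, b, c = rng.sample(others, 3)
        mutant = [a[d] + f * (b[d] - c[d]) for d in range(dim)]
        j_rand = rng.randrange(dim)  # guarantee at least one mutant gene
        trial = [mutant[d] if (rng.random() < cr or d == j_rand) else target[d]
                 for d in range(dim)]
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop
```

    Because selection is greedy per individual, the best fitness in the population never gets worse from one generation to the next.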

  12. Detection of Illegitimate Emails using Boosting Algorithm

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    In this paper, we report on experiments to detect illegitimate emails using a boosting algorithm. We call an email illegitimate if it is not useful for the receiver or for society. We have divided the problem into two major areas of illegitimate email detection: suspicious email detection and spam email detection. For our desired task, we have applied a boosting technique. With the use of boosting we can achieve high accuracy with traditional classification algorithms. When using boosting one has to choose a suitable weak learner as well as the number of boosting iterations. In this paper, we propose suitable weak learners and parameter settings for the boosting algorithm for the desired task. We have initially analyzed the problem using base learners. Then we have applied the boosting algorithm with suitable weak learners and parameter settings such as the number of boosting iterations. We...

  13. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.

  14. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  15. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  16. Quantum algorithmic information theory

    OpenAIRE

    Svozil, Karl

    1995-01-01

    The agenda of quantum algorithmic information theory, ordered `top-down,' is the quantum halting amplitude, followed by the quantum algorithmic information content, which in turn requires the theory of quantum computation. The fundamental atoms processed by quantum computation are the quantum bits which are dealt with in quantum information theory. The theory of quantum computation will be based upon a model of universal quantum computer whose elementary unit is a two-port interferometer capa...

  17. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization ProblemOptimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  18. Image Segmentation Algorithms Overview

    OpenAIRE

    Yuheng, Song; Hao, Yan

    2017-01-01

    The technology of image segmentation is widely used in medical image processing, face recognition pedestrian detection, etc. The current image segmentation techniques include region-based segmentation, edge detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNN, etc. This paper analyzes and summarizes these algorithms of image segmentation, and compares the advantages and disadvantages of different algorithms. Finally, we make a predi...

  19. Genetic algorithm in chemistry.

    OpenAIRE

    da Costa, PA; Poppi, RJ

    1999-01-01

    Genetic algorithm is an optimization technique based on Darwin evolution theory. In last years its application in chemistry is increasing significantly due the special characteristics for optimization of complex systems. The basic principles and some further modifications implemented to improve its performance are presented, as well as a historical development. A numerical example of a function optimization is also shown to demonstrate how the algorithm works in an optimization process. Final...

  20. Adaptive Genetic Algorithm

    OpenAIRE

    Jakobović, Domagoj; Golub, Marin

    1999-01-01

    In this paper we introduce an adaptive, 'self-contained' genetic algorithm (GA) with steady-state selection. This variant of GA utilizes empirically based methods for calculating its control parameters. The adaptive algorithm estimates the percentage of the population to be replaced with new individuals (generation gap). It chooses the solutions for crossover and varies the number of mutations, all regarding the current population state. The state of the population is evaluated by observing s...

  1. Fluid Genetic Algorithm (FGA)

    OpenAIRE

    Jafari-Marandi, Ruholla; Smith, Brian K.

    2017-01-01

    Genetic Algorithm (GA) has been one of the most popular methods for many challenging optimization problems when exact approaches are too computationally expensive. A review of the literature shows extensive research attempting to adapt and develop the standard GA. Nevertheless, the essence of GA which consists of concepts such as chromosomes, individuals, crossover, mutation, and others rarely has been the focus of recent researchers. In this paper method, Fluid Genetic Algorithm (FGA), some ...

  2. Algorithmic Relative Complexity

    OpenAIRE

    Daniele Cerra; Mihai Datcu

    2011-01-01

    Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity ...

  3. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in many aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning systems offer stability, small error and low cost, so their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms at several levels. First, several common basic RFID positioning methods are introduced; secondly, a higher-accuracy network-based location method is discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
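    The LANDMARC algorithm mentioned above locates a tag by weighting the k reference tags whose RSSI readings are most similar to the tracked tag's; a simplified sketch (the data layout and function names are assumptions, not the paper's code):

```python
import math

def landmarc_locate(tag_rssi, ref_tags, k=4):
    """LANDMARC-style nearest-neighbour location estimate.
    ref_tags maps tag name -> (rssi_vector, (x, y)); the k reference
    tags closest to the tracked tag in RSSI space are weighted by
    inverse squared RSSI distance."""
    dists = sorted(
        (math.dist(tag_rssi, rssi), pos) for rssi, pos in ref_tags.values()
    )
    nearest = dists[:k]
    eps = 1e-9  # avoid division by zero on an exact RSSI match
    weights = [1.0 / (e * e + eps) for e, _ in nearest]
    total = sum(weights)
    x = sum(w * pos[0] for w, (_, pos) in zip(weights, nearest)) / total
    y = sum(w * pos[1] for w, (_, pos) in zip(weights, nearest)) / total
    return x, y
```

    A tag whose RSSI vector matches one reference tag exactly is placed essentially on top of it, since that reference dominates the weights.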

  4. Breast cancer screening in the era of density notification legislation: summary of 2014 Massachusetts experience and suggestion of an evidence-based management algorithm by multi-disciplinary expert panel.

    Science.gov (United States)

    Freer, Phoebe E; Slanetz, Priscilla J; Haas, Jennifer S; Tung, Nadine M; Hughes, Kevin S; Armstrong, Katrina; Semine, A Alan; Troyan, Susan L; Birdwell, Robyn L

    2015-09-01

    Stemming from breast density notification legislation in Massachusetts effective 2015, we sought to develop a collaborative evidence-based approach to density notification that could be used by practitioners across the state. Our goal was to develop an evidence-based consensus management algorithm to help patients and health care providers follow best practices to implement a coordinated, evidence-based, cost-effective, sustainable practice and to standardize care in recommendations for supplemental screening. We formed the Massachusetts Breast Risk Education and Assessment Task Force (MA-BREAST), a multi-institutional, multi-disciplinary panel of expert radiologists, surgeons, primary care physicians, and oncologists to develop a collaborative approach to density notification legislation. Using evidence-based data from the Institute for Clinical and Economic Review, the Cochrane review, National Comprehensive Cancer Network guidelines, American Cancer Society recommendations, and American College of Radiology appropriateness criteria, the group collaboratively developed an evidence-based best-practices algorithm. The expert consensus algorithm uses breast density as one element in the risk stratification to determine the need for supplemental screening. Women with dense breasts and otherwise low risk (<…% lifetime) …; women at high risk (>20% lifetime) should consider supplemental screening MRI in addition to routine mammography regardless of breast density. We report the development of the multi-disciplinary collaborative approach to density notification. We propose a risk stratification algorithm to assess personal level of risk to determine the need for supplemental screening for an individual woman.

  5. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks

    Directory of Open Access Journals (Sweden)

    Nazir Ahmad Zafar

    2012-08-01

    Full Text Available Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR into a corresponding formal specification using Z notation. The formal specification is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.

  6. An empirical study on SAJQ (Sorting Algorithm for Join Queries

    Directory of Open Access Journals (Sweden)

    Hassan I. Mathkour

    2010-06-01

    Full Text Available Most queries applied on database management systems (DBMS) depend heavily on the performance of the sorting algorithm used. Besides efficiency, stability is a major feature needed of sorting algorithms that perform DBMS queries. In this paper, we study a new Sorting Algorithm for Join Queries (SAJQ) that has both advantages of being efficient and stable. The proposed algorithm takes advantage of the m-way-merge algorithm to enhance its time complexity. SAJQ performs the sorting operation in a time complexity of O(n log m), where n is the length of the input array and m is the number of sub-arrays used in sorting. An unsorted input array of length n is arranged into m sorted sub-arrays. The m-way-merge algorithm merges the m sorted sub-arrays into the final sorted output array. The proposed algorithm keeps the stability of the keys intact. An analytical proof has been conducted to show that, in the worst case, the proposed algorithm has a complexity of O(n log m). Also, a set of experiments has been performed to investigate the performance of the proposed algorithm. The experimental results have shown that the proposed algorithm outperforms other stable sorting algorithms designed for join-based queries.
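    The m-way-merge phase behind the O(n log m) bound can be sketched with a heap, using the run index as a tie-break so equal keys keep their input order (the stability property the abstract stresses); a generic sketch, not the paper's code:

```python
import heapq

def m_way_merge(sorted_runs, key=lambda x: x):
    """Stable m-way merge of pre-sorted runs.  Heap entries are
    (key, run_index, position): on equal keys the lower run index
    wins, preserving the input order of equal keys."""
    heap = []
    for run_idx, run in enumerate(sorted_runs):
        if run:
            heapq.heappush(heap, (key(run[0]), run_idx, 0))
    out = []
    while heap:
        _, run_idx, pos = heapq.heappop(heap)
        out.append(sorted_runs[run_idx][pos])
        if pos + 1 < len(sorted_runs[run_idx]):
            nxt = sorted_runs[run_idx][pos + 1]
            heapq.heappush(heap, (key(nxt), run_idx, pos + 1))
    return out
```

    Each of the n output elements costs one O(log m) heap operation, giving the O(n log m) total.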

  7. HLRF-BFGS-Based Algorithm for Inverse Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Rakul Bharatwaj Ramesh

    2017-01-01

    Full Text Available This study proposes an algorithm to solve inverse reliability problems with a single unknown parameter. The proposed algorithm is based on an existing algorithm, the inverse first-order reliability method (inverse-FORM), which uses the Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm. The initial algorithm analyzed in this study was developed by modifying the HLRF algorithm in inverse-FORM to use the Broyden-Fletcher-Goldfarb-Shanno (BFGS) update formula throughout. Based on numerical experiments, this modification was found to be more efficient than inverse-FORM when applied to most of the limit state functions considered in this study, as it requires a comparatively smaller number of iterations to arrive at the solution. However, to achieve this higher computational efficiency, the modified algorithm sometimes compromised the accuracy of the final solution. To overcome this drawback, a hybrid method using both algorithms, the original HLRF algorithm and the modified algorithm with the BFGS update formula, is proposed. This hybrid algorithm achieves better computational efficiency, compared to inverse-FORM, without compromising the accuracy of the final solution. Comparative numerical examples are provided to demonstrate the improved performance of this hybrid algorithm over that of inverse-FORM in terms of accuracy and efficiency.

  8. Development of Navigation Control Algorithm for AGV Using D* search Algorithm

    Directory of Open Access Journals (Sweden)

    Jeong Geun Kim

    2013-06-01

    Full Text Available In this paper, we present a navigation control algorithm for Automatic Guided Vehicles (AGVs) that move in industrial environments containing static and moving obstacles, using the D* algorithm. This algorithm can plan paths in unknown, partially known and changing environments efficiently. To apply the D* search algorithm, a grid map representing the known environment is generated. Using the laser scanner LMS-151 and the laser navigation sensor NAV-200, the grid map is updated as the environment and obstacles change. When the AGV discovers new map information, such as previously unknown obstacles, it adds the information to its map and re-plans a new shortest path from its current coordinates to the given goal coordinates. It repeats the process until it reaches the goal coordinates. The algorithm is verified through simulation and experiment. The simulation and experimental results show that the algorithm can move the AGV successfully to the goal position while avoiding unknown moving and static obstacles. [Keywords: navigation control algorithm; Automatic Guided Vehicles (AGV); D* search algorithm]
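    The full D* algorithm incrementally repairs a path as map cells change; as a minimal building block, here is the from-scratch shortest-path search on a 4-connected occupancy grid that such a planner maintains (illustrative only, not the paper's implementation):

```python
from collections import deque

def grid_shortest_path(grid, start, goal):
    """Breadth-first shortest path on a 4-connected occupancy grid
    (0 = free, 1 = obstacle).  D* incrementally repairs such a path
    when the map changes; this sketch simply replans from scratch."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:  # reconstruct path by walking back-pointers
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None  # goal unreachable
```

    When a sensor reveals a new obstacle, a naive planner reruns this search; D* avoids the full rerun by repairing only the affected part of the search tree.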

  9. Fireworks algorithm for mean-VaR/CVaR models

    Science.gov (United States)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, named fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, fireworks algorithm not only improves the optimization accuracy and the optimization speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. It suggests that fireworks algorithm has more advantages than genetic algorithm in solving the portfolio optimization problem, and it is feasible and promising to apply it into this field.
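    The risk measures being optimized here can be illustrated with their historical estimators; a generic sketch (not the paper's formulation):

```python
def var_cvar(returns, alpha=0.95):
    """Historical Value-at-Risk and Conditional VaR at level alpha.
    returns is a series of period returns; losses are positive."""
    losses = sorted(-r for r in returns)          # ascending losses
    idx = min(int(alpha * len(losses)), len(losses) - 1)
    var = losses[idx]                             # alpha-quantile loss
    tail = losses[idx:]                           # losses at or beyond VaR
    cvar = sum(tail) / len(tail)                  # mean tail loss
    return var, cvar
```

    CVaR is the average of losses at or beyond the VaR threshold, so it is never smaller than VaR; a mean-VaR/CVaR model trades expected return against one of these tail measures.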

  10. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Science.gov (United States)

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
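    The βk ≥ 0 property corresponds to the classical PRP+ truncation βk = max(0, gₖ₊₁ᵀ(gₖ₊₁ − gₖ)/gₖᵀgₖ); a sketch of PRP+ conjugate gradient on a quadratic with exact line search (illustrative of the βk ≥ 0 idea, not the paper's two methods):

```python
def prp_plus_cg(A, b, x0, iters=50, tol=1e-10):
    """PRP+ conjugate gradient on f(x) = 0.5 x^T A x - b^T x with
    exact line search.  beta is the PRP formula truncated at zero."""
    def mv(M, v):  # matrix-vector product
        return [sum(m * vj for m, vj in zip(row, v)) for row in M]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    x = list(x0)
    g = [gi - bi for gi, bi in zip(mv(A, x), b)]  # gradient: A x - b
    d = [-gi for gi in g]
    for _ in range(iters):
        Ad = mv(A, d)
        curvature = dot(d, Ad)
        if curvature <= tol * tol:
            break
        alpha = -dot(g, d) / curvature            # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi - bi for gi, bi in zip(mv(A, x), b)]
        if dot(g_new, g_new) < tol * tol:
            return x
        prp = dot(g_new, [gn - gi for gn, gi in zip(g_new, g)]) / dot(g, g)
        beta = max(0.0, prp)                      # enforce beta_k >= 0
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x
```

    On a 2-by-2 positive-definite quadratic this converges in two iterations, as conjugate gradient theory predicts.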

  11. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    Directory of Open Access Journals (Sweden)

    Gonglin Yuan

    Full Text Available Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

  12. An Image Encryption Algorithm Based on Information Hiding

    Science.gov (United States)

    Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu

    Aiming at resolving the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed based on the “one-time pad” idea. A random parameter is introduced to ensure a different keystream for each encryption, which has the characteristics of “one-time pad”, improving the security of the algorithm rapidly without significant increase in algorithm complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids negotiation for its transport and makes the application of the algorithm easier. Algorithm analysis and experiments show that the algorithm is secure against chosen plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.
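    The "one-time pad" idea described above, a fresh random parameter per encryption yielding a different keystream each time, can be illustrated generically. This sketch uses a SHA-256 counter keystream rather than the paper's chaotic cipher, and all names are assumptions:

```python
import os
import hashlib

def _keystream(key, nonce, length):
    """SHA-256 counter-mode keystream (generic sketch)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(data, key):
    nonce = os.urandom(16)  # fresh random parameter per encryption
    stream = _keystream(key, nonce, len(data))
    cipher = bytes(d ^ s for d, s in zip(data, stream))
    return nonce, cipher    # the nonce travels with the ciphertext

def decrypt(nonce, cipher, key):
    stream = _keystream(key, nonce, len(cipher))
    return bytes(c ^ s for c, s in zip(cipher, stream))
```

    In the paper's scheme the random parameter is hidden inside the ciphered image with information-hiding techniques instead of being transmitted alongside it; the effect is the same, as encrypting identical plaintexts twice yields different ciphertexts.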

  13. Photovoltaic Cells Mppt Algorithm and Design of Controller Monitoring System

    Science.gov (United States)

    Meng, X. Z.; Feng, H. B.

    2017-10-01

    This paper combines the advantages of several maximum power point tracking (MPPT) algorithms and puts forward an algorithm with higher speed and higher precision; based on this algorithm, an ARM-based maximum power point tracking controller was designed. The controller, communication technology and PC software form a control system. Results of the simulation and experiment showed that the maximum power tracking process was effective and the system was stable.
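    A common baseline among the MPPT algorithms such a controller combines is perturb-and-observe; a minimal sketch of one control step (illustrative, not the paper's improved algorithm):

```python
def po_mppt_step(state, v_meas, i_meas, dv=0.1):
    """One perturb-and-observe MPPT step: keep perturbing the panel's
    operating voltage in the direction that last increased power, and
    reverse when power drops.  state carries the previous power and
    the current perturbation direction."""
    power = v_meas * i_meas
    if power < state["last_p"]:
        state["dir"] = -state["dir"]  # power fell, so reverse direction
    state["last_p"] = power
    return v_meas + state["dir"] * dv  # next voltage set-point
```

    Run in a loop against the panel's power curve, the set-point climbs to the maximum power point and then oscillates around it with amplitude dv; reducing dv near the peak is one of the refinements hybrid MPPT algorithms add.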

  14. Execution of VHDL Models Using Parallel Discrete Event Simulation Algorithms

    OpenAIRE

    Ashenden, Peter J.; Henry Detmold; McKeen, Wayne S.

    1994-01-01

    In this paper, we discuss the use of parallel discrete event simulation (PDES) algorithms for execution of hardware models written in VHDL. We survey central event queue, conservative distributed and optimistic distributed PDES algorithms, and discuss aspects of the semantics of VHDL and VHDL-92 that affect the use of these algorithms in a VHDL simulator. Next, we describe an experiment performed as part of the Vsim Project at the University of Adelaide, in which a simulation kernel using the...

  15. Research on laser marking speed optimization by using genetic algorithm.

    Science.gov (United States)

    Wang, Dongyun; Yu, Qiwei; Zhang, Yu

    2015-01-01

    Laser Marking Machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a bottleneck of production. In order to remove this bottleneck, a new method based on a genetic algorithm is designed. On the basis of this algorithm, a controller was designed and simulations and experiments were performed. The results show that using this algorithm could effectively improve laser marking efficiency by 25%.

  16. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Although the concept of algorithms was established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly divided out. An observation is attempted of the defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.

  17. Quantum Search Algorithms

    Science.gov (United States)

    Korepin, Vladimir E.; Xu, Ying

    This article reviews recent progress in quantum database search algorithms. The subject is presented in a self-contained and pedagogical way. The problem of searching a large database (a Hilbert space) for a target item is performed by the famous Grover algorithm which locates the target item with high probability and a quadratic speed-up compared with the corresponding classical algorithm. If the database is partitioned into blocks and one is searching for the block containing the target item instead of the target item itself, then the problem is referred to as partial search. Partial search trades accuracy for speed and the most efficient version is the Grover-Radhakrishnan-Korepin (GRK) algorithm. The target block can be further partitioned into sub-blocks so that GRK's can be performed in a sequence called a hierarchy. We study the Grover search and GRK partial search in detail and prove that a GRK hierarchy is less efficient than a direct GRK partial search. Both the Grover search and the GRK partial search can be generalized to the case with several target items (or target blocks for a GRK). The GRK partial search algorithm can also be represented in terms of group theory.
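    The amplitude dynamics described here are easy to verify classically. The sketch below simulates one Grover iteration as an oracle sign flip followed by inversion about the mean, and checks that roughly (π/4)√N iterations drive the target probability close to one:

```python
import math

def grover_probability(n_items, target, iterations):
    """Classically track the amplitude vector through Grover iterations:
    oracle sign flip on the target, then inversion about the mean."""
    amps = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amps[target] = -amps[target]              # oracle marks the target
        mean = sum(amps) / n_items
        amps = [2.0 * mean - a for a in amps]     # diffusion (inversion about mean)
    return amps[target] ** 2

N = 64
k = round(math.pi / 4 * math.sqrt(N))             # about (pi/4) * sqrt(N) queries
p = grover_probability(N, target=5, iterations=k)
```

For N = 64 this gives k = 6 iterations and a success probability above 0.99, against the 32 expected queries of classical linear search, which is the quadratic speed-up the review refers to.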

  18. Distribution agnostic structured sparsity recovery algorithms

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2013-05-01

    We present an algorithm and its variants for sparse signal recovery from a small number of its measurements in a distribution agnostic manner. The proposed algorithm finds a Bayesian estimate of the sparse signal to be recovered and at the same time is indifferent to the actual distribution of its non-zero elements. Termed Support Agnostic Bayesian Matching Pursuit (SABMP), the algorithm also has the capability of refining the estimates of the signal and required parameters in the absence of the exact parameter values. The inherent feature of the algorithm of being agnostic to the distribution of the data grants it the flexibility to adapt itself to several related problems. Specifically, we present two important extensions to this algorithm. One extension handles the problem of recovering sparse signals having block structures, while the other handles multiple measurement vectors to jointly estimate the related unknown signals. We conduct extensive experiments to show that SABMP and its variants have superior performance to most of the state-of-the-art algorithms, and at low computational expense. © 2013 IEEE.
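    SABMP's Bayesian machinery is not reproduced in this record, but the greedy support-selection loop it shares with the matching-pursuit family can be illustrated with plain orthogonal matching pursuit on a synthetic dictionary (the data below are invented for illustration):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily add the dictionary column most
    correlated with the residual, then re-fit the selected coefficients by
    least squares. (A baseline greedy scheme; SABMP's Bayesian refinement
    step is not reproduced here.)"""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary columns
x_true = np.zeros(100)
x_true[[3, 50, 77]] = [1.5, -2.0, 1.0]      # 3-sparse signal, 40 measurements
x_hat = omp(Phi, Phi @ x_true, k=3)
```

The distribution-agnostic point of the paper is precisely that the recovery loop above never uses a prior on the non-zero values; SABMP adds Bayesian support refinement on top of this kind of greedy skeleton.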

  19. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    Science.gov (United States)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and the simulated driver. One of the main limitations of the classical washout filters is that they are tuned by the worst-case-scenario method, which is based on trial and error and affected by the driving and programming experience of the tuner; this is the most significant obstacle to full motion platform utilisation. It leads to inflexibility of the structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account, so the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking the vestibular sensation error between the real and simulated cases into account, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  20. Lightning detection and exposure algorithms for smartphones

    Science.gov (United States)

    Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining

    2015-05-01

    This study focuses on the key theory of lightning detection and exposure, and on the corresponding experiments. Firstly, an algorithm based on differencing two adjacent frames is selected to remove the background and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Secondly, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. Subsequently, a look-up table is built from the relationship between exposure value and average image brightness to achieve rapid automatic exposure. Finally, a hardware test platform is established around a USB 3.0 industrial camera with a CMOS imaging sensor, and experiments are carried out on this platform to verify the performance of the proposed algorithms. The algorithms can effectively and quickly capture clear lightning pictures, such as in special nighttime scenes, which will provide beneficial support to the smartphone industry, since the current exposure methods in smartphones often fail to capture, or produce overexposed or underexposed pictures.
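    The detection stage described (inter-frame differencing plus thresholding) fits in a few lines; the threshold and the changed-pixel fraction below are illustrative values, not those of the paper:

```python
import numpy as np

def detect_lightning(prev_frame, frame, threshold=50, min_fraction=0.01):
    """Difference two adjacent frames to suppress the static background,
    then threshold: the event flag fires when enough pixels changed."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    return mask.mean() > min_fraction, mask   # event flag, per-pixel detection map

# Synthetic night scene followed by a frame containing a bright flash region.
night = np.full((120, 160), 10, dtype=np.uint8)
flash = night.copy()
flash[30:90, 40:120] = 200
event, mask = detect_lightning(night, flash)
```

Casting to a signed type before subtracting avoids the uint8 wrap-around that would otherwise corrupt the difference image; the paper's exposure look-up table would then be consulted only when the event flag fires.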

  1. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  2. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  3. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  4. A Subspace Algorithm

    DEFF Research Database (Denmark)

    Vissing, S.; Hededal, O.

    An algorithm is presented for computing the m smallest eigenvalues and corresponding eigenvectors of the generalized eigenvalue problem (A - λB)Φ = 0, where A and B are real n x n symmetric matrices. In an iteration scheme the matrices A and B are projected simultaneously onto an m-dimensional subspace in order to establish and solve a symmetric generalized eigenvalue problem in the subspace. The algorithm is described in pseudo code and implemented in the C programming language for lower triangular matrices A and B. The implementation includes procedures for selecting initial iteration vectors
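    The record does not reproduce the paper's pseudo code, but the projection scheme it describes corresponds to classical subspace iteration; a minimal numerical sketch (assuming A and B symmetric positive definite, and solving the reduced m x m problem via a Cholesky factorization of the projected B) is:

```python
import numpy as np

def subspace_smallest(A, B, m, iterations=50):
    """Subspace iteration for the m smallest eigenpairs of (A - lambda*B)phi = 0,
    assuming A and B are symmetric positive definite."""
    n = A.shape[0]
    X = np.random.default_rng(0).standard_normal((n, m))
    for _ in range(iterations):
        X = np.linalg.solve(A, B @ X)        # inverse-iteration step
        X, _ = np.linalg.qr(X)               # keep the basis well conditioned
        # Project A and B onto the subspace and solve the reduced m x m problem.
        Ar, Br = X.T @ A @ X, X.T @ B @ X
        L = np.linalg.cholesky(Br)
        C = np.linalg.solve(L, np.linalg.solve(L, Ar).T).T   # inv(L) Ar inv(L).T
        vals, W = np.linalg.eigh((C + C.T) / 2)
        X = X @ np.linalg.solve(L.T, W)      # Ritz vectors in the full space
    return vals, X

# Toy check: A = diag(1..6), B = 2I has generalized eigenvalues 0.5, 1.0, 1.5, ...
vals, X = subspace_smallest(np.diag(np.arange(1.0, 7.0)), 2.0 * np.eye(6), m=2)
```

Solving with A (rather than multiplying by it) makes the power step converge toward the smallest generalized eigenvalues, which is why the abstract speaks of the m smallest eigenpairs.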

  5. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity mention in text and linking it to an entity in a knowledge base (for example, DBpedia). Currently there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph and machine learning approaches is proposed, according to the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on some knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. An independent solution based on machine learning algorithms cannot be built due to the small volume of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally on an independently generated test dataset, comparing its performance with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which indicates the prospects for work in this direction. The main directions of development are proposed in order to increase the accuracy and productivity of the system.

  6. Algorithmization in Learning and Instruction.

    Science.gov (United States)

    Landa, L. N.

    An introduction to the theory of algorithms reviews the theoretical issues of teaching algorithms, the logical and psychological problems of devising algorithms of identification, and the selection of efficient algorithms; and then relates all of these to the classroom teaching process. It also describes some major research on the effectiveness of…

  7. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  8. Wireless communications algorithmic techniques

    CERN Document Server

    Vitetta, Giorgio; Colavolpe, Giulio; Pancaldi, Fabrizio; Martin, Philippa A

    2013-01-01

    This book introduces the theoretical elements at the basis of various classes of algorithms commonly employed in the physical layer (and, in part, in the MAC layer) of wireless communications systems. It focuses on single-user systems, thus ignoring multiple access techniques. Moreover, emphasis is put on single-input single-output (SISO) systems, although some relevant topics about multiple-input multiple-output (MIMO) systems are also illustrated. Comprehensive wireless-specific guide to algorithmic techniques. Provides a detailed analysis of channel equalization and channel coding for wi

  9. Algorithm for structure constants

    CERN Document Server

    Paiva, F M

    2011-01-01

    In an $n$-dimensional Lie algebra, random numerical values are assigned by computer to $n(n-1)$ especially selected structure constants. An algorithm is then created which calculates, without ambiguity, the remaining constants, obeying the Jacobi conditions. Differently from others, this algorithm is suitable even for a modest personal computer.

  10. Secondary Vertex Finder Algorithm

    CERN Document Server

    Heer, Sebastian; The ATLAS collaboration

    2017-01-01

    If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with simulated tt̄ events at 13 TeV, requiring at least one lepton.

  11. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.

  12. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network … in the latter model implies optimality in the decomposable bulk synchronous parallel model, which is known to effectively describe a wide and significant class of parallel platforms. The proposed framework can be regarded as an attempt to port the notion of obliviousness, well established in the context …

  13. A Generalized Jacobi Algorithm

    DEFF Research Database (Denmark)

    Vissing, S.; Krenk, S.

    An algorithm is developed for the generalized eigenvalue problem (A - λB)φ = O where A and B are real symmetric matrices. The matrices A and B are diagonalized simultaneously by a series of generalized Jacobi transformations and all eigenvalues and eigenvectors are obtained. A criterion expressed in terms of the transformation parameters is used to omit transformations leading to very small changes. The algorithm is described in pseudo code for lower triangular matrices A and B and implemented in the programming language C.
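    For the special case B = I the method reduces to the classical cyclic Jacobi iteration, which conveys the structure of the algorithm; the criterion for omitting transformations that change almost nothing appears below as a simple off-diagonal tolerance (the generalized two-matrix transformation of the paper is not reproduced):

```python
import numpy as np

def jacobi_eigen(A, sweeps=10, tol=1e-12):
    """Cyclic Jacobi iteration for a real symmetric matrix (the classical
    B = I special case): plane rotations successively annihilate the
    off-diagonal entries; rotations with |A[p, q]| <= tol are omitted."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) <= tol:
                    continue                      # skip near-identity rotations
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J                   # zeros A[p, q] and A[q, p]
                V = V @ J                         # accumulate eigenvectors
    return np.sort(np.diag(A)), V

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
eigvals, _ = jacobi_eigen(A)
```

Each rotation re-introduces small off-diagonal entries elsewhere, but the off-diagonal norm decreases monotonically, so a handful of sweeps diagonalizes the matrix to machine precision.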

  14. Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.

    Science.gov (United States)

    Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S

    2017-02-13

    Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.

  15. Disrupting the Dissertation: Linked Data, Enhanced Publication and Algorithmic Culture

    Science.gov (United States)

    Tracy, Frances; Carmichael, Patrick

    2017-01-01

    This article explores how the three aspects of Striphas' notion of algorithmic culture (information, crowds and algorithms) might influence and potentially disrupt established educational practices. We draw on our experience of introducing semantic web and linked data technologies into higher education settings, focussing on extended student…

  16. A New Algorithm for System of Integral Equations

    Directory of Open Access Journals (Sweden)

    Abdujabar Rasulov

    2014-01-01

    We develop a new algorithm to solve systems of integral equations. In this new method there is no need to use matrix weights, which reduces the computational complexity considerably. Using the new algorithm it is also possible to solve an initial boundary value problem for a system of parabolic equations. To verify its efficiency, the results of computational experiments are given.

  17. An Algorithm for Term Conflation Based on Tree Structures.

    Science.gov (United States)

    Diaz, Irene; Morato, Jorge; Llorens, Juan

    2002-01-01

    Presents a new stemming algorithm based on tree structures that improves relevance in information retrieval by conflation, grouping similar words into a single term. Highlights include the normalization process used in automatic thesaurus construction; theoretical aspects; the normalization algorithm; and experiments with English and Spanish. (LRW)

  18. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  19. A Survey of Gaussian Convolution Algorithms

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2013-12-01

    Gaussian convolution is a common operation and building block for algorithms in signal and image processing. Consequently, its efficient computation is important, and many fast approximations have been proposed. In this survey, we discuss approximate Gaussian convolution based on finite impulse response filters, DFT and DCT based convolution, box filters, and several recursive filters. Since boundary handling is sometimes overlooked in the original works, we pay particular attention to develop it here. We perform numerical experiments to compare the speed and quality of the algorithms.
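    One of the surveyed families, iterated box filtering, is compact enough to sketch: repeated convolution with a normalized box tends to a Gaussian by the central limit theorem, and the box radius can be chosen so the total kernel variance, passes * ((2r+1)^2 - 1) / 12, matches sigma^2. The radius formula below follows that variance-matching argument rather than any single surveyed paper:

```python
import numpy as np

def box_filter(signal, radius):
    """Moving average of width 2*radius + 1; edges handled by clamping."""
    width = 2 * radius + 1
    padded = np.pad(signal, radius, mode="edge")
    return np.convolve(padded, np.ones(width) / width, mode="valid")

def approx_gaussian(signal, sigma, passes=3):
    """Approximate Gaussian smoothing by repeated box filtering, choosing the
    integer box radius whose accumulated variance best matches sigma**2."""
    radius = max(1, int(round(np.sqrt(3.0 * sigma**2 / passes + 0.25) - 0.5)))
    out = signal.astype(float)
    for _ in range(passes):
        out = box_filter(out, radius)
    return out

impulse = np.zeros(101)
impulse[50] = 1.0
response = approx_gaussian(impulse, sigma=4.0)   # approximately Gaussian kernel
```

Because variances add under convolution, the impulse response has variance exactly passes * ((2r+1)^2 - 1) / 12, which the integer rounding makes only approximately equal to sigma^2; that approximation error is one of the quality trade-offs the survey measures.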

  20. Speckle imaging algorithms for planetary imaging

    Energy Technology Data Exchange (ETDEWEB)

    Johansson, E. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.

  1. Automatic design of decision-tree algorithms with evolutionary algorithms.

    Science.gov (United States)

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  2. An Effective Hybrid Butterfly Optimization Algorithm with Artificial Bee Colony for Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Sankalap Arora

    2017-08-01

    Full Text Available In this paper, a new hybrid optimization algorithm which combines the standard Butterfly Optimization Algorithm (BOA with Artificial Bee Colony (ABC algorithm is proposed. The proposed algorithm used the advantages of both the algorithms in order to balance the trade-off between exploration and exploitation. Experiments have been conducted on the proposed algorithm using ten benchmark problems having a broad range of dimensions and diverse complexities. The simulation results demonstrate that the convergence speed and accuracy of the proposed algorithm in finding optimal solutions is significantly better than BOA and ABC.

  3. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...

  4. From Story to Algorithm.

    Science.gov (United States)

    Ball, Stanley

    1986-01-01

    Presents a developmental taxonomy which promotes sequencing activities to enhance the potential of matching these activities with learner needs and readiness, suggesting that the order commonly found in the classroom needs to be inverted. The proposed taxonomy (story, skill, and algorithm) involves problem-solving emphasis in the classroom. (JN)

  5. Combinatorial Algorithms I,

    Science.gov (United States)

    1982-05-01

    recurrence as g_n = g_{n-1} + g_{n-2} for n > 2, g_1 = 1, g_0 = 0. For this recurrence, we know the solution (see 1.4.2; g_n is the n-th Fibonacci number F_n) and...blocks which overlap with a new path. The algorithm is now straightforward. Whenever find-path outputs the first path P of a new bridge we determine in

  6. Learning Lightness Algorithms

    Science.gov (United States)

    Hurlbert, Anya C.; Poggio, Tomaso A.

    1989-03-01

    Lightness algorithms, which recover surface reflectance from the image irradiance signal in individual color channels, provide one solution to the computational problem of color constancy. We compare three methods for constructing (or "learning") lightness algorithms from examples in a Mondrian world: optimal linear estimation, backpropagation (BP) on a two-layer network, and optimal polynomial estimation. In each example, the input data (image irradiance) is paired with the desired output (surface reflectance). Optimal linear estimation produces a lightness operator that is approximately equivalent to a center-surround, or bandpass, filter and which resembles a new lightness algorithm recently proposed by Land. This technique is based on the assumption that the operator that transforms input into output is linear, which is true for a certain class of early vision algorithms that may therefore be synthesized in a similar way from examples. Although the backpropagation net performs slightly better on new input data than the estimated linear operator, the optimal polynomial operator of order two performs marginally better than both.
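    The optimal-linear-estimation step can be illustrated outside the Mondrian setting: given paired input/output examples, the least-squares linear operator is obtained with a pseudoinverse. The data below are a synthetic stand-in (a known random operator to recover), not image irradiance:

```python
import numpy as np

def learn_linear_operator(X, Y):
    """Optimal linear estimation: the least-squares operator W minimizing
    ||W X - Y||_F, computed via the Moore-Penrose pseudoinverse."""
    return Y @ np.linalg.pinv(X)

# Synthetic stand-in for the Mondrian examples: paired input/output vectors
# generated by a known operator, which the estimator should recover exactly.
rng = np.random.default_rng(0)
true_W = rng.standard_normal((8, 8))
X = rng.standard_normal((8, 200))   # "image irradiance" examples (columns)
Y = true_W @ X                      # paired "surface reflectance" outputs
W = learn_linear_operator(X, Y)
```

The linearity assumption highlighted in the abstract is what makes this closed-form estimate possible; the backpropagation and polynomial estimators it compares against relax that assumption at the cost of iterative fitting.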

  7. Optimal Quadratic Programming Algorithms

    CERN Document Server

    Dostal, Zdenek

    2009-01-01

    Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers

  8. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Outline of the talk: Introduction; Computing connectivities between all pairs of vertices; All pairs shortest paths/distances; Optimal bipartite matching. ... Efficient Algorithm: The time taken for this computation on any input should be bounded by a small polynomial in the input size. ...

  9. Introduction to Algorithms

    Indian Academy of Sciences (India)

    Introduction to Algorithms – Turtle Graphics. R K Shyamasundar. Resonance – Journal of Science Education, Volume 1, Issue 9, September 1996, pp 14-24. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/09/0014-0024

  10. Adaptive Thinning Algorithm

    NARCIS (Netherlands)

    Theije, P.A.M. de

    2002-01-01

    A new adaptive method is presented to display large amounts of data on, for example, a computer screen. The algorithm reduces a set of N samples to a single value, using the statistics of the background and comparing the true peak value in the set of N samples to the expected peak value of this

  11. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  12. Diagnostic Algorithm for Encephalitis

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2013-12-01

    Full Text Available Diagnostic algorithm for initial evaluation of encephalitis in children is proposed with a consensus statement from the International Encephalitis Consortium, a committee begun in 2010 to serve as a practical aid to clinicians evaluating patients with suspected encephalitis.

  13. Algorithmic information theory

    NARCIS (Netherlands)

    Grünwald, P.D.; Vitányi, P.M.B.; Adriaans, P.; van Benthem, J.

    2008-01-01

    We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are

  14. Algorithmic information theory

    NARCIS (Netherlands)

    Grünwald, P.D.; Vitányi, P.M.B.

    2008-01-01

    We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are

  15. Genetic algorithm eclipse mapping

    OpenAIRE

    Halevin, A. V.

    2008-01-01

    In this paper we analyse the capabilities of the eclipse mapping technique, based on genetic algorithm optimization. To model the accretion disk we used the "fire-flies" conception. This model allows us to reconstruct the distribution of the radiating medium in the disk using fewer free parameters than other methods. Test models show that we can achieve a good approximation without optimizing techniques.

  16. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
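
The operator view of one de Casteljau step described above can be sketched as repeated linear interpolation between consecutive control points:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = [list(p) for p in points]
    while len(pts) > 1:
        # One de Casteljau step: interpolate each consecutive pair of points.
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier with control points (0,0), (1,2), (2,0), evaluated at t=0.5
p = de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5)  # → [1.0, 1.0]
```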

  17. Fast parallel algorithm for slicing STL based on pipeline

    Science.gov (United States)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count also keeps a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A case study shows the outstanding performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, it adopts a pipeline parallel model and achieves a much higher speedup ratio and efficiency.
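
A pipeline parallel mode of the general kind described can be sketched with one thread per stage connected by a bounded queue; the two toy stages below merely stand in for the paper's intersection and contour-building stages:

```python
import queue
import threading

def pipeline(layers, stage1, stage2):
    """Run stage1 and stage2 concurrently, connected by a bounded queue."""
    q = queue.Queue(maxsize=8)  # back-pressure between the two stages
    results = []

    def producer():
        for layer in layers:
            q.put(stage1(layer))
        q.put(None)  # sentinel: no more layers

    def consumer():
        while True:
            item = q.get()
            if item is None:
                break
            results.append(stage2(item))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

# Toy stages standing in for "intersect layer" and "build contour".
out = pipeline(range(5), lambda z: z * 2, lambda s: s + 1)  # → [1, 3, 5, 7, 9]
```

While the consumer processes layer k, the producer is already working on layer k+1, which is where the pipeline speedup comes from.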

  18. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  19. The Slice Algorithm For Irreducible Decomposition of Monomial Ideals

    DEFF Research Database (Denmark)

    Roune, Bjarke Hammersholt

    2009-01-01

    Irreducible decomposition of monomial ideals has an increasing number of applications from biology to pure math. This paper presents the Slice Algorithm for computing irreducible decompositions, Alexander duals and socles of monomial ideals. The paper includes experiments showing good performance...

  20. Study on Control Algorithm for Continuous Segments Trajectory Interpolation

    Institute of Scientific and Technical Information of China (English)

    SHI Chuan; YE Peiqing; LV Qiang

    2006-01-01

    In CNC machining, the complexity of the part contour causes a series of problems, including repeated start-stop of the motor, low machining efficiency, and poor machining quality. To relieve those problems, a new interpolation algorithm was put forward to realize interpolation control of continuous-segment trajectories. The relevant error analysis of the algorithm was also studied. The feasibility of the algorithm was proved by a machining experiment using a laser machine to carve the interpolation trajectory in the CNC system GT100. This algorithm effectively improved the machining efficiency and the contour quality.

  1. A Double Evolutionary Pool Memetic Algorithm for Examination Timetabling Problems

    Directory of Open Access Journals (Sweden)

    Yu Lei

    2014-01-01

    Full Text Available A double evolutionary pool memetic algorithm is proposed to solve the examination timetabling problem. To improve the performance of the proposed algorithm, two evolutionary pools, that is, the main evolutionary pool and the secondary evolutionary pool, are employed. The genetic operators have been specially designed to fit the examination timetabling problem. A simplified version of the simulated annealing strategy is designed to speed the convergence of the algorithm. A clonal mechanism is introduced to preserve population diversity. Extensive experiments carried out on 12 benchmark examination timetabling instances show that the proposed algorithm is able to produce promising results for the uncapacitated examination timetabling problem.
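
A simplified simulated annealing acceptance rule of the kind mentioned can be sketched as follows; this is a generic textbook form, not the authors' exact design:

```python
import math
import random

def accept(delta, temperature):
    """Simplified simulated annealing acceptance for a candidate timetable.

    delta is the change in cost (positive means the candidate is worse).
    """
    if delta <= 0:
        return True   # improving (or equal) moves are always accepted
    if temperature <= 0:
        return False  # frozen: no uphill moves
    # Worsening moves are accepted with Boltzmann probability.
    return random.random() < math.exp(-delta / temperature)
```

Lowering the temperature over iterations speeds convergence while still allowing early escapes from poor timetables.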

  2. A new branch and bound algorithm for minimax ratios problems

    Directory of Open Access Journals (Sweden)

    Zhao Yingfeng

    2017-06-01

    Full Text Available This study presents an efficient branch and bound algorithm for globally solving the minimax fractional programming problem (MFP). By introducing an auxiliary variable, an equivalent problem is first constructed, and a convex relaxation programming problem is then established by utilizing the convexity and concavity of the functions in the problem. Unlike the usual branch and bound algorithm, an adapted partition skill and a practical reduction technique, performed only on a one-dimensional interval, are incorporated into the algorithm scheme to significantly improve computational performance. Global convergence is proved. Finally, some comparative experiments and a randomized numerical test are carried out to demonstrate the efficiency and robustness of the proposed algorithm.
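
Branch and bound over a one-dimensional interval can be sketched generically as below; the Lipschitz-based lower bound and the toy objective are illustrative assumptions, not the paper's convex relaxation:

```python
import heapq

def branch_and_bound(f, lower_bound, lo, hi, tol=1e-3):
    """Minimize f on [lo, hi]: split intervals, prune with a lower bound."""
    best_x, best_val = lo, f(lo)
    heap = [(lower_bound(lo, hi), lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_val - tol:
            continue  # prune: this interval cannot hold a sufficiently better point
        mid = (a + b) / 2.0
        if f(mid) < best_val:
            best_x, best_val = mid, f(mid)
        if b - a > tol:  # branch: split the interval in half
            heapq.heappush(heap, (lower_bound(a, mid), a, mid))
            heapq.heappush(heap, (lower_bound(mid, b), mid, b))
    return best_x, best_val

# Toy objective; 6 is a valid Lipschitz constant for it on [-2, 3].
f = lambda x: (x - 1.0) ** 2
lb = lambda a, b: min(f(a), f(b)) - 6.0 * (b - a)  # Lipschitz lower bound
x_star, v_star = branch_and_bound(f, lb, -2.0, 3.0)
```

The tighter the lower bound, the more intervals get pruned, which is exactly what the paper's relaxation and reduction technique aim to improve.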

  3. A novel optical granulometry algorithm for ore particles

    Directory of Open Access Journals (Sweden)

    Junhao Y.

    2010-01-01

    Full Text Available This paper proposes a novel algorithm to detect the particle size distribution of ores with irregular shapes and dim edges. This optical granulometry algorithm is particularly suitable for blast furnace process control, so its result can be used directly as a reliable basis for control system dynamics optimization. The paper explains the algorithm and its concept, as well as its method, which consists of five steps to detect ore granularity and distribution. A series of comparative experiments under industrial environments proved that this novel algorithm, compared with conventional ones, improves the accuracy of granulometry.

  4. Nonequilibrium molecular dynamics theory, algorithms and applications

    CERN Document Server

    Todd, Billy D

    2017-01-01

    Written by two specialists with over twenty-five years of experience in the field, this valuable text presents a wide range of topics within the growing field of nonequilibrium molecular dynamics (NEMD). It introduces theories which are fundamental to the field - namely, nonequilibrium statistical mechanics and nonequilibrium thermodynamics - and provides state-of-the-art algorithms and advice for designing reliable NEMD code, as well as examining applications for both atomic and molecular fluids. It discusses homogenous and inhomogenous flows and pays considerable attention to highly confined fluids, such as nanofluidics. In addition to statistical mechanics and thermodynamics, the book covers the themes of temperature and thermodynamic fluxes and their computation, the theory and algorithms for homogenous shear and elongational flows, response theory and its applications, heat and mass transport algorithms, applications in molecular rheology, highly confined fluids (nanofluidics), the phenomenon of slip and...

  5. SAW Classification Algorithm for Chinese Text Classification

    Directory of Open Access Journals (Sweden)

    Xiaoli Guo

    2015-02-01

    Full Text Available Considering the explosive growth of data, the increasing amount of text data places higher demands on the performance of text categorization, demands that existing classification methods cannot satisfy. Based on a study of existing text classification technology and semantics, this paper puts forward a Chinese-text-classification-oriented SAW (Structural Auxiliary Word) algorithm. The algorithm exploits the special spatial structure of Chinese text, in which words have implied correlations, for text information mining and high-correlation matching in text categorization. Experiments show that the SAW classification algorithm, on the premise of ensuring classification precision, significantly improves classification precision and recall, clearly improving the performance of information retrieval and providing an effective means of extracting information in the era of big data.

  6. An enhanced dynamic hash TRIE algorithm for lexicon search

    Science.gov (United States)

    Yang, Lai; Xu, Lida; Shi, Zhongzhi

    2012-11-01

    Information retrieval (IR) is essential to enterprise systems along with growing orders, customers and materials. In this article, an enhanced dynamic hash TRIE (eDH-TRIE) algorithm is proposed that can be used in a lexicon search in Chinese, Japanese and Korean (CJK) segmentation and in URL identification. In particular, the eDH-TRIE algorithm is suitable for Unicode retrieval. The Auto-Array algorithm and Hash-Array algorithm are proposed to handle the auxiliary memory allocation; the former changes its size on demand without redundant restructuring, and the latter replaces linked lists with arrays, saving the overhead of memory. Comparative experiments show that the Auto-Array algorithm and Hash-Array algorithm have better spatial performance; they can be used in a multitude of situations. The eDH-TRIE is evaluated for both speed and storage and compared with the naïve DH-TRIE algorithms. The experiments show that the eDH-TRIE algorithm performs better. These algorithms reduce memory overheads and speed up IR.
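
A dictionary-based TRIE for longest-match lexicon lookup in CJK segmentation can be sketched as follows; this uses a plain nested dict, not the eDH-TRIE's hash-array layout:

```python
def build_trie(lexicon):
    """Build a nested-dict trie; '$' marks the end of a word."""
    root = {}
    for word in lexicon:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True
    return root

def longest_match(trie, text, start):
    """Return the longest lexicon word in text beginning at start."""
    node, i, end = trie, start, start
    for ch in text[start:]:
        if ch not in node:
            break
        node = node[ch]
        i += 1
        if "$" in node:
            end = i  # longest complete word seen so far
    # Fall back to a single character when no word matches.
    return text[start:end] if end > start else text[start]

trie = build_trie(["中国", "中国人", "人民"])
word = longest_match(trie, "中国人民", 0)  # → "中国人"
```

Repeating `longest_match` from each new position yields a greedy longest-match segmentation of the input.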

  7. A MEDLINE categorization algorithm

    Science.gov (United States)

    Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit

    2006-01-01

    Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms
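
The inference from MeSH indexing to a ranked list of specialties can be sketched as below; the link table is a made-up miniature for illustration, not the CISMeF data:

```python
from collections import Counter

# Hypothetical semantic links between MeSH terms and metaterms (specialties).
LINKS = {
    "Neoplasms": ["oncology"],
    "Heart Diseases": ["cardiology"],
    "Arrhythmias": ["cardiology"],
}

def categorize(indexed_articles):
    """Rank specialties inferred from each article's MeSH terms."""
    counts = Counter()
    for terms in indexed_articles:
        for term in terms:
            counts.update(LINKS.get(term, []))
    # Specialties in decreasing order of importance for the file.
    return [s for s, _ in counts.most_common()]

ranking = categorize([["Heart Diseases"], ["Arrhythmias", "Neoplasms"]])
# cardiology is inferred twice, oncology once
```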

  8. THE APPROACHING TRAIN DETECTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    S. V. Bibikov

    2015-09-01

    Full Text Available The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train-detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The ultimate expression of the information statistic is adjusted. We present the results of algorithm research and the testing of a train-approach alarm device that implements the proposed algorithm. The algorithm is prepared for upgrading the train-approach alarm device “Signalizator-P”.
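
As a generic illustration of weak-signal detection in noise (not the device's actual information statistic), a sliding-window energy detector looks like this:

```python
def detect(signal, window, threshold):
    """Return the first index where windowed signal energy exceeds threshold."""
    for i in range(len(signal) - window + 1):
        energy = sum(s * s for s in signal[i:i + window]) / window
        if energy > threshold:
            return i  # the approach alarm would trigger here
    return -1  # no train detected

idx = detect([0, 0, 0, 3, 3, 3], window=3, threshold=2.0)  # → 1
```

In practice the threshold is tuned against the measured noise floor of the rail environment.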

  9. Human resource recommendation algorithm based on ensemble learning and Spark

    Science.gov (United States)

    Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie

    2017-08-01

    Aiming at the problem of “information overload” in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in a real business circumstance. Firstly, the algorithm uses two ensemble learning methods, Bagging and Boosting. The outputs from both learning methods are then merged to form a user interest model. Based on the user interest model, job recommendations can be extracted for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments has been done and analysed. The proposed algorithm achieves significant improvement in accuracy, recall rate and coverage, compared with recommendation algorithms such as UserCF and ItemCF.
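
Merging the outputs of the two ensembles into one interest model can be sketched as a weighted blend of per-job scores; the job names, scores, and blend weight below are hypothetical:

```python
def merge_scores(bagging, boosting, alpha=0.5):
    """Blend per-job scores from two ensembles into one user interest model."""
    jobs = set(bagging) | set(boosting)
    return {j: alpha * bagging.get(j, 0.0) + (1 - alpha) * boosting.get(j, 0.0)
            for j in jobs}

def recommend(scores, k=2):
    """Top-k jobs by blended interest score."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

s = merge_scores({"dev": 0.9, "qa": 0.2}, {"dev": 0.6, "ops": 0.8})
# blended: dev 0.75, ops 0.4, qa 0.1
```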

  10. Bayesian network structure learning using chaos hybrid genetic algorithm

    Science.gov (United States)

    Shen, Jiajie; Lin, Feng; Sun, Wei; Chang, KC

    2012-06-01

    A new Bayesian network (BN) learning method using a hybrid algorithm and chaos theory is proposed. The principles of mutation and crossover from the genetic algorithm and a cloud-based adaptive inertia weight were incorporated into the proposed simple particle swarm optimization (sPSO) algorithm to achieve better diversity and improve the convergence speed. By means of the ergodicity and randomicity of the chaos algorithm, the initial network structure population is generated using chaotic mapping with uniform search under structure constraints. When the algorithm converges to a local minimum, a chaotic search is started to skip the local minima and to identify a potentially better network structure. The experiment results show that this algorithm can be effectively used for BN structure learning.
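
Chaotic initialization of a population is commonly done with the logistic map, whose orbit for r = 4 is ergodic on (0, 1); this is a generic sketch of that idea, not the paper's exact mapping or structure encoding:

```python
def chaotic_population(size, dim, x0=0.7, r=4.0):
    """Generate an initial population from one logistic-map trajectory.

    Each value lies in (0, 1) and would be decoded into a structure
    under the problem's constraints.
    """
    pop, x = [], x0
    for _ in range(size):
        individual = []
        for _ in range(dim):
            x = r * x * (1.0 - x)  # logistic map iteration
            individual.append(x)
        pop.append(individual)
    return pop

pop = chaotic_population(size=5, dim=3)
```

Compared with uniform random sampling, the chaotic trajectory spreads points over the search space without repeating, which is what "uniform search" exploits here.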

  11. An Improved Ant Algorithm for Grid Task Scheduling Strategy

    Science.gov (United States)

    Wei, Laizhi; Zhang, Xiaobin; Li, Yun; Li, Yujie

    Task scheduling is an important factor that directly influences the performance and efficiency of the system. Grid resources are usually distributed in different geographic locations, belong to different organizations, and have vastly different properties; to perform task scheduling efficiently and intelligently, the choice of scheduling strategy is essential. This paper proposes an improved ant algorithm for a grid task scheduling strategy by introducing a new type of pheromone and a new node redistribution selection rule. On the one hand, the algorithm can track the performance of resources and tag it. On the other hand, the algorithm handles unsuccessful task-scheduling situations, which improves its robustness and the success probability of task allocation, reduces unnecessary system overhead, and shortens the total time to complete tasks. Data obtained from simulation experiments show that this algorithm resolves the scheduling problem better than the traditional ant algorithm.
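
An evaporate-and-deposit update for a resource-performance pheromone of the kind introduced might look like this sketch; the evaporation rate, deposit factor, and scores are hypothetical:

```python
def update_pheromone(tau, perf, rho=0.1, q=1.0):
    """Evaporate old pheromone and deposit new resource-performance scores.

    tau: current pheromone per resource; perf: observed performance per
    resource in the last scheduling round; rho: evaporation rate.
    """
    resources = set(tau) | set(perf)
    return {r: (1.0 - rho) * tau.get(r, 0.0) + q * perf.get(r, 0.0)
            for r in resources}

tau2 = update_pheromone({"r1": 1.0}, {"r1": 0.5, "r2": 1.0})
```

Resources that keep performing well accumulate pheromone and are favoured by the node selection rule.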

  12. Robust face recognition algorithm for identification of disaster victims

    Science.gov (United States)

    Gevaert, Wouter J. R.; de With, Peter H. N.

    2013-02-01

    We present a robust face recognition algorithm for the identification of occluded, injured and mutilated faces with a limited training set per person. In such cases, conventional face recognition methods fall short due to specific aspects of the classification. The proposed algorithm involves recursive Principal Component Analysis for reconstruction of affected facial parts, followed by a feature extractor based on Gabor wavelets and uniform multi-scale Local Binary Patterns. As a classifier, a Radial Basis Neural Network is employed. In terms of robustness to facial abnormalities, tests show that the proposed algorithm outperforms conventional face recognition algorithms like the Eigenfaces approach, Local Binary Patterns and the Gabor magnitude method. To mimic real-life conditions in which the algorithm would have to operate, specific databases have been constructed, merged with partially existing databases, and jointly compiled. Experiments on these particular databases show that the proposed algorithm achieves recognition rates beyond 95%.

  13. Study on Fast MUSIC Algorithm with Typical Array

    Directory of Open Access Journals (Sweden)

    Zhang Xing-liang

    2012-06-01

    Full Text Available Because the MUSIC (MUltiple SIgnal Classification) algorithm needs a large number of multiplications and trigonometric function evaluations, it is weak in real-time processing. This paper aims at resolving the above problem. Firstly, by analyzing the structural features of the uniform circular array and the uniform linear array, some properties of the steering vector are extracted. Then, the properties of Hermitian matrices are employed to decompose the complex multiplication, and two real vectors are constructed to reduce the number of multiplications. Finally, with the properties of the steering vector, a new algorithm based on a look-up table is proposed. The new algorithm needs no trigonometric function evaluations, nor does it require much memory space. The results of simulation experiments show that the new algorithm speeds up the MUSIC algorithm by more than 50 times while ensuring the same estimation results. Therefore, the new algorithm has wide application prospects.

  14. An improved edge detection algorithm for depth map inpainting

    Science.gov (United States)

    Chen, Weihai; Yue, Haosong; Wang, Jianhua; Wu, Xingming

    2014-04-01

    Three-dimensional (3D) measurement technology has been widely used in many scientific and engineering areas. The emergence of the Kinect sensor makes 3D measurement much easier. However, the depth map captured by the Kinect sensor has some invalid regions, especially at object boundaries. These missing regions should be filled first. This paper proposes a depth-assisted edge detection algorithm and improves an existing depth map inpainting algorithm using the extracted edges. In the proposed algorithm, both the color image and the raw depth data are used to extract initial edges. Then the edges are optimized and utilized to assist depth map inpainting. Comparative experiments demonstrate that the proposed edge detection algorithm can extract object boundaries and inhibit non-boundary edges caused by textures on object surfaces. The proposed depth inpainting algorithm can predict missing depth values successfully and has better performance than the existing algorithm around object boundaries.
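
The idea of filling invalid depth values without propagating across detected edges can be sketched in one dimension; this is a deliberate simplification of edge-guided inpainting, not the paper's 2D method:

```python
def fill_row(depth, edges, invalid=0):
    """Propagate the last valid depth along a row, stopping at edges.

    depth: row of depth values, with `invalid` marking holes;
    edges: 1 where an object boundary was detected, else 0.
    """
    out = list(depth)
    last = None
    for i, d in enumerate(out):
        if edges[i]:
            last = None  # never propagate depth across an object boundary
        if d != invalid:
            last = d
        elif last is not None:
            out[i] = last
    return out

row = fill_row([5, 0, 0, 9, 0], [0, 0, 1, 0, 0])
# → [5, 5, 0, 9, 9]; the hole just past the edge is left unfilled
```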

  15. Flocking algorithm for autonomous flying robots.

    Science.gov (United States)

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
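
The viscous friction-like alignment term can be sketched in its minimal form; the paper's full model also includes communication delay, sensor noise, and inertia, which are omitted here:

```python
def alignment_term(v_i, neighbours, c_frict):
    """Acceleration aligning agent velocity v_i with its neighbours.

    Acts like viscous friction: proportional to the difference between
    the mean neighbour velocity and the agent's own velocity.
    """
    if not neighbours:
        return [0.0] * len(v_i)
    avg = [sum(v[k] for v in neighbours) / len(neighbours)
           for k in range(len(v_i))]
    return [c_frict * (avg[k] - v_i[k]) for k in range(len(v_i))]

a = alignment_term([1.0, 0.0], [[0.0, 1.0], [2.0, 1.0]], c_frict=0.5)
# mean neighbour velocity is (1.0, 1.0) → acceleration (0.0, 0.5)
```

Damping velocity differences this way is what suppresses the oscillations that delay and noise would otherwise amplify.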

  16. Evolutionary Algorithms Performance Comparison For Optimizing Unimodal And Multimodal Test Functions

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Many evolutionary algorithms have been presented in the last few decades. Some of these algorithms were sufficiently tested and used in much research, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA) and the Differential Evolution Algorithm (DEA). Other recently proposed algorithms are less known and rarely used, such as Stochastic Fractal Search (SFS), Symbiotic Organisms Search (SOS) and the Grey Wolf Optimizer (GWO). This paper makes a fair, comprehensive comparison of the performance of these well-known algorithms and of the less prevalent, recently proposed ones, using a variety of famous test functions with different characteristics. Two experiments are applied for each algorithm and test function: the first is carried out with the standard search-space limits of the test functions, while the second multiplies the maximum and minimum limits of the search space by ten. For both experiments, with ten epochs of 100 iterations each per algorithm, we record the Average Mean Absolute Error (AMAE), Overall Algorithm Efficiency (OAE), Algorithm Stability (AS), Overall Algorithm Stability (OAS), the required Average Processing Time (APT) and the Overall successfully optimized test function Processing Time (OPT).

  17. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    Science.gov (United States)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all the available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  18. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  19. Fluid Genetic Algorithm (FGA)

    Directory of Open Access Journals (Sweden)

    Ruholla Jafari-Marandi

    2017-04-01

    Full Text Available Genetic Algorithm (GA) has been one of the most popular methods for many challenging optimization problems when exact approaches are too computationally expensive. A review of the literature shows extensive research attempting to adapt and develop the standard GA. Nevertheless, the essence of GA, which consists of concepts such as chromosomes, individuals, crossover, mutation and others, has rarely been the focus of recent researchers. In this paper's method, Fluid Genetic Algorithm (FGA), some of these concepts are changed, some are removed, and new concepts are introduced. The performance of GA and FGA are compared through seven benchmark functions. FGA not only shows a better success rate and better convergence control, but it can also be applied to a wider range of problems, including multi-objective and multi-level problems. Also, the application of FGA to a real engineering problem, the Quadratic Assignment Problem (QAP), is shown.
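
A minimal real-coded GA of the standard kind that FGA is compared against can be sketched as below (this is the baseline idea, not FGA itself; the operators and rates are illustrative choices):

```python
import random

def simple_ga(fitness, dim, pop_size=30, gens=100, mut=0.1):
    """Minimal elitist real-coded GA for minimizing a benchmark function."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]        # selection: keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            if random.random() < mut:
                i = random.randrange(dim)
                child[i] += random.gauss(0, 0.5)          # Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

random.seed(1)  # reproducible run
sphere = lambda x: sum(v * v for v in x)  # classic unimodal benchmark
best = simple_ga(sphere, dim=3)
```

Benchmark comparisons like the paper's then score such runs over many seeds and functions.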

  20. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  1. DAL Algorithms and Python

    CERN Document Server

    Aydemir, Bahar

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. There is a number of online services required to configure, monitor and control the ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS. DAL (Data Access Library) allows C++, Java and Python clients to access its information in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. Algorithms are available in the C++ programming language and partially reimplemented in the Java programming language. The goal of the projec...

  2. Hydrological Cycle Algorithm for Continuous Optimization Problems

    Directory of Open Access Journals (Sweden)

    Ahmad Wedyan

    2017-01-01

    Full Text Available A new nature-inspired optimization algorithm called the Hydrological Cycle Algorithm (HCA) is proposed based on the continuous movement of water in nature. In the HCA, a collection of water drops passes through various hydrological water cycle stages, such as flow, evaporation, condensation, and precipitation. Each stage plays an important role in generating solutions and avoiding premature convergence. The HCA shares information by direct and indirect communication among the water drops, which improves solution quality. Similarities and differences between the HCA and other water-based algorithms are identified, and the implications of these differences for overall performance are discussed. A new topological representation for problems with a continuous domain is proposed. In proof-of-concept experiments, the HCA is applied to a variety of benchmark continuous numerical functions. The results are competitive with those of a number of other algorithms and validate the effectiveness of the HCA. The ability of the HCA to escape local optima and converge to global solutions is also demonstrated. Thus, the HCA provides an alternative approach to tackling various types of multimodal continuous optimization problems, as well as an overall framework for water-based particle algorithms in general.
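    The flow/evaporation/precipitation loop described above can be illustrated with a toy water-cycle-style optimizer. This is a minimal sketch of the general idea under assumed update rules, not the authors' HCA:

```python
import numpy as np

def toy_water_cycle(f, bounds, n_drops=30, iters=200, evap_frac=0.2, seed=0):
    """Toy water-cycle-style optimizer (illustrative only, not the authors' HCA).

    Drops "flow" toward the current best solution; the worst fraction
    "evaporates" and "precipitates" back at random positions, which helps
    escape local optima.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    drops = rng.uniform(lo, hi, size=(n_drops, 2))
    for _ in range(iters):
        vals = np.array([f(d) for d in drops])
        best = drops[vals.argmin()].copy()
        # flow: move each drop part-way toward the best, with small turbulence
        drops += 0.3 * (best - drops) + rng.normal(0, 0.05, drops.shape)
        drops = np.clip(drops, lo, hi)
        # evaporation/precipitation: re-seed the worst drops at random
        vals = np.array([f(d) for d in drops])
        worst = vals.argsort()[-int(evap_frac * n_drops):]
        drops[worst] = rng.uniform(lo, hi, size=(len(worst), 2))
    vals = np.array([f(d) for d in drops])
    return drops[vals.argmin()], float(vals.min())

# sphere function: global minimum 0 at the origin
x, fx = toy_water_cycle(lambda v: float(np.sum(v ** 2)), bounds=(-5.0, 5.0))
```

    The re-seeding step is what distinguishes this family from plain hill-climbing: it keeps injecting diversity so the population does not collapse prematurely onto one basin.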

  3. KERNEL MAD ALGORITHM FOR RELATIVE RADIOMETRIC NORMALIZATION

    Directory of Open Access Journals (Sweden)

    Y. Bai

    2016-06-01

    Full Text Available The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the differences between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization and describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
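    For reference, the linear (CCA-based) MAD variates that the kernel version generalizes can be sketched with a whitened SVD; the two-date synthetic data below are an assumption for illustration:

```python
import numpy as np

def mad_variates(X, Y, eps=1e-9):
    """Minimal linear MAD sketch: canonical variates via whitened SVD;
    the MAD components are the differences of paired variates.
    X, Y: (n_pixels, n_bands) arrays from the two acquisition dates."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / (n - 1)
    Syy = Yc.T @ Yc / (n - 1)
    Sxy = Xc.T @ Yc / (n - 1)

    def inv_sqrt(S):
        # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, rho, Vt = np.linalg.svd(Wx @ Sxy @ Wy)   # rho = canonical correlations
    A = Wx @ U          # projection vectors for X
    B = Wy @ Vt.T       # projection vectors for Y
    return Xc @ A - Yc @ B, rho

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
Y = 2.0 * X + 0.1 * rng.normal(size=(500, 3))   # "no change" scene with gain differences
mad, rho = mad_variates(X, Y)
```

    With no real change between the dates, the canonical correlations come out near 1 and the MAD components are dominated by noise, which is exactly the regime used for normalization.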

  4. General lossless planar coupler design algorithms.

    Science.gov (United States)

    Vance, Rod

    2015-08-01

    This paper reviews and extends two classes of algorithms for the design of planar couplers with any unitary transfer matrix as the design goal. Such couplers find use in optical sensing for fading-free interferometry, coherent optical network demodulation, and also for quantum state preparation in quantum optical experiments and technology. The two classes are (1) "atomic coupler algorithms" decomposing a unitary transfer matrix into a planar network of 2×2 couplers, and (2) "Lie theoretic algorithms" concatenating unit cell devices with variable phase delay sets that form canonical coordinates for neighborhoods in the Lie group U(N), so that the concatenations realize any transfer matrix in U(N). As well as review, this paper gives (1) a Lie theoretic existence proof showing that both classes of algorithms work and (2) direct proofs of the efficacy of the "atomic coupler" algorithms. The Lie theoretic proof strengthens former results. 5×5 couplers designed by both methods are compared by Monte Carlo analysis, which suggests that atomic rather than Lie theoretic methods yield designs more resilient to manufacturing imperfections.

  5. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.

  6. Combinatory CPU Scheduling Algorithm

    OpenAIRE

    Saeeda Bibi; Farooque Azam; Yasir Chaudhry

    2010-01-01

    The Central Processing Unit (CPU) plays a significant role in the computer system by transferring its control among different processes. As the CPU is a central component, it must be used efficiently. The operating system performs an essential task known as CPU scheduling for efficient utilization of the CPU. CPU scheduling has a strong effect on resource utilization as well as the overall performance of the system. In this paper, a new CPU scheduling algorithm called Combinatory is proposed that combine...

  7. KAM Tori Construction Algorithms

    Science.gov (United States)

    Wiesel, W.

    In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
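    The paper's preference for a Hanning-windowed finite Fourier transform can be illustrated with a toy leakage comparison; the signal and bin offsets below are assumptions for illustration:

```python
import numpy as np

# A Hanning window concentrates spectral leakage, which is the property
# exploited when extracting basis frequencies from an integrated
# trajectory (illustrative toy, not the paper's accelerated transform).
n = 1024
t = np.arange(n)
f_true = 50.25 / n          # deliberately off-bin frequency
x = np.sin(2 * np.pi * f_true * t)

raw = np.abs(np.fft.rfft(x))                 # rectangular window
win = np.abs(np.fft.rfft(x * np.hanning(n))) # Hanning window

# Both spectra peak near bin 50, but the windowed spectrum decays much
# faster away from the peak (far less leakage into distant bins).
peak = int(np.argmax(win))
leak_raw = float(raw[peak + 20] / raw[peak])
leak_win = float(win[peak + 20] / win[peak])
```

    The rectangular window's sidelobes fall off only as 1/Δ, while the Hanning window's fall off roughly as 1/Δ³, which is why the windowed transform recovers weak torus frequencies more accurately.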

  8. A new genetic algorithm

    OpenAIRE

    Cerf, Raphaël

    1996-01-01

    Here is a new genetic algorithm. It is built by randomly perturbing a two operator crossover-selection scheme. Three conditions of biological relevance are imposed on the crossover. A new selection mechanism is used, which has the decisive advantage of preserving the diversity of the individuals in the population. The attractors of the unperturbed process are particular equifitness subsets of populations endowed with a rich structure. The random vanishing perturbations are t...

  9. Parallel Genetic Algorithm System

    OpenAIRE

    Nagaraju Sangepu; Vikram, K.

    2010-01-01

    The Genetic Algorithm (GA) is a popular technique for finding an optimal transformation because of its simple implementation procedure. In image processing, GAs are used as a parameter-search procedure, and this processing requires very high computer performance. Recently, parallel processing has been used to reduce the time by distributing an appropriate amount of work to each computer in the clustering system. The processing time decreases with the number of dedicated computers. Parallel implement...

  10. An efficient algorithm for function optimization: modified stem cells algorithm

    Science.gov (United States)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  11. An Algorithmic Diversity Diet?

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik

    2016-01-01

    With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity diet system, however, triggers not only the classic discussion of the reach-distinctiveness balance for PSM, but also shows that 'diversity' is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.

  12. NEUTRON ALGORITHM VERIFICATION TESTING

    Energy Technology Data Exchange (ETDEWEB)

    COWGILL,M.; MOSBY,W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-07-19

    Active well coincidence counter assays have been performed on uranium metal highly enriched in {sup 235}U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the {sup 235}U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the {sup 235}U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.
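    The practical payoff of a linear calibration can be sketched with a least-squares fit; the mass and corrected-reals numbers below are hypothetical, not data from the report:

```python
import numpy as np

# Illustrative sketch (made-up numbers): with a totals-corrected response,
# a straight-line calibration R = a*m + b can be established from few points
# and inverted to assay an unknown item.
mass = np.array([250.0, 500.0, 1000.0, 2000.0])   # g 235U, hypothetical
reals = np.array([12.1, 24.0, 48.3, 96.2])        # corrected reals rate, hypothetical

slope, intercept = np.polyfit(mass, reals, 1)     # linear least-squares fit
pred = slope * mass + intercept
residual = float(np.abs(reals - pred).max())

def assay_mass(r):
    """Invert the calibration curve for an unknown item."""
    return (r - intercept) / slope
```

    A second-order polynomial would need at least one more calibration point and gives a less certain extrapolation, which is the advantage the abstract attributes to the totals-corrected linear form.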

  13. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  14. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  15. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

    This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to both specialists in Bioinspired Algorithms, Parallel and Distributed Computing, as well as computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  16. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  17. Multisensor estimation: New distributed algorithms

    Directory of Open Access Journals (Sweden)

    Plataniotis K. N.

    1997-01-01

    Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.

  18. Parallel Implementation of the Katsevich's FBP Algorithm

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available For spiral cone-beam CT, parallel computing is an effective approach to resolving the problem of heavy computation burden. It is well known that the major computation time is spent in the backprojection step for either filtered-backprojection (FBP) or backprojected-filtration (BPF) algorithms. By the cone-beam cover method [1], the backprojection procedure is driven by cone-beam projections, and every cone-beam projection can be backprojected independently. Based on this fact, we develop a parallel implementation of Katsevich's FBP algorithm. We performed all the numerical experiments on a Linux cluster. In one typical experiment, the sequential reconstruction time is 781.3 seconds, while the parallel reconstruction time is 25.7 seconds with 32 processors, a speedup of roughly 30.
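    The parallelization idea, independent backprojection of each projection followed by summation of the partial volumes, can be sketched as follows. This is a toy 2-D stand-in with threads, not the Katsevich backprojector; real speedups of the kind reported require distributing work across processes or cluster nodes:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject_one(volume_shape, angle, values):
    """Toy stand-in for backprojecting one projection: smear a 1-D profile
    across the volume with an angle-dependent weight."""
    vol = np.zeros(volume_shape)
    vol += np.outer(values, np.ones(volume_shape[1])) * np.cos(angle) ** 2
    return vol

shape = (64, 64)
angles = np.linspace(0, np.pi, 32, endpoint=False)
profiles = [np.ones(shape[0]) for _ in angles]

# sequential reference: accumulate one projection at a time
sequential = sum(backproject_one(shape, a, p) for a, p in zip(angles, profiles))

# parallel version: each projection is backprojected independently,
# then the partial volumes are summed
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = pool.map(lambda ap: backproject_one(shape, *ap), zip(angles, profiles))
    parallel = sum(parts)
```

    Because backprojection is a pure sum over projections, the parallel result is bit-for-bit reducible to the sequential one regardless of how the projections are partitioned across workers.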

  19. A Novel Adaptive Algorithm Addresses Potential Problems of Blind Algorithm

    Directory of Open Access Journals (Sweden)

    Muhammad Yasin

    2016-01-01

    Full Text Available A hybrid algorithm called the constant modulus least mean square (CMLMS) algorithm is proposed in order to address potential convergence problems with the constant modulus algorithm (CMA). It is a two-stage adaptive filtering algorithm based on the least mean square (LMS) algorithm followed by the CMA. The hybrid algorithm is developed theoretically and verified through MATLAB software. The theoretical model is verified through simulation, and its performance is evaluated in a smart antenna in the presence of a cochannel interfering signal and additive white Gaussian noise (AWGN) of zero mean. It is also tested in a Rayleigh fading channel using a digital modulation technique for Bit Error Rate (BER). Finally, a few computer simulations are presented in order to substantiate the theoretical findings with respect to the proposed model. Corresponding results obtained with the use of only the CMA and LMS algorithms are also presented for further comparison.
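    The two-stage structure, LMS adaptation on known training symbols followed by blind CMA updates, can be sketched as follows. This is a simplified baseband equalizer with an assumed one-tap ISI channel, not the paper's smart-antenna setup:

```python
import numpy as np

rng = np.random.default_rng(7)
n, taps = 4000, 5
# QPSK symbols have constant modulus 1, which is what the CMA exploits
sym = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)
# assumed channel: mild intersymbol interference plus noise
rx = sym + 0.3 * np.roll(sym, 1) + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n))

w = np.zeros(taps, complex)
w[0] = 1.0                                   # start from a pass-through filter
mu_lms, mu_cma = 0.01, 0.001
y_hat = rx.copy()
for k in range(taps - 1, n):
    x = rx[k - taps + 1:k + 1][::-1]         # newest sample first
    y = np.vdot(w, x)                        # filter output, y = w^H x
    y_hat[k] = y
    if k < n // 2:
        e = sym[k] - y                       # stage 1: LMS with known training symbols
        w += mu_lms * np.conj(e) * x
    else:
        g = (np.abs(y) ** 2 - 1.0) * y       # stage 2: blind CMA, target modulus 1
        w -= mu_cma * np.conj(g) * x

err0 = float(np.mean(np.abs(rx - sym) ** 2))           # unequalized error
err1 = float(np.mean(np.abs(y_hat[-500:] - sym[-500:]) ** 2))  # after both stages
```

    Seeding the CMA with the LMS solution is the point of the hybrid: CMA alone can converge to undesirable minima, whereas starting near the trained solution keeps the blind stage well-behaved.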

  20. Methodology and algorithms for railway crew management

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, R.; Gomide, F. [State Univ. of Campinas, SP (Brazil). Faculty of Electrical and Computer Engineering; Lagrimante, R. [MRS Logistica S.A, Juiz de Fora, MG (Brazil)

    2000-07-01

    Crew management problems are highly important for many transportation systems such as airlines, railways and public bus transportation. Despite recent advances, scheduling methodologies and decision support systems still need improvement, especially in their computational efficiency, practical feasibility and use. This paper briefly overviews classic crew management approaches, discusses various practical issues concerning classic methods, and suggests a new approach and algorithms. Computational results and experience with actual data and real-world situations are also reported. (orig.)

  1. Opposite Degree Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Guang Yue

    2015-12-01

    Full Text Available The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from neural network design, genetic algorithms, and clustering analysis. The OD algorithm is divided into two sub-algorithms, namely the opposite degree numerical computation (OD-NC) algorithm and the opposite degree classification computation (OD-CC) algorithm.

  2. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    Science.gov (United States)

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, the hybrid genetic-random forests algorithm, hybrid particle swarm-random forests algorithm and hybrid fish swarm-random forests algorithm, can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced by this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, the hybrid algorithm provides a new way to perform feature selection and parameter optimization.
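    For context, the plain SMOTE interpolation step that CURE-SMOTE builds on can be sketched as follows (illustrative only; the CURE clustering stage that removes noise and outliers before oversampling is omitted):

```python
import numpy as np

def smote(minority, n_new, k=5, seed=0):
    """Plain SMOTE sketch (not the paper's CURE-SMOTE variant): each synthetic
    sample is interpolated between a minority point and one of its k nearest
    minority-class neighbors."""
    rng = np.random.default_rng(seed)
    m = len(minority)
    # pairwise squared distances within the minority class
    d2 = ((minority[:, None, :] - minority[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]        # skip self at position 0
    out = np.empty((n_new, minority.shape[1]))
    for i in range(n_new):
        a = rng.integers(m)                        # random minority point
        b = nn[a, rng.integers(k)]                 # one of its neighbors
        lam = rng.random()                         # interpolation factor in [0, 1)
        out[i] = minority[a] + lam * (minority[b] - minority[a])
    return out

rng = np.random.default_rng(42)
minority = rng.normal(size=(20, 2))
synth = smote(minority, n_new=40)
```

    Because every synthetic point lies on a segment between two real minority points, the oversampled set stays inside the minority region; CURE-SMOTE's contribution is to choose those seed points from cluster representatives so that noise and outliers are not amplified.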

  3. Improved underwater image enhancement algorithms based on partial differential equations (PDEs)

    OpenAIRE

    Nnolim, U. A.

    2017-01-01

    The experimental results of improved underwater image enhancement algorithms based on partial differential equations (PDEs) are presented in this report. This second work extends the study of previous work and incorporating several improvements into the revised algorithm. Experiments show the evidence of the improvements when compared to previously proposed approaches and other conventional algorithms found in the literature.

  4. Algorithms for Global Positioning

    DEFF Research Database (Denmark)

    Borre, Kai; Strang, Gilbert

    This book replaces the authors' previous work, Linear Algebra, Geodesy, and GPS (1997). An initial discussion of the basic concepts, characteristics and technical aspects of different satellite systems is followed by the necessary mathematical content, which is presented in a detailed and self-contained fashion. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers and academic researchers who wish to master the theory and practical application of GPS technology.

  5. Experimental Analysis of Algorithms.

    Science.gov (United States)

    1987-12-01

    ... the algorithm implemented as a C macro. The 55-element array Rand was initialized by 55 calls to the BSD Unix 4.1 system random number generator, a linear congruential generator producing integers in the range (0, 2^31 - 1].

        #define Maxrand (1 << 30)
        int Rand[55];
        int K, J;
        #define RAND(X) X Rand [K

  6. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    an iterative method. The parameter choice method is also used to demonstrate the implementation of the standard-form transformation. We have implemented a simple preconditioner aimed at the preconditioning of the general-form Tikhonov problem and demonstrate its simplicity and efficiency. The steps taken are used to set up the ill-posed problems in the toolbox; hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...

  7. An elegant Lambert algorithm

    Science.gov (United States)

    Battin, R. H.; Vaughan, R. M.

    1983-10-01

    A fundamental problem in astrodynamics is concerned with the determination of an orbit, having a specified flight time and connecting two position vectors. The present investigation is concerned with a new method for solving this problem, which is frequently referred to as Lambert's problem. This method parallels closely Gauss' classical method but is superior to it. The new algorithm converges rapidly for any given geometry and time of flight. Since there is no need to be concerned with specific starting values for different input parameters, this method appears to be a very attractive alternative to Newton-Raphson schemes for most space guidance applications.

  8. The No-Prop algorithm: a new learning algorithm for multilayer neural networks.

    Science.gov (United States)

    Widrow, Bernard; Greenblatt, Aaron; Kim, Youngsik; Park, Dookun

    2013-01-01

    A new learning algorithm for multilayer neural networks that we have named No-Propagation (No-Prop) is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introducing nonlinearity with the hidden layers is examined from the point of view of Least Mean Square Error Capacity (LMS Capacity), which is defined as the maximum number of distinct patterns that can be trained into the network with zero error. This is shown to be equal to the number of weights of each of the output-layer neurons. The No-Prop algorithm and the Back-Prop algorithm are compared. Our experience with No-Prop is limited, but from the several examples presented here, it seems that the performance regarding training and generalization of both algorithms is essentially the same when the number of training patterns is less than or equal to LMS Capacity. When the number of training patterns exceeds Capacity, Back-Prop is generally the better performer. But equivalent performance can be obtained with No-Prop by increasing the network Capacity by increasing the number of neurons in the hidden layer that drives the output layer. The No-Prop algorithm is much simpler and easier to implement than Back-Prop. Also, it converges much faster. It is too early to definitively say where to use one or the other of these algorithms. This is still a work in progress. Copyright © 2012 Elsevier Ltd. All rights reserved.
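    The core of No-Prop, fixed random hidden weights with only the output layer trained by LMS steepest descent, can be sketched as follows (a toy regression example; the sizes, activation, and step size are assumptions):

```python
import numpy as np

# Sketch of the No-Prop idea: the hidden layer is set once with random
# weights and never trained; only the output layer adapts, via the LMS rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # toy regression target

W_hidden = rng.normal(size=(4, 50))          # fixed random hidden weights
H = np.tanh(X @ W_hidden)                    # nonlinear hidden-layer outputs

w_out = np.zeros(50)                         # trainable output weights
mu = 0.01                                    # LMS step size
mse0 = float(np.mean((H @ w_out - y) ** 2))  # error before training
for _ in range(200):                         # LMS passes over the data
    for h, t in zip(H, y):
        e = t - h @ w_out                    # instantaneous error
        w_out += mu * e * h                  # Widrow-Hoff LMS update
mse1 = float(np.mean((H @ w_out - y) ** 2))  # error after training
```

    Since only the output layer is adapted and its error surface is quadratic, there is no backpropagation through the hidden layer and no risk of hidden-layer local minima, which is the source of the speed and simplicity claimed in the abstract.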

  9. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks †

    Science.gov (United States)

    Imran, Muhammad; Zafar, Nazir Ahmad

    2012-01-01

    Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR to a corresponding formal specification using Z notation. The formal specification is analyzed and validated using the Z/EVES tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.

  10. An Approximation Algorithm for the Facility Location Problem with Lexicographic Minimax Objective

    Directory of Open Access Journals (Sweden)

    Ľuboš Buzna

    2014-01-01

    Full Text Available We present a new approximation algorithm for the discrete facility location problem, providing solutions that are close to the lexicographic minimax optimum. The lexicographic minimax optimum is a concept that allows an equitable location of facilities serving a large number of customers to be found. The algorithm is independent of general-purpose solvers and instead uses algorithms originally designed to solve the p-median problem. By numerical experiments, we demonstrate that our algorithm allows the size of solvable problems to be increased and provides high-quality solutions. The algorithm found an optimal solution for all tested instances where we could compare the results with the exact algorithm.

  11. Direction of Radio Finding via MUSIC (Multiple Signal Classification) Algorithm for Hardware Design System

    Science.gov (United States)

    Zhang, Zheng

    2017-10-01

    Radio direction finding systems are based on digital signal processing algorithms, which make them capable of locating and tracking signals. The performance of radio direction finding depends significantly on the effectiveness of these algorithms. Direction of Arrival (DOA) algorithms estimate the number of plane waves incident on the antenna array and their angles of incidence. This manuscript investigates the implementation of the MUSIC DOA algorithm on a uniform linear array in the presence of white noise. The experimental results show that the MUSIC algorithm performs well for radio direction finding.
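    A minimal MUSIC pseudospectrum on a uniform linear array can be sketched as follows; the array size, source angles, and noise level are assumptions for illustration:

```python
import numpy as np

# Minimal MUSIC sketch: uniform linear array with half-wavelength spacing,
# two uncorrelated sources in white noise.
rng = np.random.default_rng(3)
M, N = 8, 400                                   # sensors, snapshots
true_deg = np.array([-20.0, 30.0])              # assumed source directions

def steering(deg):
    """Steering vectors for a half-wavelength-spaced ULA."""
    k = np.pi * np.sin(np.deg2rad(deg))
    return np.exp(1j * np.outer(np.arange(M), k))

A = steering(true_deg)                          # M x 2 array manifold
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)
X = A @ S + noise                               # received snapshots

R = X @ X.conj().T / N                          # sample covariance
w, V = np.linalg.eigh(R)                        # eigenvalues ascending
En = V[:, :M - 2]                               # noise subspace (M - #sources)

grid = np.arange(-90.0, 90.0, 0.5)
Ag = steering(grid)
# MUSIC pseudospectrum: peaks where steering vectors are orthogonal
# to the noise subspace
p = 1.0 / np.sum(np.abs(En.conj().T @ Ag) ** 2, axis=0)
peaks = grid[np.argsort(p)[-2:]]                # two largest values as DOA estimates
```

    The sharp nulls of the noise-subspace projection, rather than beam width, set the angular resolution, which is why MUSIC resolves sources closer together than a conventional beamformer on the same array.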

  12. Research on Algorithm of Indoor Positioning System Based on Low Energy Bluetooth 4.0

    Directory of Open Access Journals (Sweden)

    Zhang De-Yi

    2017-01-01

    Full Text Available This paper analyzes and compares several well-known algorithms for indoor positioning. Through multi-node hop tests, the relation between RSSI and distance and the relation between LQI and packet error rate are obtained. Based on these relations, an LQI confidence coefficient is classified. By comparing these different algorithms, this paper refines a fuzzy-fingerprint indoor positioning algorithm that shortens the time required to process off-line data. Finally, the algorithm was verified by indoor and outdoor experiments, which show that it greatly lowers RSSI error and achieves better positioning accuracy.
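    The RSSI-distance relation mentioned above is commonly modeled with a log-distance path-loss law; the reference power and path-loss exponent below are assumptions, not the paper's fitted values:

```python
import numpy as np

# Log-distance path-loss sketch relating BLE RSSI to distance.
# tx_power_dbm is the RSSI measured at 1 m and n is the path-loss
# exponent; both are assumed values that would be fitted per environment.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Distance in meters implied by an RSSI reading."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def distance_to_rssi(d_m, tx_power_dbm=-59.0, n=2.0):
    """Expected RSSI at a given distance (inverse of the model)."""
    return tx_power_dbm - 10.0 * n * np.log10(d_m)
```

    In practice the exponent n varies with walls and multipath (roughly 2 in free space, higher indoors), which is one reason fingerprint methods like the one in the paper outperform pure path-loss ranging.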

  13. Contour Error Map Algorithm

    Science.gov (United States)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  14. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal; high-level goals can be added, removed, or updated at any time, and the "best" goals are selected for execution. The software addresses the re-planning that must be performed in a short time frame by an embedded system whose computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests, without temporal flexibility, that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.
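The strict-priority selection described above can be sketched as a single greedy pass over goals ordered by priority; the goal/resource representation below is an assumption for illustration, not the VML data model:

```python
def select_goals(goals, capacity):
    """Greedy strict-priority selection from an oversubscribed goal set.

    goals: list of dicts {"name", "priority" (lower = more important),
    "needs": {resource: amount}}. A goal is accepted only if its needs
    still fit; a lower-priority goal can never displace a higher one."""
    chosen, remaining = [], dict(capacity)
    for g in sorted(goals, key=lambda g: g["priority"]):
        if all(remaining.get(r, 0) >= v for r, v in g["needs"].items()):
            for r, v in g["needs"].items():
                remaining[r] -= v
            chosen.append(g["name"])
    return chosen
```

Because the pass is cheap, it can be rerun whenever a goal is added, removed, or updated, which is the "just-in-time" behaviour the abstract describes.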

  15. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics, based on the failure criterion of Puck, to model the degradation caused by failure events at the ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take load sequence effects into account. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  16. Algorithmic Relative Complexity

    Directory of Open Access Journals (Sweden)

    Daniele Cerra

    2011-04-01

    Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
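The compression-based approximation can be illustrated with any general-purpose compressor. The sketch below uses zlib as a stand-in; it captures the idea of "extra bits needed for x once y is known", though the paper's exact approximation may differ:

```python
import zlib

def c(s: bytes) -> int:
    """Compressed size as a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def relative_complexity(x: bytes, y: bytes) -> float:
    """Extra compressed bits needed to describe x once y is known,
    normalised by C(x): near 0 when y describes x well, near 1 when
    y is unrelated to x."""
    return (c(y + x) - c(y)) / max(c(x), 1)
```

As with the normalized compression distance, the quality of the estimate is bounded by how well the compressor models the strings' regularities.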

  17. Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices

    Science.gov (United States)

    Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly

    2010-01-01

    In this paper the authors present a heuristic algorithm for precise schedule fulfilment in city traffic conditions, taking into account traffic lights. The algorithm is proposed for a programmable logic controller (PLC), to be installed in an electric vehicle to control its motion speed based on the signals of traffic lights. The algorithm is tested using a real controller connected to virtual devices and functional models of real tram devices. Results of the experiments show high precision of public transport schedule fulfilment using the proposed algorithm.

  18. Research on Routing Algorithm Based on Limitation Arrangement Principle in Mathematics

    Directory of Open Access Journals (Sweden)

    Jianhui Lv

    2014-01-01

    Full Text Available Since research on the information consistency of the whole network under the OSPF protocol has been insufficient in recent years, an algorithm for routing decisions based on the limitation arrangement principle (LAP), a permutation-and-combination problem in mathematics, is proposed. The most fundamental function of this algorithm is to accomplish information consistency of the whole network at a relatively fast speed. Firstly, the limitation arrangement principle algorithm is proposed and proved. Secondly, the LAP routing algorithm for a single-link network and the LAP routing algorithm for a single-link network with multiloops are designed. Finally, simulation experiments are carried out with VC6.0 and NS2, which prove that the LAPSN and LAPSNM algorithms can solve the problem of information consistency of the whole network under the OSPF protocol and that the LAPSNM algorithm is superior to Dijkstra's algorithm.

  19. Applications of algorithmic differentiation to phase retrieval algorithms.

    Science.gov (United States)

    Jurling, Alden S; Fienup, James R

    2014-07-01

    In this paper, we generalize the techniques of reverse-mode algorithmic differentiation to include elementary operations on multidimensional arrays of complex numbers. We explore the application of algorithmic differentiation to phase retrieval error metrics and show that reverse-mode algorithmic differentiation provides a framework for straightforward calculation of gradients of complicated error metrics, without resorting to finite differences or laborious symbolic differentiation.
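The convenience of reverse-mode differentiation over complex arrays is that simple real-valued metrics have compact analytic gradients that the backward sweep chains together. A toy example (not one of the paper's metrics) is the squared error over complex samples, checked here against a finite difference:

```python
def metric_and_grad(z, data):
    """Squared-error metric E = sum |z_i - d_i|^2 over complex samples,
    together with the gradient 2*(z_i - d_i) that a reverse-mode sweep
    would propagate back through preceding operations."""
    E = sum(abs(zi - di) ** 2 for zi, di in zip(z, data))
    grad = [2.0 * (zi - di) for zi, di in zip(z, data)]
    return E, grad
```

The finite-difference check below is exactly the laborious alternative the paper argues against: one metric evaluation per parameter, versus one backward sweep for all of them.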

  20. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    Science.gov (United States)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO) is a NASA spacecraft designed to study the Sun. It was launched on February 11, 2010 into a geosynchronous orbit, and uses a suite of attitude sensors and actuators to finely point the spacecraft at the Sun. SDO has three science instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). SDO uses two High Gain Antennas (HGAs) to send science data to a dedicated ground station in White Sands, New Mexico. In order to meet the science data capture budget, the HGAs must be able to transmit data to the ground for a very large percentage of the time. Each HGA is a dual-axis antenna driven by stepper motors. Both antennas transmit data at all times, but only a single antenna is required in order to meet the transmission rate requirement. For portions of the year, one antenna or the other has an unobstructed view of the White Sands ground station. During other periods, however, the view from both antennas to the Earth is blocked for different portions of the day. During these times of blockage, the two HGAs take turns pointing to White Sands, with the other antenna pointing out to space. The HGAs hand over White Sands transmission responsibilities to the unblocked antenna. There are two handover seasons per year, each lasting about 72 days, where the antennas hand off control every twelve hours. The non-tracking antenna slews back to the ground station by following a ground-commanded trajectory and arrives approximately 5 minutes before the formerly tracking antenna slews away to point out into space. The SDO Attitude Control System (ACS) runs at 5 Hz, and the HGA Gimbal Control Electronics (GCE) run at 200 Hz. There are 40 opportunities for the gimbals to step each ACS cycle, with a hardware limitation of no more than one step every three GCE cycles.
The ACS calculates the desired gimbal motion for tracking the ground station or for slewing

  1. a Distributed Polygon Retrieval Algorithm Using Mapreduce

    Science.gov (United States)

    Guo, Q.; Palanisamy, B.; Karimi, H. A.

    2015-07-01

    The burst of large-scale spatial terrain data due to the proliferation of data acquisition devices like 3D laser scanners poses challenges to spatial data analysis and computation. Among many spatial analyses and computations, polygon retrieval is a fundamental operation which is often performed under real-time constraints. However, existing sequential algorithms fail to meet this demand for larger sizes of terrain data. Motivated by the MapReduce programming model, a well-adopted large-scale parallel data processing technique, we present a MapReduce-based polygon retrieval algorithm designed with the objective of reducing the IO and CPU loads of spatial data processing. By indexing the data based on a quad-tree approach, a significant amount of unneeded data is filtered in the filtering stage and it reduces the IO overhead. The indexed data also facilitates querying the relationship between the terrain data and query area in shorter time. The results of the experiments performed in our Hadoop cluster demonstrate that our algorithm performs significantly better than the existing distributed algorithms.
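The quad-tree filtering stage described above can be sketched with a simple bounding-box test; the node layout (dict with "bbox", "polygons", "children") is an assumption for illustration:

```python
def bbox_intersects(a, b):
    """Axis-aligned bounding-box overlap test for (x0, y0, x1, y1) boxes."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def quadtree_filter(node, query):
    """Collect polygon ids whose bounding boxes intersect the query box,
    pruning entire subtrees whose extent misses it. This is the cheap
    filtering stage that cuts IO before any exact geometric test."""
    if not bbox_intersects(node["bbox"], query):
        return []
    hits = [pid for pid, box in node.get("polygons", [])
            if bbox_intersects(box, query)]
    for child in node.get("children", []):
        hits.extend(quadtree_filter(child, query))
    return hits
```

In the MapReduce setting, each mapper would run this filter over its partition of the index, and candidate polygons surviving the filter go to an exact intersection test in the reduce stage.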

  2. An investigation of messy genetic algorithms

    Science.gov (United States)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems. A new approach was therefore developed. Results on a 30-bit, order-three deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.

  3. A graph spectrum based geometric biclustering algorithm.

    Science.gov (United States)

    Wang, Doris Z; Yan, Hong

    2013-01-21

    Biclustering is capable of performing simultaneous clustering on two dimensions of a data matrix and has many applications in pattern classification. For example, in microarray experiments, a subset of genes is co-expressed in a subset of conditions, and biclustering algorithms can be used to detect these coherent patterns in the data for further functional analysis. In this paper, we present a graph spectrum based geometric biclustering (GSGBC) algorithm. In the geometrical view, biclusters can be seen as different linear geometrical patterns in high-dimensional spaces. Based on this, the modified Hough transform is used to find the Hough vector (HV) corresponding to sub-bicluster patterns in 2D spaces. A graph can be built regarding each HV as a node. The graph spectrum is utilized to identify the eigengroups in which the sub-biclusters are grouped naturally to produce larger biclusters. Through a comparative study, we find that GSGBC achieves as good a result as GBC and outperforms other kinds of biclustering algorithms. Also, compared with the original geometrical biclustering algorithm, it reduces the computing time complexity significantly. We also show that biologically meaningful biclusters can be identified by our method from real microarray gene expression data.

  4. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  5. Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm.

    Science.gov (United States)

    Wu, Haizhou; Zhou, Yongquan; Luo, Qifang; Basset, Mohamed Abdel

    2016-01-01

    Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of this method, eight different datasets selected from the UCI machine learning repository are employed for experiments, and the results are compared against seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS has better accuracy than those trained by most of the compared algorithms.

  6. Watermarking Algorithms for 3D NURBS Graphic Data

    Directory of Open Access Journals (Sweden)

    Jae Jun Lee

    2004-10-01

    Full Text Available Two watermarking algorithms for 3D nonuniform rational B-spline (NURBS) graphic data are proposed: one appropriate for steganography, and the other for watermarking. Instead of directly embedding data into the parameters of the NURBS model, the proposed algorithms embed data into 2D virtual images extracted by parameter sampling of the 3D model. As a result, the proposed steganography algorithm can embed information into more places on the surface than the conventional algorithm, while preserving the data size of the model. Also, any existing 2D watermarking technique can be used for the watermarking of 3D NURBS surfaces. From the experiments, it is found that the watermarking algorithm is robust to attacks on weights, control points, and knots. It is also found to be robust to the remodeling of NURBS models.

  7. Algorithms for Finding Small Attractors in Boolean Networks

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2007-01-01

    Full Text Available A Boolean network is a model used to study the interactions between different genes in genetic regulatory networks. In this paper, we present several algorithms that use gene ordering and feedback vertex sets to identify singleton attractors and small attractors in Boolean networks. We analyze the average-case time complexities of some of the proposed algorithms. For instance, it is shown that the outdegree-based ordering algorithm for finding singleton attractors works, on average, in time exponential in the number of genes but with a considerably smaller base than the naive exhaustive algorithm, the complexity depending on the number of genes and the maximum indegree. We performed extensive computational experiments on these algorithms, which resulted in good agreement with the theoretical results. In contrast, we give a simple and complete proof showing that finding an attractor with the shortest period is NP-hard.
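The baseline these algorithms improve on is the naive enumeration of all 2^n states; a minimal sketch of that baseline (the representation of update functions is an assumption for illustration):

```python
from itertools import product

def singleton_attractors(update_funcs):
    """Naive enumeration: a state x is a singleton attractor exactly when
    every gene's update function maps x back to its current value.
    The ordering-based algorithms in the paper prune this 2^n search."""
    n = len(update_funcs)
    return [x for x in product((0, 1), repeat=n)
            if all(update_funcs[i](x) == x[i] for i in range(n))]
```

The gene-ordering idea is to test genes in an order that falsifies the fixed-point condition early, so most states are rejected after examining only a few genes.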

  8. The Verification of Hybrid Image Deformation algorithm for PIV

    Directory of Open Access Journals (Sweden)

    Novotný Jan

    2016-06-01

    Full Text Available The aim of this paper is to test a newly designed algorithm for more accurate calculation of the image displacement of seeding particles when taking measurements using the Particle Image Velocimetry (PIV) method. The proposed algorithm is based on a modification of the classical iterative approach, using three-point subpixel interpolation and relative deformation of individual interrogation areas for accurate detection of the signal peak position. The first part briefly describes the tested algorithm together with the results of the performed synthetic tests. The second part describes the measurement setup and the overall layout of the experiment. Subsequently, the results of the classical iterative scheme and of the designed algorithm are compared. The conclusion discusses the benefits of the tested algorithm, its advantages and disadvantages.

  9. Study on torque algorithm of switched reluctance motor

    Directory of Open Access Journals (Sweden)

    Xiaoguang LI

    2016-12-01

    Full Text Available To solve the torque ripple problem of the switched reluctance motor under traditional control methods, a direct torque control method for the switched reluctance motor is proposed. The direct torque algorithm controls flux magnitude and direction by querying the appropriate voltage vector in a switching table. Taking torque as the directly controlled variable reduces the torque ripple of the motor, which broadens the application fields of switched reluctance motors. Starting from the theory of the direct torque algorithm, simulation models of a direct torque control system and a chopped-current control system are designed on the MATLAB/Simulink platform. Under the condition that the switched reluctance motor model and its load are identical, the method is compared with the chopped-current algorithm. Finally, the feasibility of the direct torque algorithm is verified on a hardware experimental platform. The results demonstrate that the direct torque algorithm controls torque ripple effectively, which opens a wider application field for the switched reluctance motor.

  10. A novel hybrid self-adaptive bat algorithm.

    Science.gov (United States)

    Fister, Iztok; Fong, Simon; Brest, Janez; Fister, Iztok

    2014-01-01

    Nature-inspired algorithms attract many researchers worldwide for solving the hardest optimization problems. One of the newest members of this extensive family is the bat algorithm. To date, many variants of this algorithm have emerged for solving continuous as well as combinatorial problems. One of the more promising variants, a self-adaptive bat algorithm, has recently been proposed that enables self-adaptation of its control parameters. In this paper, we have hybridized this algorithm with different DE strategies, applying them as local search heuristics to improve the current best solution and direct the swarm towards better regions of the search space. The results of exhaustive experiments were promising and have encouraged us to invest further effort in this direction.
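The canonical bat-algorithm move that all these variants build on can be sketched in one dimension; the frequency range is an illustrative default, and the self-adaptive and DE-hybrid mechanisms of the paper are only noted in the comment:

```python
import random

def bat_step(pos, vel, best, freq_min=0.0, freq_max=2.0):
    """One canonical bat-algorithm move in 1-D: a randomly drawn frequency
    scales the pull toward the swarm's best-known position. The
    self-adaptive variant additionally evolves its control parameters,
    and the hybrid in the paper adds DE-style local search on top."""
    f = freq_min + (freq_max - freq_min) * random.random()
    vel = vel + (pos - best) * f
    return pos + vel, vel
```

Note that a bat sitting exactly at the best position keeps its velocity unchanged, which is why extra local-search moves (here, the DE strategies) are needed to refine the best solution itself.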

  11. Congested Link Inference Algorithms in Dynamic Routing IP Network

    Directory of Open Access Journals (Sweden)

    Yu Chen

    2017-01-01

    Full Text Available The performance of current congested-link inference algorithms, such as the classical CLINK algorithm, degrades markedly in dynamic-routing IP networks. To overcome this problem, based on the assumptions of the Markov property and time homogeneity, we build a Variable Structure Discrete Dynamic Bayesian (VSDDB) network as a simplified model of a dynamic-routing IP network. Under the simplified VSDDB model, based on the Bayesian Maximum A Posteriori (BMAP) and Rest Bayesian Network Model (RBNM), we propose an Improved CLINK (ICLINK) algorithm. Considering that multiple links are often congested concurrently, we also propose the CLILRS algorithm (Congested Link Inference based on Lagrangian Relaxation Subgradient) to infer the set of congested links. We validated our results through analytical, simulation, and actual Internet experiments.

  12. Quicksort algorithm again revisited

    Directory of Open Access Journals (Sweden)

    Charles Knessl

    1999-12-01

    Full Text Available We consider the standard Quicksort algorithm that sorts n distinct keys, with all possible n! orderings of the keys being equally likely. Equivalently, we analyze the total path length L(n) in a randomly built binary search tree. Obtaining the limiting distribution of L(n) is still an outstanding open problem. In this paper, we establish an integral equation for the probability density of the number of comparisons L(n). Then, we investigate the large deviations of L(n). We shall show that the left tail of the limiting distribution is much "thinner" (i.e., double exponential) than the right tail (which is only exponential). Our results contain some constants that must be determined numerically. We use formal asymptotic methods of applied mathematics such as the WKB method and matched asymptotics.
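The quantity L(n) studied above can be observed directly by counting the comparisons textbook Quicksort performs; a minimal sketch (first element as pivot, which is equivalent to a random pivot when the input permutation is random):

```python
def quicksort_comparisons(arr):
    """Count key comparisons made by textbook Quicksort with the first
    element as pivot. For a random permutation of distinct keys this
    equals the total path length L(n) of the corresponding randomly
    built binary search tree."""
    if len(arr) <= 1:
        return 0
    pivot, rest = arr[0], arr[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)
```

Running this over many random permutations gives the empirical distribution whose tails the paper analyzes; already-sorted input exhibits the worst case n(n-1)/2, the extreme of the exponential right tail.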

  13. Fighting Censorship with Algorithms

    Science.gov (United States)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  14. Algorithmic Reflections on Choreography

    Directory of Open Access Journals (Sweden)

    Pablo Ventura

    2016-11-01

    Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.

  15. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Full Text Available Because the pixel information of a depth image is derived from distance information, when implementing the SURF algorithm with a Kinect sensor for static sign language recognition there can be mismatched pairs in the palm area. This paper proposes a feature point selection algorithm: by filtering the SURF feature points step by step, based on the number of feature points within an adaptive radius r and the distance between two points, it not only greatly improves the recognition rate but also ensures robustness to environmental factors such as skin color, illumination intensity, complex background, and angle and scale changes. The experimental results show that the improved SURF algorithm effectively improves the recognition rate and has good robustness.

  16. Optimizations of Patch Antenna Arrays Using Genetic Algorithms Supported by the Multilevel Fast Multipole Algorithm

    Directory of Open Access Journals (Sweden)

    C. Onol

    2014-12-01

    Full Text Available We present optimizations of patch antenna arrays using genetic algorithms and highly accurate full-wave solutions of the corresponding radiation problems with the multilevel fast multipole algorithm (MLFMA. Arrays of finite extent are analyzed by using MLFMA, which accounts for all mutual couplings between array elements efficiently and accurately. Using the superposition principle, the number of solutions required for the optimization of an array is reduced to the number of array elements, without resorting to any periodicity and similarity assumptions. Based on numerical experiments, genetic optimizations are improved by considering alternative mutation, crossover, and elitism mechanisms. We show that the developed optimization environment based on genetic algorithms and MLFMA provides efficient and effective optimizations of antenna excitations, which cannot be obtained with array-factor approaches, even for relatively simple arrays with identical elements.

  17. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.

  18. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two typical classes of routing algorithms are analyzed, chiefly clustering routing algorithms, and the advantages, disadvantages, and applicability of each class are discussed.

  19. Algebraic Approach to Algorithmic Logic

    OpenAIRE

    Bancerek Grzegorz

    2014-01-01

    We introduce algorithmic logic - an algebraic approach according to [25]. It is done in three stages: propositional calculus, quantifier calculus with equality, and finally proper algorithmic logic. For each stage appropriate signature and theory are defined. Propositional calculus and quantifier calculus with equality are explored according to [24]. A language is introduced with language signature including free variables, substitution, and equality. Algorithmic logic requires a bialgebra st...

  20. Genetic K-Means Algorithm

    OpenAIRE

    Krishna, K; Murty, Narasimha M

    1999-01-01

    In this paper, we propose a novel hybrid genetic algorithm (GA) that finds a globally optimal partition of a given data into a specified number of clusters. GAs used earlier in clustering employ either an expensive crossover operator to generate valid child chromosomes from parent chromosomes or a costly fitness function or both. To circumvent these expensive operations, we hybridize GA with a classical gradient descent algorithm used in clustering viz., K-means algorithm. Hence, the name gen...
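The inexpensive local operator that this hybrid reuses in place of costly crossover is the classical K-means iteration; a minimal 1-D sketch (the data layout is an assumption for illustration):

```python
def kmeans_step(points, centers):
    """One K-means iteration on 1-D data: assign each point to its nearest
    centre, then move each centre to the mean of its cluster. This is the
    classical 'gradient descent'-style clustering step the hybrid GA
    applies to its chromosomes."""
    clusters = [[] for _ in centers]
    for p in points:
        i = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
        clusters[i].append(p)
    return [sum(c) / len(c) if c else centers[i]
            for i, c in enumerate(clusters)]
```

In the hybrid, each GA individual encodes a partition; applying one such step to every individual pulls the population toward locally optimal partitions while selection handles the global search.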

  1. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few...... algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
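The distance-to-average-point measure that triggers DGEA's phase switches is simple to compute; a sketch follows (DGEA additionally normalises by the diagonal of the search space, which is omitted here):

```python
import math

def diversity(population):
    """Distance-to-average-point measure: mean Euclidean distance of the
    individuals from the population centroid. DGEA switches from
    exploitation to exploration when this value drops below a threshold
    (normalisation by the search-space diagonal is omitted here)."""
    dim = len(population[0])
    centroid = [sum(ind[d] for ind in population) / len(population)
                for d in range(dim)]
    return sum(math.dist(ind, centroid) for ind in population) / len(population)
```

A value near zero means the population has collapsed onto one point, the situation the exploration (mutation) phase is meant to repair.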

  2. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory Backed up by Practical Examples: The book covers neural networks, graphical models, reinforcement le

  3. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of the algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain, because the landscapes/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  4. Recent results on howard's algorithm

    DEFF Research Database (Denmark)

    Miltersen, P.B.

    2012-01-01

    Howard’s algorithm is a fifty-year-old, generally applicable algorithm for sequential decision making in the face of uncertainty. It is routinely used in practice in numerous application areas that are so important that they usually go by their acronyms, e.g., OR, AI, and CAV. While Howard’s algorithm...... is generally recognized as fast in practice, until recently, its worst case time complexity was poorly understood. However, a surge of results since 2009 has led us to a much more satisfactory understanding of the worst case time complexity of the algorithm in the various settings in which it applies...

  5. Palm Print Edge Extraction Using Fractional Differential Algorithm

    OpenAIRE

    Chunmei Chi; Feng Gao

    2013-01-01

    An algorithm based on fractional differences was used for edge extraction of thenar palm print images. Based on the fractional-order difference function deduced from the classical fractional differential G-L (Grünwald-Letnikov) definition, three filter templates were constructed to extract the thenar palm print edge. The experimental results showed that this algorithm can reduce noise, detect rich edge detail, and achieve a higher SNR than traditional methods.
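The Grünwald-Letnikov construction the abstract refers to can be sketched in a few lines. The fractional order `v` and the mask length below are illustrative assumptions, not values taken from the paper:

```python
# Sketch of a Grünwald-Letnikov (G-L) fractional-difference mask.
# Coefficients follow the recurrence c_0 = 1, c_k = c_{k-1} * (k - 1 - v) / k,
# i.e. (-1)^k * binom(v, k).

def gl_coefficients(v, n):
    """First n G-L fractional-difference coefficients for order v."""
    coeffs = [1.0]
    for k in range(1, n):
        coeffs.append(coeffs[-1] * (k - 1 - v) / k)
    return coeffs

def fractional_diff_1d(signal, v=0.5, n=5):
    """Apply an n-tap G-L fractional difference along a 1-D signal."""
    c = gl_coefficients(v, n)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k in range(n):
            if i - k >= 0:
                acc += c[k] * signal[i - k]
        out.append(acc)
    return out
```

A quick sanity check on this sketch: for v = 1 the mask reduces to the classical first difference [1, -1], so the operator interpolates between identity (v = 0) and ordinary differencing (v = 1). A 2-D filter template would apply such a mask along several directions.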

  6. ARBITER: Adaptive rate-based intelligent HTTP streaming algorithm

    OpenAIRE

    Zahran, Ahmed H.; Sreenan, Cormac J.

    2016-01-01

    Dynamic Adaptive streaming over HTTP (DASH) is widely used by content providers for video delivery and dominates traffic on cellular networks. The inherent variability in both video bitrate and network bandwidth negatively impacts the user Quality of Experience (QoE), motivating the design of better DASH-compliant adaptation algorithms. In this paper we present ARBITER, a novel streaming adaptation algorithm that explicitly integrates the variations in both video and network dynamics in its a...
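The abstract does not spell out ARBITER's rules, but the rate-based baseline such algorithms improve on can be sketched as follows. The representation ladder and safety margin are assumptions for illustration, not values from the paper:

```python
# Minimal rate-based DASH adaptation sketch (not the ARBITER algorithm itself):
# pick the highest bitrate that fits under a safety fraction of the
# estimated throughput, falling back to the lowest representation.

def select_bitrate(throughput_kbps, representations, safety=0.8):
    """Return the highest available bitrate not exceeding safety * throughput."""
    budget = safety * throughput_kbps
    feasible = [r for r in sorted(representations) if r <= budget]
    return feasible[-1] if feasible else min(representations)
```

A real adaptation algorithm would additionally smooth the throughput estimate and account for buffer occupancy; the point here is only the core rate-matching decision that the variability in video bitrate and network bandwidth makes difficult.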

  7. A Chinese text classification system based on Naive Bayes algorithm

    Directory of Open Access Journals (Sweden)

    Cui Wei

    2016-01-01

    Full Text Available In this paper, aiming at the characteristics of Chinese text classification, the ICTCLAS (Chinese lexical analysis system of the Chinese Academy of Sciences) is used for document segmentation; after data cleaning and stop-word filtering, the information gain and document frequency feature selection algorithms are used for document feature selection. On this basis, a text classifier is implemented using the Naive Bayesian algorithm, and experiments and analysis are carried out on the system using the Chinese corpus of Fudan University.
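The classifier stage of such a system can be illustrated with a minimal multinomial Naive Bayes with Laplace smoothing. Segmentation, stop-word filtering and feature selection are assumed to have produced the token lists already, and the toy vocabulary is invented:

```python
import math
from collections import Counter, defaultdict

# Minimal multinomial Naive Bayes with Laplace (add-one) smoothing.
class NaiveBayesText:
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.prior = Counter(labels)            # class document counts
        self.word_counts = defaultdict(Counter) # per-class word counts
        self.totals = Counter()                 # per-class total words
        self.vocab = set()
        for doc, y in zip(docs, labels):
            for w in doc:
                self.word_counts[y][w] += 1
                self.totals[y] += 1
                self.vocab.add(w)
        return self

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        V = len(self.vocab)
        n = sum(self.prior.values())
        for y in self.classes:
            lp = math.log(self.prior[y] / n)
            for w in doc:
                lp += math.log((self.word_counts[y][w] + 1) / (self.totals[y] + V))
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

In the system described, the token lists would come from ICTCLAS segmentation and the vocabulary would already be pruned by information gain or document frequency.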

  8. Reliable exterior orientation by a robust anisotropic orthogonal Procrustes Algorithm

    OpenAIRE

    Fusiello, A; Maset, E; Crosilla, F

    2013-01-01

    The paper presents a robust version of a recent anisotropic orthogonal Procrustes algorithm that has been proposed to solve the so-called camera exterior orientation problem in computer vision and photogrammetry. In order to identify outliers, which are common in visual data, we propose an algorithm based on Least Median of Squares to detect a minimal outlier-free sample, and a Forward Search procedure, used to augment the inlier set one sample at a time. Experiments with synthetic d...

  9. Comparison of greedy algorithms for α-decision tree construction

    KAUST Repository

    Alkhalid, Abdulaziz

    2011-01-01

    A comparison among different heuristics that are used by greedy algorithms which constructs approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from UCI Machine Learning Repository [2]. Complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.

  10. Application of the Apriori algorithm for adverse drug reaction detection.

    Science.gov (United States)

    Kuo, M H; Kushniruk, A W; Borycki, E M; Greig, D

    2009-01-01

    The objective of this research is to assess the suitability of the Apriori association analysis algorithm for the detection of adverse drug reactions (ADR) in health care data. The Apriori algorithm is used to perform association analysis on the characteristics of patients, the drugs they are taking, their primary diagnosis, co-morbid conditions, and the ADRs or adverse events (AE) they experience. This analysis produces association rules that indicate what combinations of medications and patient characteristics lead to ADRs. A simple data set is used to demonstrate the feasibility and effectiveness of the algorithm.
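A compact version of the Apriori frequent-itemset step can be sketched as below. The transaction encoding (mixing drugs, patient attributes and adverse events as items) and the support threshold are illustrative assumptions, not the study's data:

```python
from itertools import combinations

# Compact Apriori sketch: find all itemsets occurring in at least
# min_support transactions, growing candidates level by level.
def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) mapped to their support counts."""
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, level, k = {}, set(items), 1
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        level_freq = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level_freq)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        keys = list(level_freq)
        level = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        k += 1
    return frequent
```

Association rules are then read off the frequent itemsets, e.g. a frequent {drug, event} pair with high confidence suggests a candidate ADR signal for expert review.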

  11. Improved Runtime Analysis of the Simple Genetic Algorithm

    DEFF Research Database (Denmark)

    Oliveto, Pietro S.; Witt, Carsten

    2013-01-01

    A runtime analysis of the Simple Genetic Algorithm (SGA) for the OneMax problem has recently been presented proving that the algorithm requires exponential time with overwhelming probability. This paper presents an improved analysis which overcomes some limitations of our previous one. Firstly...... improvement towards the reusability of the techniques in future systematic analyses of GAs. Finally, we consider the more natural SGA using selection with replacement rather than without replacement although the results hold for both algorithmic versions. Experiments are presented to explore the limits...

  12. Improved time complexity analysis of the Simple Genetic Algorithm

    DEFF Research Database (Denmark)

    Oliveto, Pietro S.; Witt, Carsten

    2015-01-01

    A runtime analysis of the Simple Genetic Algorithm (SGA) for the OneMax problem has recently been presented proving that the algorithm with population size μ ≤ n^(1/8−ε) requires exponential time with overwhelming probability. This paper presents an improved analysis which overcomes some limitations...... this is a major improvement towards the reusability of the techniques in future systematic analyses of GAs. Finally, we consider the more natural SGA using selection with replacement rather than without replacement although the results hold for both algorithmic versions. Experiments are presented to explore...

  13. A Global Optimization Algorithm for Sum of Linear Ratios Problem

    Directory of Open Access Journals (Sweden)

    Yuelin Gao

    2013-01-01

    Full Text Available We equivalently transform the sum-of-linear-ratios programming problem into a bilinear programming problem; then, using the linear characteristics of the convex and concave envelopes of the product of two variables, a linear relaxation of the bilinear programming problem is given, which determines a lower bound on the optimal value of the original problem. A branch and bound algorithm for solving the sum-of-linear-ratios programming problem is therefore put forward, and the convergence of the algorithm is proved. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
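The linear relaxation described here rests on the convex and concave envelopes of a product of two bounded variables (the McCormick envelopes). A sketch of evaluating those bounds at a point, under the assumption of a simple box domain:

```python
# McCormick envelopes of the bilinear term w = x * y over the box
# [xl, xu] x [yl, yu]: the tightest linear under- and over-estimators.
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Lower/upper envelope values of x*y at (x, y) over the box."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper
```

In a branch and bound scheme, replacing each bilinear term by a fresh variable constrained between these envelopes yields the linear program whose optimum is the lower bound; branching shrinks the boxes and tightens the envelopes.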

  14. Parameter estimation for chaotic systems using improved bird swarm algorithm

    Science.gov (United States)

    Xu, Chuangbiao; Yang, Renhuan

    2017-12-01

    Parameter estimation of chaotic systems is an important problem in nonlinear science and has aroused increasing interest in many research fields; it can basically be reduced to a multidimensional optimization problem. In this paper, an improved boundary bird swarm algorithm (IBBSA) is used to estimate the parameters of chaotic systems. The algorithm combines the good global convergence and robustness of the bird swarm algorithm with the exploitation capability of an improved boundary learning strategy. Experiments are conducted on the Lorenz system and a coupled motor system. Numerical simulation results reveal the effectiveness and desirable performance of IBBSA for parameter estimation of chaotic systems.
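The reduction to optimization works by simulating the candidate system and scoring the trajectory mismatch. A hedged sketch for the Lorenz system follows; the Euler integration and step size are assumptions for illustration, not the paper's settings:

```python
# One explicit-Euler step of the Lorenz system with parameters (sigma, rho, beta).
def lorenz_step(state, sigma, rho, beta, dt=0.01):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# The objective a swarm optimizer would minimize: simulate with candidate
# parameters and accumulate the squared error against observed states.
def estimation_cost(params, observed, x0):
    """Sum of squared errors between simulated and observed trajectories."""
    sigma, rho, beta = params
    state, cost = x0, 0.0
    for obs in observed:
        state = lorenz_step(state, sigma, rho, beta)
        cost += sum((a - b) ** 2 for a, b in zip(state, obs))
    return cost
```

The cost is zero exactly when the candidate parameters reproduce the observed trajectory; any swarm, genetic or gradient-free optimizer can then search the (sigma, rho, beta) space.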

  15. Restart-Based Genetic Algorithm for the Quadratic Assignment Problem

    Science.gov (United States)

    Misevicius, Alfonsas

    The power of genetic algorithms (GAs) has been demonstrated for various domains of computer science, including combinatorial optimization. In this paper, we propose a new conceptual modification of the genetic algorithm entitled the "restart-based genetic algorithm" (RGA). An effective implementation of RGA for a well-known combinatorial optimization problem, the quadratic assignment problem (QAP), is discussed. The results obtained from computational experiments on QAP instances from the publicly available library QAPLIB show excellent performance of RGA. This is especially true for real-life-like QAPs.
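The QAP objective, and the restart idea in miniature (re-seeding the search when a run finishes), can be sketched as follows. The swap-based local improvement stands in for the paper's genetic operators and is purely illustrative:

```python
import random

# QAP objective: cost of assigning facility i to location perm[i],
# given a flow matrix between facilities and a distance matrix between locations.
def qap_cost(perm, flow, dist):
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

# Toy restart loop: each restart re-seeds with a fresh random permutation,
# then improves it by accepting non-worsening random swaps.
def restart_search(flow, dist, restarts=10, steps=50, seed=0):
    rng = random.Random(seed)
    n = len(flow)
    best_perm, best_cost = None, float("inf")
    for _ in range(restarts):
        perm = list(range(n))
        rng.shuffle(perm)
        cost = qap_cost(perm, flow, dist)
        for _ in range(steps):
            i, j = rng.randrange(n), rng.randrange(n)
            perm[i], perm[j] = perm[j], perm[i]
            new_cost = qap_cost(perm, flow, dist)
            if new_cost <= cost:
                cost = new_cost
            else:
                perm[i], perm[j] = perm[j], perm[i]  # undo worsening swap
        if cost < best_cost:
            best_perm, best_cost = perm[:], cost
    return best_perm, best_cost
```

In the RGA proper, the restart would re-initialize a whole population rather than one solution, which is what helps on rugged real-life-like QAP landscapes.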

  16. Award 1 Title: Acoustic Communications 2011 Experiment: Deployment Support and Post Experiment Data Handling and Analysis. Award 2 Title: Exploiting Structured Dependencies in the Design of Adaptive Algorithms for Underwater Communication Award. 3 Title: Coupled Research in Ocean Acoustics and Signal Processing for the Next Generation of Underwater Acoustic Communication Systems

    Science.gov (United States)

    2015-09-30

    Exploiting Structured Dependencies in the Design of Adaptive Algorithms for Underwater Communication. Award #3 Title: Coupled Research in Ocean Acoustics and Signal Processing for the Next Generation of Underwater Acoustic Communication Systems. James Preisig, Woods Hole Oceanographic Institution. 141-0079, N00014-14C-0230. LONG-TERM GOALS: A high performance, versatile, and reliable underwater communications capability is of strategic

  17. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
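The chordality test underlying such algorithms is textbook machinery (maximum cardinality search plus a perfect-elimination-ordering check), not the paper's parallel augmentation scheme. A sketch:

```python
# Maximum cardinality search (MCS) chordality test: a graph is chordal
# iff the reverse of an MCS visit order is a perfect elimination ordering.
def is_chordal(adj):
    """adj maps each vertex to a set of neighbours. Returns True if chordal."""
    order, weight = [], {v: 0 for v in adj}
    remaining = set(adj)
    while remaining:
        v = max(remaining, key=lambda u: weight[u])  # MCS: most visited neighbours
        order.append(v)
        remaining.discard(v)
        for u in adj[v] & remaining:
            weight[u] += 1
    position = {v: i for i, v in enumerate(order)}
    # Verify the perfect elimination ordering (reverse of MCS order):
    # for each vertex, its later-eliminated neighbours minus the earliest
    # of them must all be adjacent to that earliest neighbour.
    for v in order:
        later = {u for u in adj[v] if position[u] < position[v]}
        if later:
            w = max(later, key=lambda u: position[u])
            if not (later - {w}) <= adj[w]:
                return False
    return True
```

An augmentation-based algorithm would call such a check (or an incremental variant) to decide whether a candidate edge can be added to the growing chordal subgraph.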

  18. Patch Based Multiple Instance Learning Algorithm for Object Tracking.

    Science.gov (United States)

    Wang, Zhenjie; Wang, Lijia; Zhang, Hua

    2017-01-01

    To deal with the problems of illumination changes or pose variations and serious partial occlusion, a patch-based multiple instance learning (P-MIL) algorithm is proposed. The algorithm divides an object into many blocks. Then, the online MIL algorithm is applied to each block to obtain a strong classifier. The algorithm takes account of both the average classification score and the classification scores of all the blocks for detecting the object. In particular, compared with the whole-object-based MIL algorithm, the P-MIL algorithm detects the object according to the unoccluded patches when partial occlusion occurs. After detecting the object, the learning rates for updating the weak classifiers' parameters are adaptively tuned. The classifier updating strategy avoids over-updating and under-updating the parameters. Finally, the proposed method is compared with other state-of-the-art algorithms on several classical videos. The experimental results illustrate that the proposed method performs well, especially in cases of illumination changes or pose variations and partial occlusion. Moreover, the algorithm realizes real-time object tracking.

  19. Genetic Algorithms for Multiple-Choice Problems

    Science.gov (United States)

    Aickelin, Uwe

    2010-04-01

    This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem structure exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.

  20. CUDT: a CUDA based decision tree algorithm.

    Science.gov (United States)

    Lo, Win-Tsung; Chang, Yue-Shan; Sheu, Ruey-Kai; Chiu, Chun-Chieh; Yuan, Shyan-Ming

    2014-01-01

    The decision tree is one of the most famous classification methods in data mining. Many approaches have been proposed that focus on improving the performance of decision trees. However, those algorithms were developed to run on traditional distributed systems, where the latency of processing the huge data generated by ubiquitous sensing nodes cannot be improved without new technology. In order to improve data processing latency in huge data mining, in this paper we design and implement a new parallelized decision tree algorithm on CUDA (compute unified device architecture), a GPGPU solution provided by NVIDIA. In the proposed system, the CPU is responsible for flow control while the GPU is responsible for computation. We have conducted many experiments to evaluate the system performance of CUDT and made a comparison with a traditional CPU version. The results show that CUDT is 5-55 times faster than Weka-j48 and 18 times faster than SPRINT for large data sets.

  1. Algorithm for the Stochastic Generalized Transportation Problem

    Directory of Open Access Journals (Sweden)

    Marcin Anholcer

    2012-01-01

    Full Text Available The equalization method for the stochastic generalized transportation problem is presented. The algorithm allows us to find the optimal solution to the problem of minimizing the expected total cost in the generalized transportation problem with random demand. After a short introduction and literature review, the algorithm is presented. It is a version of the method proposed by the author for the nonlinear generalized transportation problem. It is shown that this version of the method generates a sequence of solutions convergent to a KKT point. This guarantees the global optimality of the obtained solution, as the expected cost functions are convex and twice differentiable. The computational experiments performed for test problems of reasonable size show that the method is fast. (original abstract)

  2. Fireworks Algorithm with Enhanced Fireworks Interaction.

    Science.gov (United States)

    Zhang, Bei; Zheng, Yu-Jun; Zhang, Min-Xia; Chen, Sheng-Yong

    2017-01-01

    As a relatively new metaheuristic in swarm intelligence, fireworks algorithm (FWA) has exhibited promising performance on a wide range of optimization problems. This paper aims to improve FWA by enhancing fireworks interaction in three aspects: 1) Developing a new Gaussian mutation operator to make sparks learn from more exemplars; 2) Integrating the regular explosion operator of FWA with the migration operator of biogeography-based optimization (BBO) to increase information sharing; 3) Adopting a new population selection strategy that enables high-quality solutions to have high probabilities of entering the next generation without incurring high computational cost. The combination of the three strategies can significantly enhance fireworks interaction and thus improve solution diversity and suppress premature convergence. Numerical experiments on the CEC 2015 single-objective optimization test problems show the effectiveness of the proposed algorithm. The application to a high-speed train scheduling problem also demonstrates its feasibility in real-world optimization problems.

  3. Algoritmo para el tratamiento del neumotórax traumático: experiencia de 10 años / Algorithm for the treatment of traumatic pneumothorax: ten-year experience

    Directory of Open Access Journals (Sweden)

    Gimel Sosa Martín

    2010-12-01

    diseases. The objective of the present paper was to analyze the behaviour of spontaneous and traumatic pneumothorax and to assess its treatment. METHODS. A multi-center study was conducted, using analytical, descriptive, retrospective-prospective and cross-sectional elements, in 154 patients with a clinical and radiological diagnosis of pneumothorax seen between October 1998 and December 2008, following the work algorithm designed for this aim. RESULTS. In the present study there was a predominance of male sex, smoking, and the traumatic type of pneumothorax. Minimal pleurotomy was effective in 94.8% of patients. The traumatic pneumothoraces numbered 126 (81.2%). Of these, 120 (77.9%) were caused by firearm wounds and contusions, and six (3.8%) were iatrogenic. The most frequent complication after pleurotomy was obstruction of the pleural tube. CONCLUSIONS. Medical treatment, indifferent minimal pleurotomy, high minimal pleurotomy and chemical pleurodesis had an effectiveness between 90% and 100%. There was a predominance of the traumatic types of pneumothorax in this series; thoracotomy indications were due to persistent, traumatic, or relapsing pneumothorax.

  4. WWW portal usage analysis using genetic algorithms

    Directory of Open Access Journals (Sweden)

    Ondřej Popelka

    2009-01-01

    Full Text Available The article proposes a new method suitable for advanced analysis of web portal visits. This is part of retrieving information and knowledge from web usage data (web usage mining). Such information is necessary in order to gain better insight into visitors' needs and consumer behaviour in general. By leveraging this information a company can optimize the organization of its internet presentations and offer a better end-user experience. The proposed approach uses Grammatical Evolution, a computational method based on genetic algorithms. Grammatical Evolution uses a context-free grammar to generate the solution in an arbitrary reusable form, which allows us to describe visitors' behaviour in different manners depending on the desired further processing. In this article we use a description in a procedural programming language. Web server access log files are used as source data. The extraction of behaviour patterns can currently be solved using statistical analysis, specifically methods based on sequential analysis; our objective is to develop an alternative algorithm. The article further describes the basic algorithms of two-level grammatical evolution: basic Grammatical Evolution and Differential Evolution, which forms the second phase of the computation. Grammatical Evolution is used to generate the basic structure of the solution, in the form of a part of application code. Differential Evolution is used to find optimal parameters for this solution, i.e. the specific pages visited by a random visitor. The grammar used to conduct the experiments is described, along with explanations of the links to the actual implementation of the algorithm. Furthermore, the fitness function is described, together with the reasons that led to its current shape. Finally, the process of analyzing and filtering the raw input data is described, as it is a vital part of obtaining reasonable results.

  5. Accounting for the Effect of Infill Walls on Reinforced Concrete Frame Behaviour with a Simplified Method

    Directory of Open Access Journals (Sweden)

    Hasan Aksoy

    2015-07-01

    Full Text Available When designing a reinforced concrete building or performing a performance assessment, the effect of infill walls is mostly neglected in practice. Taking this effect into account requires a number of calculations and assumptions. In this study, a simplified method proposed for accounting for the effect of infill walls on reinforced concrete frame behaviour by means of a single coefficient, and the criteria for applying the method, are examined. It is investigated whether a similar proposal, given for this purpose in the Principles for the Identification of Risky Buildings (RYTEİE) applied under the Urban Transformation Law, is appropriate. Column shear force, interstorey drift, and mode shapes and periods are taken as the basic criteria in the evaluations. An analytical model of a 4-storey building is constructed and the effect of the infill walls on the reinforced concrete frame is examined. As a result of this study, it is determined that the RYTEİE proposal, which accounts for the infill wall effect with a coefficient, remains on the safe side in most of the cases examined. However, it is observed that when the infill walls cause torsion in the structure, due to an asymmetric placement in plan, larger shear force demands arise in some columns. Accordingly, it is recommended that, in addition to the criteria limiting the buildings to which the RYTEİE proposal can be applied, a requirement be emphasised that it not be used when infill walls cause torsion.

  6. Pandora Particle Flow Algorithm

    CERN Document Server

    Marshall, J S

    2013-01-01

    A high-energy e+e− collider, such as the ILC or CLIC, is arguably the best option to complement and extend the LHC physics programme. A lepton collider will allow for exploration of Standard Model physics, such as precise measurements of the Higgs, top and gauge sectors, in addition to enabling a multitude of New Physics searches. However, physics analyses at such a collider will place unprecedented demands on calorimetry, with a required jet energy resolution of σE/E ≲ 3.5%. Meeting these requirements will need a new approach to calorimetry. The particle flow approach to calorimetry requires both fine-granularity detectors and sophisticated software algorithms. It promises to deliver unparalleled jet energy resolution by fully reconstructing the paths of individual particles through the detector. The energies of charged particles can then be extracted from precise inner-detector tracker measurements, whilst photon energies will be measured in the ECAL, and only neutral hadron energies (10% of jet energies...

  7. Algorithm for Autonomous Landing

    Science.gov (United States)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
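The constant-tau landing law can be sketched numerically. The tau value, time step and heights below are illustrative; the real system would estimate tau from optical-flow divergence rather than from known height and descent rate:

```python
# Time-to-collision tau = h / (dh/dt): the quantity optical flow measures
# without knowing absolute distance.
def time_to_collision(height, descent_rate):
    return height / descent_rate

# Constant-tau descent law: commanding v = h / tau makes height decay
# exponentially, so the vehicle slows smoothly as it approaches the ground.
def constant_tau_descent(height, tau, dt, steps):
    trajectory = [height]
    for _ in range(steps):
        height -= (height / tau) * dt
        trajectory.append(height)
    return trajectory
```

The trajectory never reaches zero in finite time under this pure law, which matches the abstract's remark that touchdown may occur at a higher velocity than desired and can be absorbed by cushioning or landing legs.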

  8. Pulmonary thromboembolism diagnosis algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Kasai, Takeshi; Eto, Jun; Hayano, Daisuke; Ohashi, Masaki; Yoneda, Takahiro; Oyama, Hisaya; Inaba, Akira [Kameda General Hospital, Kamogawa, Chiba (Japan). Trauma and Emergency Care Center

    2002-01-01

    Our algorithm for diagnosing pulmonary thromboembolism combines ventilation/perfusion scanning with clinical criteria. Our perfusion scanning criterion states that high probability is defined by 2 segmental perfusion defects without corresponding radiographic abnormality, and indeterminate probability by less than 2 segmental perfusion defects (low probability: less than one segmental perfusion defect; intermediate: perfusion defects between high and low probability). The clinical criterion is divided into 7 items related to symptoms and signs suggestive of pulmonary thromboembolism. More than 4 items are defined as a highly suspicious clinical manifestation (HSCM), and fewer than 4 as a low suspicious clinical manifestation (LSCM). In 31 cases of high probability, 18 with HSCM did not undergo pulmonary angiography (PAG), and 13 with LSCM underwent PAG (positive: 11; negative: 2). In 12 cases of indeterminate probability, 7 with LSCM were observed without PAG, and 5 with HSCM underwent PAG (positive: 4; negative: 1). PAG performance thus decreased to 41.9%. The positive prediction of high probability is 93.5%, which is very high compared to indeterminate probability at 33.3%. (author)

  9. New Algorithm of Seed Finding for Track Reconstruction

    OpenAIRE

    Baranov Dmitry; Merts Sergei; Ososkov Gennady; Rogachevsky Oleg

    2016-01-01

    Event reconstruction is a fundamental problem in high energy physics experiments. It consists of track finding and track fitting procedures in the experiment's tracking detectors. This requires a tremendous search of detector responses belonging to each track, aimed at obtaining so-called “seeds”, i.e. initial approximations of the track parameters of charged particles. In this paper we propose a new algorithm for the seed-finding procedure for the BM@N experiment.

  10. New Algorithm of Seed Finding for Track Reconstruction

    Directory of Open Access Journals (Sweden)

    Baranov Dmitry

    2016-01-01

    Full Text Available Event reconstruction is a fundamental problem in high energy physics experiments. It consists of track finding and track fitting procedures in the experiment's tracking detectors. This requires a tremendous search of detector responses belonging to each track, aimed at obtaining so-called “seeds”, i.e. initial approximations of the track parameters of charged particles. In this paper we propose a new algorithm for the seed-finding procedure for the BM@N experiment.

  11. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dot-plot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1).

  12. GPU-Accelerated Apriori Algorithm

    Directory of Open Access Journals (Sweden)

    Jiang Hao

    2017-01-01

    Full Text Available This paper proposes a parallel Apriori algorithm based on the GPU (GPUApriori) for frequent itemset mining, and designs a storage structure using a bit table (BIT matrix) to replace the traditional storage mode. In addition, a parallel computing scheme on the GPU is discussed. The experimental results show that the GPUApriori algorithm can effectively improve the efficiency of frequent itemset mining.
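The bit-table idea can be shown in plain Python using integers as transaction bitmasks, so that support counting becomes a bitwise AND plus a popcount, which is the operation a GPU kernel would run over many candidates in parallel. Item names below are invented for illustration:

```python
# Bit-table representation: each item maps to an integer whose bit t is set
# iff transaction t contains the item.
def build_bit_table(transactions):
    table = {}
    for tid, t in enumerate(transactions):
        for item in t:
            table[item] = table.get(item, 0) | (1 << tid)
    return table

# Support of an itemset = popcount of the AND of its items' bitmasks.
def support(itemset, table):
    items = list(itemset)
    mask = table.get(items[0], 0)
    for item in items[1:]:
        mask &= table.get(item, 0)
    return bin(mask).count("1")
```

On the GPU, each candidate itemset's AND-and-popcount is independent of the others, which is what makes the bit-table layout attractive for parallel frequent itemset mining.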

  13. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  14. Recovery Rate of Clustering Algorithms

    NARCIS (Netherlands)

    Li, Fajie; Klette, Reinhard; Wada, T; Huang, F; Lin, S

    2009-01-01

    This article provides a simple and general way for defining the recovery rate of clustering algorithms using a given family of old clusters for evaluating the performance of the algorithm when calculating a family of new clusters. Under the assumption of dealing with simulated data (i.e., known old

  15. Five Performability Algorithms. A Comparison

    NARCIS (Netherlands)

    Cloth, L.; Haverkort, Boudewijn R.H.M.

    Since the introduction by John F. Meyer in 1980, various algorithms have been proposed to evaluate the performability distribution. In this paper we describe and compare five algorithms that have been proposed recently to evaluate this distribution: Picard's method, a uniformisation-based method, a

  16. Algorithms on ensemble quantum computers.

    Science.gov (United States)

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

    In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant, measurement-free implementation of the Toffoli and σ_z^(1/4) gates, as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.

  17. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  18. Ensemble algorithms in reinforcement learning

    NARCIS (Netherlands)

    Wiering, Marco A; van Hasselt, Hado

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and

  19. Algorithm Calculates Cumulative Poisson Distribution

    Science.gov (United States)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
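
    The underflow/overflow problem and the general shape of the fix can be shown in a few lines: computing the individual Poisson terms directly fails for large λ (e^−λ underflows double precision), but carrying the term recurrence in log space and combining with log-sum-exp stays stable. A sketch of the idea, not the CUMPOIS code itself:

    ```python
    import math

    def poisson_cdf(k, lam):
        """P(X <= k) for X ~ Poisson(lam), computed in log space so that
        very small (underflow) or very large (overflow) intermediate
        terms never occur."""
        # log P(X = j) obeys log p_j = log p_{j-1} + log(lam) - log(j).
        log_p = -lam                      # log P(X = 0)
        log_terms = [log_p]
        for j in range(1, k + 1):
            log_p += math.log(lam) - math.log(j)
            log_terms.append(log_p)
        # Log-sum-exp: factor out the largest term before exponentiating.
        m = max(log_terms)
        return min(1.0, math.exp(m) * sum(math.exp(t - m) for t in log_terms))
    ```

    For λ = 1000 the direct term e^−1000 (≈ 10^−435) underflows double precision, while the log-space version returns the CDF without trouble.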

  20. Algorithms seminars 1992-1993

    OpenAIRE

    Salvy, Bruno

    1993-01-01

    These seminar notes represent the proceedings (some in French) of a seminar devoted to the analysis of algorithms and related topics. The subjects covered include combinatorial models and random generation, symbolic computation, asymptotic analysis, average-case analysis of algorithms and data structures and some computational number theory.

  1. The asymptotic probabilistic genetic algorithm

    OpenAIRE

    Galushin, P.; Semenkin, E.

    2009-01-01

    This paper proposes a modification of the probabilistic genetic algorithm in which the genetic operators affect not particular solutions but the probability distribution of the solution vector's components. The paper also compares the reliability and efficiency of the base algorithm and the proposed modification on a set of test optimization problems and a bank loan portfolio problem.
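
    Operating on the probability distribution rather than on individual solutions is the idea behind estimation-of-distribution algorithms such as PBIL. The following is a hedged sketch of that family on the OneMax toy problem; the structure and parameters are ours, not the paper's:

    ```python
    import random

    def pbil_onemax(n_bits=20, pop=30, lr=0.1, gens=200, seed=0):
        """PBIL-style probabilistic GA: the 'genetic operator' updates a
        probability vector over bit values instead of mutating individual
        solutions. OneMax (maximise the number of ones) is the objective."""
        rng = random.Random(seed)
        p = [0.5] * n_bits            # distribution over each bit
        best = 0
        for _ in range(gens):
            samples = [[1 if rng.random() < pi else 0 for pi in p]
                       for _ in range(pop)]
            elite = max(samples, key=sum)
            best = max(best, sum(elite))
            # Shift the distribution toward the best sample of this generation.
            p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, elite)]
        return best
    ```

    The probability vector concentrates on good solutions over generations, which is the distribution-level analogue of selection pressure.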

  2. A distributed spanning tree algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge

    1988-01-01

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...

  3. A fast fractional difference algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    2014-01-01

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T², where T is the length of the time series. Our algorithm allows calculation speed of order T log...
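
    The speed-up rests on the observation that fractional differencing is a linear convolution of the series with the binomial-expansion coefficients of (1 − L)^d, which an FFT evaluates in O(T log T) rather than O(T²). A sketch of that idea (our own illustration, not the authors' code):

    ```python
    import numpy as np

    def frac_diff_coeffs(d, T):
        """Coefficients b_k of the binomial expansion of (1 - L)^d."""
        b = np.empty(T)
        b[0] = 1.0
        for k in range(1, T):
            b[k] = b[k - 1] * (k - 1 - d) / k
        return b

    def fast_frac_diff(x, d):
        """Fractional difference in O(T log T) via FFT-based convolution."""
        x = np.asarray(x, dtype=float)
        T = len(x)
        b = frac_diff_coeffs(d, T)
        n = 1
        while n < 2 * T:              # zero-pad to avoid circular wrap-around
            n *= 2
        y = np.fft.irfft(np.fft.rfft(b, n) * np.fft.rfft(x, n), n)
        return y[:T]
    ```

    For d = 1 the coefficients reduce to (1, −1, 0, …), so the routine reproduces the ordinary first difference with the first observation kept unchanged.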

  4. A Fast Fractional Difference Algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T², where T is the length of the time series. Our algorithm allows calculation speed of order T log...

  5. A Distributed Spanning Tree Algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...

  6. Adaptive Beamforming Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Z. Raida

    1998-09-01

    Full Text Available The presented submission describes how genetic algorithms can be applied to the control of adaptive antennas. The proposed optimization method is easy to implement on the one hand, but on the other it converges relatively slowly and depends on the parameters of the genetic algorithm. These disadvantages, as well as some possible improvements, are discussed in this paper.

  7. LHCb: LHCb VELO TELL1 Algorithms

    CERN Multimedia

    Hennessy, Karol

    2012-01-01

    The LHCb experiment is dedicated to searching for New Physics effects in the heavy flavour sector, precise measurements of CP violation and rare heavy meson decays. Precise tracking and vertexing around the interaction point is crucial in achieving these physics goals. The LHCb VELO (VErtex LOcator) silicon micro-strip detector is the highest precision vertex detector at the LHC and is located at only 8 mm from the proton beams. The high spatial resolution (up to 4 microns single hit precision) is obtained by a complex chain of processing algorithms to suppress noise and reconstruct clusters. These are implemented in large FPGAs, with over one million parameters that need to be individually optimised. Previously we presented a novel approach that was developed to optimise the parameters and to integrate their determination into the full software framework of the LHCb experiment. Presently we report on the experience gained from regular operation of the calibration and monitoring software with the collisio...
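
    The noise-suppression-plus-clustering chain described here can be caricatured in software: subtract pedestals, keep runs of strips over an inclusion threshold, accept a run only if it contains a seed strip, and report a charge-weighted centroid. The thresholds and logic below are invented for illustration and are not the actual TELL1 parameters:

    ```python
    def find_clusters(adc, pedestal, seed_thr=10.0, incl_thr=4.0):
        """Toy strip-cluster finder: pedestal subtraction, zero suppression
        at incl_thr, and cluster acceptance requiring a seed strip.
        Returns (centroid, total_charge) pairs."""
        signal = [a - p for a, p in zip(adc, pedestal)]
        clusters, run = [], []
        for j, s in enumerate(signal + [0.0]):   # sentinel flushes the last run
            if s >= incl_thr:
                run.append(j)
            elif run:
                if any(signal[k] >= seed_thr for k in run):
                    tot = sum(signal[k] for k in run)
                    cen = sum(k * signal[k] for k in run) / tot
                    clusters.append((cen, tot))
                run = []
        return clusters
    ```

    In the real detector this logic runs in FPGA firmware per strip at the collision rate; the thresholds are among the per-channel parameters that need optimising.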

  8. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  9. Poster - Thurs Eve-03: Dose verification using a 2D diode array (Mapcheck) for electron beam modeling, QA and patient customized cutouts.

    Science.gov (United States)

    Ghasroddashti, E; Sawchuk, S

    2008-07-01

    To assess a diode detector array (MapCheck) for commissioning, quality assurance (QA), and patient-specific QA for electrons. 2D dose information was captured at various depths for several square fields ranging from 2×2 to 25×25 cm², and for 9 patient-customized cutouts, using both MapCheck and a scanning water phantom. Beam energies of 6, 9, 12, 16 and 20 MeV produced by Varian linacs were used. The water tank, beam energies and fields were also modeled on the Pinnacle planning system to obtain dose information. MapCheck, water phantom and Pinnacle results were compared. Relative output factors (ROFs) acquired with MapCheck were compared to an in-house algorithm (JeffIrreg). Inter- and intra-observer variability was also investigated. Results: Profiles and %DD data for MapCheck, water tank, and Pinnacle agree well. High-dose, low-dose-gradient comparisons agree to within 1% between MapCheck and the water phantom. Field size comparisons showed mostly sub-millimeter agreement. ROFs for MapCheck and JeffIrreg agreed within 2.0% (mean = 0.9% ± 0.6%). The current standard for electron commissioning and QA is the scanning water tank, which may be inefficient. Our results demonstrate that MapCheck can potentially be an alternative. Also, the dose distributions for patient-specific electron treatments require verification. This procedure is particularly challenging when the minimum dimension across the central axis of the cutout is smaller than the range of the electrons in question. MapCheck offers an easy and efficient way of determining patient dose distributions, especially compared to the alternatives, namely ion chamber and film. © 2008 American Association of Physicists in Medicine.

  10. Poster — Thur Eve — 44: Linearization of Compartmental Models for More Robust Estimates of Regional Hemodynamic, Metabolic and Functional Parameters using DCE-CT/PET Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Blais, AR; Dekaban, M; Lee, T-Y [Department of Medical Biophysics and Robarts Research Institute, Western University, Lawson Health Research Institute, London, ON (Canada)

    2014-08-15

    Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k₃, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k₃. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages, including: 1) only one possible solution, with no choice of starting parameter values required; 2) parameter estimates comparable in accuracy to those from nonlinear models; 3) significantly reduced computational time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k₃ estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k₃ from noisy dynamic PET data.
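
    The non-negative least squares step can be illustrated with a tiny projected-gradient solver: fit θ ≥ 0 in y = Aθ by gradient steps on the squared residual, clipping negative components at each iteration. This is a generic stand-in for a library NNLS routine, with invented data, not the authors' kinetic model:

    ```python
    import numpy as np

    def nnls_pg(A, b, iters=5000):
        """Non-negative least squares by projected gradient descent — a
        minimal stand-in for library NNLS solvers (illustrative only)."""
        lr = 1.0 / np.linalg.norm(A, 2) ** 2    # step size 1/L, L = sigma_max^2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = np.maximum(0.0, x - lr * grad)  # gradient step, then clip at 0
        return x
    ```

    Because the feasible set is convex and the objective is a least-squares bowl, there is a single solution and no starting-value choice, which is the property the abstract highlights.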

  11. Applying new seismic analysis techniques to the lunar seismic dataset: New information about the Moon and planetary seismology on the eve of InSight

    Science.gov (United States)

    Dimech, J. L.; Weber, R. C.; Knapmeyer-Endrun, B.; Arnold, R.; Savage, M. K.

    2016-12-01

    The field of planetary science is poised for a major advance with the upcoming InSight mission to Mars due to launch in May 2018. Seismic analysis techniques adapted for use on planetary data are therefore highly relevant to the field. The heart of this project is in the application of new seismic analysis techniques to the lunar seismic dataset to learn more about the Moon's crust and mantle structure, with particular emphasis on 'deep' moonquakes, which are situated half-way between the lunar surface and its core with no surface expression. Techniques proven to work on the Moon might also be beneficial for InSight and future planetary seismology missions, which face similar technical challenges. The techniques include: (1) an event-detection and classification algorithm based on 'Hidden Markov Models' to reclassify known moonquakes and look for new ones. Apollo 17 gravimeter and geophone data will also be included in this effort. (2) Measurements of anisotropy in the lunar mantle and crust using 'shear-wave splitting'. Preliminary measurements on deep moonquakes using the MFAST program are encouraging, and continued evaluation may reveal new structural information on the Moon's mantle. (3) Probabilistic moonquake locations using NonLinLoc, a non-linear hypocenter location technique, using a modified version of the codes designed to work with the Moon's radius. Successful application may provide a new catalog of moonquake locations with rigorous uncertainty information, which would be a valuable input into: (4) new fault plane constraints from focal mechanisms using a novel approach to Bayes' theorem that factors in uncertainties in hypocenter coordinates and S-P amplitude ratios. Preliminary results, such as shear-wave splitting measurements, will be presented and discussed.

  12. Poster — Thur Eve — 46: Monte Carlo model of the Novalis Classic 6MV stereotactic linear accelerator using the GATE simulation platform

    Energy Technology Data Exchange (ETDEWEB)

    Wiebe, J [Department of Medical Physics, Tom Baker Cancer Centre, Calgary AB (Canada); Department of Physics and Astronomy, University of Calgary, Calgary, AB (Canada); Ploquin, N [Department of Medical Physics, Tom Baker Cancer Centre, Calgary AB (Canada); Department of Physics and Astronomy, University of Calgary, Calgary, AB (Canada); Department of Oncology, University of Calgary, Calgary, AB (Canada)

    2014-08-15

    Monte Carlo (MC) simulation is accepted as the most accurate method to predict dose deposition when compared to other methods in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is relatively new, open source MC software built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro-MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6×6 mm² to 98×98 mm² (the maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.

  13. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
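
    The final Hough step works on a voting principle: every foreground pixel votes for all discretised lines (θ, ρ) passing through it, and peaks in the accumulator mark detected lines. A minimal sketch of that step alone (the paper's pipeline adds object removal and line enhancement first; parameters below are ours):

    ```python
    import math

    def hough_lines(points, n_theta=180, threshold=10):
        """Minimal Hough transform for line detection. Each point (x, y)
        votes for every discretised line x*cos(theta) + y*sin(theta) = rho;
        cells with at least `threshold` votes are reported as lines."""
        acc = {}
        for x, y in points:
            for i in range(n_theta):
                theta = math.pi * i / n_theta
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                acc[(theta, rho)] = acc.get((theta, rho), 0) + 1
        return [(theta, rho) for (theta, rho), votes in acc.items()
                if votes >= threshold]
    ```

    A horizontal row of pixels at y = 5, for example, accumulates all its votes in the cell near θ = π/2, ρ = 5, which is why collinear points stand out against background clutter.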

  14. Comparison of Classification Algorithms and Training Sample Sizes in Urban Land Classification with Landsat Thematic Mapper Imagery

    Directory of Open Access Journals (Sweden)

    Congcong Li

    2014-01-01

    Full Text Available Although a large number of new image classification algorithms have been developed, they are rarely tested with the same classification task. In this research, with the same Landsat Thematic Mapper (TM) data set and the same classification scheme over Guangzhou City, China, we tested two unsupervised and 13 supervised classification algorithms, including a number of machine learning algorithms that became popular in remote sensing during the past 20 years. Our analysis focused primarily on the spectral information provided by the TM data. We assessed all algorithms in a per-pixel classification decision experiment and all supervised algorithms in a segment-based experiment. We found that when sufficiently representative training samples were used, most algorithms performed reasonably well. Lack of training samples led to greater classification accuracy discrepancies than the classification algorithms themselves. Some algorithms were more tolerant of insufficient (less representative) training samples than others. Many algorithms improved the overall accuracy marginally with per-segment decision making.

  15. Algorithmic advances in stochastic programming

    Energy Technology Data Exchange (ETDEWEB)

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
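
    The sampling-based stopping rule amounts to: draw i.i.d. estimates of the optimality gap, form a confidence bound on its mean, and stop once the bound drops below a tolerance. A schematic sketch of that loop (the gap distribution and constants here are invented; the rules in the work above are more refined):

    ```python
    import math
    import random

    def gap_upper_bound(gaps, z=1.96):
        """One-sided normal-approximation confidence upper bound on the
        mean optimality gap, from i.i.d. sampled gap observations."""
        n = len(gaps)
        mean = sum(gaps) / n
        var = sum((g - mean) ** 2 for g in gaps) / (n - 1)
        return mean + z * math.sqrt(var / n)

    def sample_until(tolerance, draw_gap, batch=100, max_batches=50):
        """Draw gap estimates in batches; stop once the confidence upper
        bound on the expected gap falls below the tolerance."""
        gaps = []
        for _ in range(max_batches):
            gaps.extend(draw_gap() for _ in range(batch))
            bound = gap_upper_bound(gaps)
            if bound <= tolerance:
                break
        return len(gaps), bound
    ```

    The sample-size question the thesis addresses is visible here: a tighter tolerance forces quadratically more samples, since the bound shrinks like 1/√n.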

  16. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

    Science.gov (United States)

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-12-19

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms.
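
    A flavour of innovation-based adaptation can be shown with a scalar Kalman filter that re-estimates its measurement-noise variance from a sliding window of innovations. This is a generic textbook-style illustration, not the adaptive H-infinity algorithm proposed in the paper:

    ```python
    import random

    def adaptive_kalman_1d(measurements, q=1e-4, r0=1.0, window=10):
        """Scalar Kalman filter with a simple innovation-based adaptive
        estimate of the measurement-noise variance R (illustrative only)."""
        x, p, r = measurements[0], 1.0, r0
        recent, estimates = [], []
        for z in measurements:
            p = p + q                        # predict (random-walk state model)
            nu = z - x                       # innovation
            recent.append(nu)
            if len(recent) > window:
                recent.pop(0)
                # Adapt R from the recent innovation variance.
                r = max(1e-8, sum(v * v for v in recent) / window - p)
            k = p / (p + r)                  # Kalman gain
            x = x + k * nu                   # measurement update
            p = (1 - k) * p
            estimates.append(x)
        return estimates
    ```

    Robust schemes like the one in the paper go further, bounding the worst-case estimation error rather than assuming the noise statistics are Gaussian and correctly identified.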

  17. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    Full Text Available The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to rapidly grow in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments approximately 10(12 that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  18. Algorithmic Clustering of Music

    OpenAIRE

    Cilibrasi, Rudi; Vitanyi, Paul; de Wolf, Ronald

    2003-01-01

    We present a fully automatic method for music classification, based only on compression of strings that represent the music pieces. The method uses no background knowledge about music whatsoever: it is completely general and can, without change, be used in different areas like linguistic classification and genomics. It is based on an ideal theory of the information content in individual objects (Kolmogorov complexity), information distance, and a universal similarity metric. Experiments show ...

  19. Lidar calibration experiments

    DEFF Research Database (Denmark)

    Ejsing Jørgensen, Hans; Mikkelsen, T.; Streicher, J.

    1997-01-01

    A series of atmospheric aerosol diffusion experiments combined with lidar detection was conducted to evaluate and calibrate an existing retrieval algorithm for aerosol backscatter lidar systems. The calibration experiments made use of two (almost) identical mini-lidar systems for aerosol cloud detection to test the reproducibility and uncertainty of lidars. Lidar data were obtained from both single-ended and double-ended lidar configurations. A backstop was introduced in one of the experiments and a new method was developed where information obtained from the backstop can be used in the inversion algorithm. Independent in-situ aerosol plume concentrations were obtained from a simultaneous tracer gas experiment with SF6, and comparisons with the two lidars were made. The study shows that the reproducibility of the lidars is within 15%, including measurements from both sides of a plume...

  20. A BitTorrent-Based Dynamic Bandwidth Adaptation Algorithm for Video Streaming

    Science.gov (United States)

    Hsu, Tz-Heng; Liang, You-Sheng; Chiang, Meng-Shu

    In this paper, we propose a BitTorrent-based dynamic bandwidth adaptation algorithm for video streaming. Two mechanisms are proposed to improve the original BitTorrent protocol: (1) the decoding order frame first (DOFF) frame selection algorithm and (2) the rarest I frame first (RIFF) frame selection algorithm. With the proposed algorithms, a peer can periodically check the number of downloaded frames in the buffer and then allocate the available bandwidth adaptively for video streaming. As a result, users can enjoy smooth video playout with the proposed algorithms.
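
    The frame-selection policies reduce to a priority rule over missing frames: prefer the I frame held by the fewest peers, otherwise fall back to plain decoding order. A toy sketch of that rule (the data structures and tie-breaking are ours, not the paper's protocol):

    ```python
    def select_next_frame(missing, availability, frame_types):
        """Rarest-I-frame-first (RIFF-flavoured) selection: among missing
        frames, prefer the I frame held by the fewest peers; if no I frame
        is missing, fall back to decoding order (DOFF-flavoured)."""
        i_frames = [f for f in missing if frame_types[f] == "I"]
        if i_frames:
            # Rarity first; frame index breaks ties deterministically.
            return min(i_frames, key=lambda f: (availability[f], f))
        return min(missing)  # decoding-order-first fallback
    ```

    Prioritising rare I frames matters because a missing I frame stalls decoding of every dependent P/B frame in its group of pictures.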