EPO for the NASA SDO Extreme Ultraviolet Variability Experiment (EVE) Learning Suite for Educators
Kellagher, Emily; Scherrer, D. K.
2013-07-01
EVE Education and Public Outreach (EPO) promotes an understanding of the process of science and of concepts within solar science and Sun-Earth connections. EVE EPO also features working scientists, current research and career awareness. One of the highlights of this year's projects is the digitization of solar lessons and the collaboration with the other instrument teams to develop new resources for students and educators. Digital lesson suite: EVE EPO has taken the best solar lessons and reworked them to make them more engaging, to reflect SDO data, and to be SMARTboard compatible. We are creating a website where students and teachers can access these lessons and use them online or download them. Project team collaboration: The SDO instrument teams (EVE, AIA and HMI) have created a comic book series for upper elementary and middle school students featuring the SDO mascot Camilla. These comics may be printed or read on mobile devices. Many teachers are looking for resources to use with their students on the iPad, so our collaboration supplies teachers with a great resource that teaches solar concepts and helps dispel solar misconceptions.
Buhr, S. M.; Eparvier, F.; McCaffrey, M.; Murillo, M.
2007-12-01
Recent immigrant high school students were successfully engaged in learning about Sun-Earth connections through a partnership with the NASA SDO Extreme-Ultraviolet Variability Experiment (EVE) project. The students were enrolled in a pilot course as part of the Math, Engineering and Science Achievement (MESA) program. For many of the students, this was the only science option available to them due to language limitations. The English Language Learner (ELL) students doubled their achievement on a pre- and post-assessment on the content of the course. Students learned scientific content and vocabulary in English with support in Spanish, attended field trips, hosted scientist speakers, built and deployed space weather monitors as part of the Stanford SOLAR project, and gave final presentations in English, showcasing their new computer skills. Teachers who taught the students in other courses noted gains in the students' willingness to use English in class and noted gains in math skills. The MESA-EVE course won recognition as a Colorado MESA Program of Excellence and is being offered again in 2007-08. The course has been broken into modules for use in shorter after-school environments, or for use by EVE scientists who are outside of the Boulder area. Other EVE EPO includes professional development for teachers and content workshops for journalists.
Buhr, S. M.; McCaffrey, M. S.; Eparvier, F.; Murillo, M.
2008-05-01
Recent immigrant high school students were successfully engaged in learning about Sun-Earth connections through a partnership with the NASA Solar Dynamics Observatory Extreme Ultraviolet Variability Experiment (EVE) project. The students were enrolled in a pilot course as part of the Math, Engineering and Science Achievement (MESA) program. The English Language Learner (ELL) students doubled their achievement on a pre- and post-assessment on the content of the course. Students learned scientific content and vocabulary in English with support in Spanish, attended field trips, hosted scientist speakers, built antennas and deployed space weather monitors as part of the Stanford SOLAR project, and gave final presentations in English, showcasing their new computer skills. Teachers who taught the students in other courses noted gains in the students' willingness to use English in class and noted gains in math skills. The course has been broken into modules for use in shorter after-school environments, or for use by EVE scientists who are outside of the Boulder area. Video footage of "The Making of a Satellite" and "All About EVE" has been completed for use in the kits. Other EVE EPO includes upcoming professional development for teachers and content workshops for journalists.
Poster - Thurs Eve-21: Experience with the Velocity(TM) pre-commissioning services.
Scora, D; Sixel, K; Mason, D; Neath, C
2008-07-01
As the first Canadian users of the Velocity™ program offered by Siemens, we would like to share our experience with the program. The Velocity program involves the measurement of the commissioning data by an independent Physics consulting company at the factory test cell. The data collected was used to model the treatment beams in our planning system in parallel with the linac delivery and installation. Beam models and a complete data book were generated for two photon energies including Virtual Wedge, physical wedge, and IMRT, and 6 electron energies at 100 and 110 cm SSD. Our final beam models are essentially the Velocity models with some minor modifications to customize the fit to our liking. Our experience with the Velocity program was very positive; the data collection was professional and efficient. It allowed us to proceed with confidence in our beam data and modeling and to spend more time on other aspects of opening a new clinic. With the assistance of the program we were able to open a three-linac clinic with Image-Guided IMRT within 4.5 months of machine delivery. © 2008 American Association of Physicists in Medicine.
EVE et École
2017-01-01
IMPORTANT DATES Enrolments 2017-2018: Enrolments for the school year 2017-2018 in the Nursery, the Kindergarten and the School will take place on 6, 7 and 8 March 2017 from 10 am to 1 pm at EVE and School. Registration forms will be available from Thursday 2 March. More information on the website: http://nurseryschool.web.cern.ch/. Saturday 4 March 2017: Open day at EVE and School of the CERN Staff Association. Are you considering enrolling your child in the Children's Day-Care Centre EVE and School of the CERN Staff Association? If you work at CERN, then this event is for you: come visit the school and meet the Management on Saturday 4 March 2017 from 10 am to 12 noon. We look forward to welcoming you and will be delighted to present our structure, its projects and premises to you, and answer all of your questions. Sign up for one of the two sessions on Doodle via the link below before Wednesday 1 March 2017: http://doodle.com/poll/gbrz683wuvixk8as
McGeachy, P; Khan, R
2012-07-01
In early-stage prostate cancer, low dose rate (LDR) prostate brachytherapy is a favorable treatment modality in which small radioactive seeds are permanently implanted throughout the prostate. Treatment centres currently rely on a commercial optimization algorithm, IPSA, to generate seed distributions for treatment plans. However, commercial software does not allow the user access to the source code, reducing the flexibility of treatment planning and impeding implementation of new and, perhaps, improved clinical techniques. An open-source genetic algorithm (GA) has been encoded in MATLAB to generate seed distributions for a simplified prostate and urethra model. To assess the quality of the seed distributions created by the GA, both the GA and IPSA were used to generate seed distributions for two clinically relevant scenarios, and the quality of the GA distributions relative to IPSA distributions and to clinically accepted standards was investigated. The first clinically relevant scenario involved generating seed distributions for three different prostate volumes (19.2 cc, 32.4 cc, and 54.7 cc). The second scenario involved generating distributions for three separate seed activities (0.397 mCi, 0.455 mCi, and 0.5 mCi). Both the GA and IPSA met the clinically accepted criteria for the two scenarios, where distributions produced by the GA were comparable to IPSA in terms of full coverage of the prostate by the prescribed dose and minimized dose to the urethra, which passed straight through the prostate. Further, the GA offered improved reduction of high-dose regions (i.e., hot spots) within the planned target volume. © 2012 American Association of Physicists in Medicine.
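The search strategy described above can be illustrated with a minimal genetic algorithm. The "dose" model below is a deliberately toy stand-in (a target seed count instead of real dosimetry), and all names, parameters and the fitness function are illustrative, not those of the paper's MATLAB implementation:

```python
import random

random.seed(42)

# Toy model: choose which of N candidate sites receive a seed. The fitness
# below merely penalizes deviation from a target seed count, standing in for
# the dose-coverage and hot-spot penalties a real planner would use.
N_SITES, TARGET = 12, 6

def fitness(chromosome):
    return -abs(sum(chromosome) - TARGET)

def evolve(pop_size=30, generations=50, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_SITES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_SITES)    # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with small probability per gene
            child = [g ^ (random.random() < mutation_rate) for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # should sit at or near the target seed count
```

Replacing the toy fitness with a dose calculation over prostate and urethra voxels is what separates this sketch from a clinical planner; the selection/crossover/mutation loop itself is unchanged.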
Staff Association
2017-01-01
A lovely end-of-year party. On Friday 23 June, there was a crowd at the EVE and School of the CERN Staff Association! And for good reason: on the programme for this late afternoon were numerous activities and varied games for the enjoyment of children and parents; an open-air party in the school courtyard and garden, with games, stands and refreshments. The chips and sausages were prepared by the EVE and School team, who wore fluorescent t-shirts in reference to the theme of the year: colours. Several stands were on offer, such as a face-painting corner; with these make-up artists, the children became lions, butterflies or ladybirds in the space of a few...
Staff Association
2017-01-01
There are still places available! The Espace de vie enfantine (EVE) and School of the CERN Staff Association informs you that a few places are still available for the 2017-2018 school year: in the crèche (2-3 years) (2, 3 or 5 days a week); in the kindergarten (2-4 years) (mornings); in the first primary class (1P) (4-5 years). Do not hesitate to contact us quickly if you are interested; we are at your disposal to answer all your questions: Staff.Kindergarten@cern.ch. The EVE and School of the CERN Staff Association is open not only to children of CERN personnel (MPE, MPA) but also to children of people not working on the CERN site. ...
EVE and School: Important announcement
Staff Association
2017-01-01
Children’s Day-Care Centre (EVE) and School of the CERN Staff Association would like to inform you that there are still a few places available within the structure for the school year 2017–2018, in the: Nursery for 2, 3 or 5 days a week (2- to 3-year-olds); Kindergarten for mornings (2- to 4-year-olds); Primary 1 (1p) class in the School (4- to 5-year-olds). Please get in touch with us quickly if you are interested in any of the available places. We will gladly provide further information and answer any questions you may have: Staff.Kindergarten@cern.ch or (+41) 022 767 36 04 (mornings). EVE and School of the CERN Staff Association welcomes children of CERN Members of Personnel (MPE, MPA), as well as children whose parents do not work on CERN site. We would like to remind you that registrations are also open for the Summer Camp. The camp will run through the four weeks of July from 8.30 am to 5.30 pm with a weekly registration for 450 CHF, lunch included. For more information and regist...
Experiments with parallel algorithms for combinatorial problems
G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens
1985-01-01
In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines
Experiments with the auction algorithm for the shortest path problem
DEFF Research Database (Denmark)
Larsen, Jesper; Pedersen, Ib
1999-01-01
The auction approach for the shortest path problem (SPP) as introduced by Bertsekas is tested experimentally. Parallel algorithms using the auction approach are developed and tested. Both the sequential and parallel auction algorithms perform significantly worse than a state-of-the-art Dijkstra-like reference algorithm. Experiments are run on a distributed-memory MIMD-class Meiko parallel computer.
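For context, the family of "Dijkstra-like reference algorithms" that the auction variants are compared against can be sketched as a standard heap-based Dijkstra. This is a generic textbook version, not the paper's actual reference implementation:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; `graph[u]` maps neighbor -> edge weight."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path to u
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": {"b": 1, "c": 4}, "b": {"c": 2, "d": 6}, "c": {"d": 3}, "d": {}}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

The auction approach instead maintains node "prices" and extends or contracts a single candidate path based on bids, which gives it a different (and, per the experiments above, slower in practice) cost profile.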
Selected event reconstruction algorithms for the CBM experiment at FAIR
International Nuclear Information System (INIS)
Lebedev, Semen; Höhne, Claudia; Lebedev, Andrey; Ososkov, Gennady
2014-01-01
Development of fast and efficient event reconstruction algorithms is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR facility. The event reconstruction algorithms have to process terabytes of input data produced in particle collisions. In this contribution, several event reconstruction algorithms are presented. Optimization of the algorithms for the following CBM detectors is discussed: the Ring Imaging Cherenkov (RICH) detector, the Transition Radiation Detectors (TRD) and the Muon Chamber (MUCH). The ring reconstruction algorithm in the RICH is discussed. In the TRD and MUCH, track reconstruction algorithms are based on track following and Kalman filter methods. All algorithms were significantly optimized to achieve maximum speed-up and minimum memory consumption. The obtained results show that a significant speed-up factor was achieved for all algorithms while the reconstruction efficiency stays at a high level.
The Adam and Eve Robot Scientists for the Automated Discovery of Scientific Knowledge
King, Ross
A Robot Scientist is a physically implemented robotic system that applies techniques from artificial intelligence to execute cycles of automated scientific experimentation. A Robot Scientist can automatically execute cycles of hypothesis formation, selection of efficient experiments to discriminate between hypotheses, execution of experiments using laboratory automation equipment, and analysis of results. The motivation for developing Robot Scientists is to better understand science, and to make scientific research more efficient. The Robot Scientist 'Adam' was the first machine to autonomously discover scientific knowledge, that is, to both form and experimentally confirm novel hypotheses. Adam worked in the domain of yeast functional genomics. The Robot Scientist 'Eve' was originally developed to automate early-stage drug development, with specific application to neglected tropical diseases such as malaria, African sleeping sickness, etc. We are now adapting Eve to work on cancer. We are also teaching Eve to autonomously extract information from the scientific literature.
Hanson, Eve
2009-01-01
Head designer Eve Hanson on the Ivo Nikkolo fashion brand, founded in 1994. The exhibition "Ivo Nikkolo 15 - julgus olla IN" ("the courage to be IN") runs in the Baltika quarter until 29 November. The Ivo Nikkolo special collection "One in a hundred" is also on display.
DEFF Research Database (Denmark)
Wringe, Alison; Moshabela, Mosa; Nyamukapa, Constance
2017-01-01
Objective: In view of expanding ‘test and treat’ initiatives, we sought to elicit how the experience of HIV testing influenced subsequent engagement in HIV care among people diagnosed with HIV. Methods: As part of a multisite qualitative study, we conducted in-depth interviews in Uganda, South...... without consent, which could lead to disengagement from care. Conflicting rationalities for HIV testing between health workers and their clients caused tensions that undermined engagement in HIV care among people living with HIV. Although many health workers helped clients to accept their diagnosis...... may cure HIV. Repeat testing provided an opportunity to develop familiarity with clinical procedures, address concerns about HIV services and build trust with health workers. Conclusion: The principles of consent and confidentiality that should underlie HIV testing and counselling practices may...
Ferrari, L.
2006-12-01
The Meso America Subduction Experiments (MASE), carried out jointly by Caltech, UCLA and UNAM (Institute of Geophysics and Center for Geoscience), is about to provide a detailed image of the crust and upper mantle in the central part of the Mexican subduction zone (Acapulco, Gro. to Huejutla, Hgo.). Preliminary results show that the Cocos plate between the coast and the volcanic front is horizontal and lies just beneath the upper-plate Moho. Further north, beneath the Trans-Mexican Volcanic Belt (TMVB), seismicity is scarce or absent and the geometry of the subducted plate is poorly defined. This part of the TMVB also displays a large geochemical variability, including lavas with scarce to no evidence of fluids from the subducting plate (OIB in Sierra Chichinautzin) and lavas with a slab-melting signature (adakites of the Nevado de Toluca and Apan areas) that coexist with the more abundant products showing clear evidence of fluids from the subducting plate. These peculiarities led several workers to formulate models that depart from a classic subduction scenario for the genesis of the TMVB. These include the presence of a rootless mantle plume, the development of a continental rift, a more or less abrupt increase of the subduction angle, and a detached slab. While awaiting the final results of the MASE project, the data available from potential-field methods, thermal modeling and the geologic record of the TMVB provide some constraints to evaluate these models. Gravimetric and magnetotelluric data consistently indicate that beneath the TMVB the upper mantle has a relatively low density and high temperature/conductivity. Thermal modeling also indicates a low-viscosity, high-temperature mantle beneath the arc. All the above seems to indicate that the slab must rapidly increase its dip beneath the volcanic front, leaving space for a hot asthenospheric mantle. The fate of the slab further to the north is unclear from geophysical data alone. Global and regional tomographic
Glass Art Autumn in the Baltics / Eve Koha
Koha, Eve
2002-01-01
From 11 October to 15 November, Eve Koha, Ivo Lill and Mare Saare represent Estonia at the international glass art biennial "Vitrum Balticum II", held in the ceramics museum of the M. K. Čiurlionis National Art Museum at the Kaunas Town Hall. Also on Albinas Elskus and Mark Eckstrand. From 20 October to 20 November, Kati Kerstna and Merle Hiis represent Estonia at a joint exhibition of Estonian and Latvian glass artists at the glass gallery at Laipu St 6 in Riga Old Town, which presents old glass techniques and the construction of old glass furnaces.
Eve and the Madonna in Victorian Art
Dungan, Bebhinn
2012-01-01
Full version unavailable due to 3rd party copyright restrictions. Abstract: This study identifies and addresses representations of Eve and the Madonna exhibited at seven well-known London venues during the Victorian...
A New Apartment and Something More / Eve Kaunis
Kaunis, Eve
2008-01-01
Eve Kaunis, residential property consultant at the Uus Maa real estate agency, on buyers' preferences when choosing apartments. The main selling points are a favourable price and plentiful added value. Given as examples are a two-room apartment (interior design: Aet Piel, 71 m2) in Põhja-Tallinn, in a building constructed to a design by Eugen Sacharias, and a three-room apartment (66.4 m2) in Keila, in an apartment building from the 1980s.
Trigger Algorithms for Alignment and Calibration at the CMS Experiment
Fernandez Perez Tomei, Thiago Rafael
2017-01-01
The data needs of the Alignment and Calibration group at the CMS experiment are reasonably different from those of the physics studies groups. Data are taken at CMS through the online event selection system, which is implemented in two steps. The Level-1 Trigger is implemented on custom-made electronics and dedicated to analyse the detector information at a coarse-grained scale, while the High Level Trigger (HLT) is implemented as a series of software algorithms, running in a computing farm, that have access to the full detector information. In this paper we describe the set of trigger algorithms that is deployed to address the needs of the Alignment and Calibration group, how it fits in the general infrastructure of the HLT, and how it feeds the Prompt Calibration Loop (PCL), allowing for a fast turnaround for the alignment and calibration constants.
Machine learning based global particle identification algorithms at the LHCb experiment
Derkach, Denis; Likhomanenko, Tatiana; Rogozhnikov, Aleksei; Ratnikov, Fedor
2017-01-01
One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging CHerenkov (RICH) detector, the hadronic and electromagnetic calorimeters, and the muon chambers. To improve charged particle identification, several neural networks including a deep architecture and gradient boosting have been applied to data. These new approaches provide higher identification efficiencies than existing implementations for all charged particle types. It is also necessary to achieve a flat dependency between efficiencies and spectator variables such as particle momentum, in order to reduce systematic uncertainties during later stages of data analysis. For this purpose, "flat" algorithms that guarantee the flatness property for efficiencies have also been developed. This talk presents this new approach based on machine learning and its performance.
Approximate Quantum Adders with Genetic Algorithms: An IBM Quantum Experience
Directory of Open Access Journals (Sweden)
Li Rui
2017-07-01
It has been proven that quantum adders are forbidden by the laws of quantum mechanics. We analyze theoretical proposals for the implementation of approximate quantum adders and optimize them by means of genetic algorithms, improving previous protocols in terms of efficiency and fidelity. Furthermore, we experimentally realize a suitable approximate quantum adder with the cloud quantum computing facilities provided by IBM Quantum Experience. The development of approximate quantum adders enhances the toolbox of quantum information protocols, paving the way for novel applications in quantum technologies.
Experience with CANDID: Comparison algorithm for navigating digital image databases
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.; Cannon, M.
1994-10-01
This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
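The query-by-example matching described above can be sketched as follows. This toy version uses a gray-level histogram as the global signature and a normalized inner product as the similarity measure; the real CANDID signatures combine texture, shape and color, so this is only an illustration of the data flow:

```python
import math

def signature(pixels, bins=4, max_val=256):
    """Global gray-level histogram, normalized to a probability density."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1
    total = len(pixels)
    return [h / total for h in hist]

def similarity(sig_a, sig_b):
    """Normalized inner product between two signatures (1.0 = identical shape)."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm = math.sqrt(sum(a * a for a in sig_a)) * math.sqrt(sum(b * b for b in sig_b))
    return dot / norm

# A query image should match the image with a similar intensity distribution.
query = signature([10, 20, 200, 220, 30, 40])
match = signature([12, 25, 210, 215, 35, 45])
other = signature([130, 140, 150, 128, 135, 142])
print(similarity(query, match) > similarity(query, other))  # True
```

Ranking every database image by `similarity` against the example image and returning the top hits is exactly the retrieval loop the abstract describes.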
Ciccarese, Mariangela; Fabi, Alessandra; Moscetti, Luca; Cazzaniga, Maria Elena; Petrucelli, Luciana; Forcignanò, Rosachiara; Lupo, Laura Isabella; De Matteis, Elisabetta; Chiuri, Vincenzo Emanuele; Cairo, Giuseppe; Febbraro, Antonio; Giordano, Guido; Giampaglia, Marianna; Bilancia, Domenico; La Verde, Nicla; Maiello, Evaristo; Morritti, Maria; Giotta, Francesco; Lorusso, Vito; Latorre, Agnese; Scavelli, Claudio; Romito, Sante; Cusmai, Antonio; Palmiotti, Gennaro; Surico, Giammarco
2017-06-01
This retrospective analysis focused on the effect of treatment with EVE/EXE in a real-world population outside of clinical trials. We examined the efficacy of this combination in terms of PFS and RR related to dose intensity (5 mg daily versus 10 mg daily) and tolerability. 163 HER2-negative ER+/PgR+ ABC patients, treated with EVE/EXE from May 2011 to March 2016, were included in the analysis. The primary endpoints were the correlation between the daily dose and RR and PFS, as well as an evaluation of the tolerability of the combination. Secondary endpoints were RR, PFS, and OS according to the line of treatment. Patients were classified into three different groups, each with a different dose intensity of everolimus (A, B, C). RR was 29.8% (A), 27.8% (B) (p = 0.953), and not evaluable (C). PFS was 9 months (95% CI 7-11) (A), 10 months (95% CI 9-11) (B), and 5 months (95% CI 2-8) (C), p = 0.956. OS was 38 months (95% CI 24-38) (A), median not reached (B), and 13 months (95% CI 10-25) (C), p = 0.002. Adverse events were stomatitis 57.7% (11.0% grade 3-4), asthenia 46.0% (6.1% grade 3-4), hypercholesterolemia 46.0% (0.6% grade 3-4), and hyperglycemia 35.6% (5.5% grade 3-4). The main reason for discontinuation/interruption was grade 2-3 stomatitis. No correlation was found between dose intensity (5 vs. 10 mg labeled dose) and efficacy in terms of RR and PFS. The tolerability of the higher dose was poor in our experience, although this had no impact on efficacy.
Track reconstruction algorithms for the CBM experiment at FAIR
International Nuclear Information System (INIS)
Lebedev, Andrey; Hoehne, Claudia; Kisel, Ivan; Ososkov, Gennady
2010-01-01
The Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator complex at Darmstadt is being designed for a comprehensive measurement of hadron and lepton production in heavy-ion collisions at 8-45 AGeV beam energy, producing events with large track multiplicity and high hit density. The setup consists of several detectors, including as tracking detectors the Silicon Tracking System (STS), the muon detector (MUCH) or, alternatively, a set of Transition Radiation Detectors (TRD). In this contribution, the status of the track reconstruction software, including track finding, fitting and propagation, is presented for the MUCH and TRD detectors. The track propagation algorithm takes into account an inhomogeneous magnetic field and includes accurate calculation of multiple scattering and energy losses in the detector material. Track parameters and covariance matrices are estimated using the Kalman filter method and a Kalman filter modification that assigns weights to hits and uses simulated annealing. Three different track finder algorithms based on track following have been developed, which either allow for track branches, just select the nearest hits, or use the mentioned weighting method. The track reconstruction efficiency for central Au+Au collisions at 25 AGeV beam energy, using events from the UrQMD model, is at the level of 93-95% for both detectors.
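The Kalman filter step at the heart of such a track fit can be illustrated in one dimension. This is a scalar textbook filter estimating a (nearly) constant quantity from noisy hits, not the CBM implementation, which carries full state vectors, covariance matrices and material effects:

```python
def kalman_1d(measurements, meas_var, process_var=0.0, x0=0.0, p0=1e6):
    """Scalar Kalman filter: estimate a (nearly) constant quantity from noisy hits."""
    x, p = x0, p0                   # state estimate and its variance
    for z in measurements:
        p += process_var            # predict: state constant, uncertainty grows
        k = p / (p + meas_var)      # Kalman gain
        x += k * (z - x)            # update with the measurement residual
        p *= (1 - k)                # reduced uncertainty after the update
    return x, p

hits = [1.1, 0.9, 1.05, 0.95, 1.0]  # noisy measurements of a true value of 1.0
est, var = kalman_1d(hits, meas_var=0.1)
print(round(est, 2))  # close to 1.0
```

In a real track fit the scalar state becomes a vector (position, slopes, curvature), the prediction step propagates it through the magnetic field, and `meas_var` becomes the hit covariance, but the predict/gain/update cycle is the same.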
Experiments on Supervised Learning Algorithms for Text Categorization
Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.
2005-01-01
Modern information society is facing the challenge of handling massive volumes of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and acquisitions (ACQ), WebKB and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
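The processing flow mentioned above (turn documents into term vectors, train a model, score new documents) can be shown with a toy nearest-centroid classifier. It stands in for SVM/PLS only to make the pipeline concrete; the labels and training snippets below are invented for illustration:

```python
from collections import Counter
import math

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(docs):
    """Sum of the term vectors of a class's training documents."""
    c = Counter()
    for d in docs:
        c.update(tf_vector(d))
    return c

train = {
    "acq": ["company agrees to acquire rival firm", "merger deal between two companies"],
    "sport": ["team wins the final match", "player scores in the match"],
}
centroids = {label: centroid(docs) for label, docs in train.items()}

def classify(text):
    v = tf_vector(text)
    return max(centroids, key=lambda lbl: cosine(v, centroids[lbl]))

print(classify("the firms announce a merger deal"))  # acq
```

An SVM or PLS model replaces the centroid-plus-cosine scoring with a learned decision function over the same term vectors (usually TF-IDF weighted), which is where the accuracy gains reported in the paper come from.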
Autumn Fashion Mixes Classics with Playful Twists / Eve Rohtla
Rohtla, Eve, 1961-
2003-01-01
The autumn-winter collection carrying the Ivo Nikkolo brand was presented at a Lions ladies' club salon evening in Viljandi. Designer Eve Hanson on the collection's fashion tricks. The Ivo Nikkolo collection is designed, produced and marketed by the company Zik Zak.
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics, dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming
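Euclid's algorithm, referred to above, is short enough to state directly:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) by (b, a mod b)
    until the remainder is zero; the last nonzero value is the GCD."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

For example, 252 = 2·105 + 42, then 105 = 2·42 + 21, then 42 = 2·21 + 0, so the answer is 21.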
Overview of EVE - the event visualization environment of ROOT
International Nuclear Information System (INIS)
Tadel, Matevz
2010-01-01
EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The database can be retrieved from a file, edited during framework operation and stored to file. The EVE prototype was developed within the ALICE collaboration and was included into ROOT in December 2007. Since then all EVE components have reached maturity. EVE is used as the base of the AliEve visualization framework in ALICE, of the Fireworks physics-oriented event display in CMS, and as the visualization engine of FairRoot at FAIR.
Experiments with conjugate gradient algorithms for homotopy curve tracking
Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.
1991-01-01
There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
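The linear-algebra kernel step of such homotopy tracking is a conjugate gradient solve. Below is a minimal, illustrative sketch of unpreconditioned CG on a dense symmetric positive-definite system; HOMPACK's actual routines are preconditioned, sparse, and Fortran-based, and all names here are hypothetical:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A (list of rows).
    Sketch of the CG kernel used inside homotopy curve tracking; HOMPACK
    itself uses a preconditioned variant on sparse Jacobian matrices."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x  (x = 0 initially)
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

In exact arithmetic CG terminates in at most n steps, so a small well-conditioned system converges essentially immediately.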
Problems afoot for the CERN kindergarten (EVEE)?
Staff Association
2016-01-01
You might have noticed that the Kindergarten recently changed names: it is now known as EVEE, which stands for 'Espace de Vie Enfantine et École', and currently welcomes 150 children between 4 months and 6 years of age. This establishment, which is under the aegis of the Staff Association, is governed by a committee composed of employers (from the Staff Association), employees, parents and the Headmistress, who is an ex officio member (see Echo 238: http://staff-association.web.cern.ch/content/quoi-de-neuf-au-jardin-d%E2%80%99enfants). Great strides have been made in the past decade. Over this period, in conjunction with the CERN Administration, several new services have been introduced, including: the establishment of a canteen with a capacity of up to 60 children/day; the setting-up of a crèche for infants between 4 months and 3 years of age (approx. 35 infants); the creation of a day-camp with the capacity to welcome up ...
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
A New Segment Building Algorithm for the Cathode Strip Chambers in the CMS Experiment
Directory of Open Access Journals (Sweden)
Golutvin I.
2016-01-01
Full Text Available A new segment building algorithm for the Cathode Strip Chambers in the CMS experiment is presented. A detailed description of the new algorithm is given along with a comparison with the algorithm used in the CMS software. The new segment builder was tested with different Monte-Carlo data samples. The new algorithm is meant to be robust and effective for hard muons and the higher luminosity that is expected in the future at the LHC.
76 FR 81827 - Safety Zone; Sacramento New Years Eve Fireworks Display, Sacramento, CA
2011-12-29
... Zone; Sacramento New Years Eve Fireworks Display, Sacramento, CA AGENCY: Coast Guard, DHS. ACTION... during the Sacramento New Years Eve Fireworks Display in the navigable waters of the Sacramento River... Sacramento New Years Eve Fireworks Display safety zones in the navigable waters of the Sacramento River near...
Data analysis algorithms for gravitational-wave experiments
International Nuclear Information System (INIS)
Bonifazi, P.; Ferrari, V.; Frasca, S.; Pallottino, G.V.; Pizzella, G.
1978-01-01
The analysis of the sensitivity of a gravitational-wave antenna system shows that the role of the algorithms used for the analysis of the experimental data is comparable to that of the experimental apparatus. After a discussion of the processing performed on the input signals by the antenna and the electronic instrumentation, we derive a mathematical model of the system. This model is then used as a basis for the discussion of a number of data analysis algorithms, which also include the Wiener-Kolmogoroff optimum filter; the performances of the algorithms are presented in terms of signal-to-noise ratio and sensitivity to short bursts of resonant gravitational waves. The theoretical results are in good agreement with the experimental results obtained with a small cryogenic antenna (24 kg).
Cell Formation in Industrial Engineering : Theory, Algorithms and Experiments
Goldengorin, B.; Krushynskyi, D.; Pardalos, P.M.
2013-01-01
This book focuses on a development of optimal, flexible, and efficient models and algorithms for cell formation in group technology. Its main aim is to provide a reliable tool that can be used by managers and engineers to design manufacturing cells based on their own preferences and constraints
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
Energy Technology Data Exchange (ETDEWEB)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan [Tomas Bata University in Zlín, Faculty of Applied Informatics, Department of Informatics and Artificial Intelligence, nám. T.G. Masaryka 5555, 760 01 Zlín (Czech Republic)
2015-03-10
This paper investigates the effect of tuning the control parameters of the Lozi chaotic map employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
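A chaotic map used as a pseudo-random generator, as in the abstract above, can be sketched as follows. The parameter pair a = 1.7, b = 0.5 is the classic chaotic choice; the rescaling of x onto [0, 1) by taking its fractional part is an illustrative assumption, not necessarily the normalization used by the authors:

```python
def lozi_prng(n, a=1.7, b=0.5, x=0.1, y=0.1):
    """Generate n pseudo-random numbers in [0, 1) by iterating the Lozi map
        x_{k+1} = 1 - a * |x_k| + y_k,   y_{k+1} = b * x_k
    and taking the fractional part of x. a = 1.7, b = 0.5 is the standard
    chaotic parameter pair; the rescaling is an illustrative choice."""
    out = []
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        out.append(x % 1.0)  # fractional part always lies in [0, 1)
    return out
```

The resulting stream can then replace the uniform random draws in the PSO velocity update.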
LHCb New algorithms for Flavour Tagging at the LHCb experiment
Fazzini, Davide
2016-01-01
The Flavour Tagging technique allows the identification of the initial B flavour, required in measurements of flavour oscillations and time-dependent CP asymmetries in neutral B meson systems. The identification performance at LHCb is further enhanced thanks to the contribution of new algorithms.
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
International Nuclear Information System (INIS)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-01-01
This paper investigates the effect of tuning the control parameters of the Lozi chaotic map employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
Learning motor skills from algorithms to robot experiments
Kober, Jens
2014-01-01
This book presents the state of the art in reinforcement learning applied to robotics, both in terms of novel algorithms and applications. It discusses recent approaches that allow robots to learn motor skills, and presents tasks that need to take into account the dynamic behavior of the robot and its environment, where a kinematic movement plan is not sufficient. The book illustrates a method that learns to generalize parameterized motor plans, obtained by imitation or reinforcement learning, by adapting a small set of global parameters, together with appropriate kernel-based reinforcement learning algorithms. The presented applications explore highly dynamic tasks and exhibit a very efficient learning process. All proposed approaches have been extensively validated with benchmark tasks, in simulation, and on real robots. These tasks correspond to sports and games, but the presented techniques are also applicable to more mundane household tasks. The book is based on the first author's doctoral thesis, which wo...
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
On-line reconstruction algorithms for the CBM and ALICE experiments
International Nuclear Information System (INIS)
Gorbunov, Sergey
2013-01-01
This thesis presents various algorithms which have been developed for on-line event reconstruction in the CBM experiment at GSI, Darmstadt and the ALICE experiment at CERN, Geneva. Despite the fact that the experiments are different - CBM is a fixed-target experiment with forward geometry, while ALICE has a typical collider geometry - they share common aspects where reconstruction is concerned. The thesis describes: - general modifications to the Kalman filter method, which allow one to accelerate, improve, and simplify existing fit algorithms; - algorithms developed for track fitting in the CBM and ALICE experiments, including a new method for track extrapolation in a non-homogeneous magnetic field; - algorithms developed for primary and secondary vertex fitting in both experiments, in particular a new method for the reconstruction of decayed particles; - a parallel algorithm developed for on-line tracking in the CBM experiment; - a parallel algorithm developed for on-line tracking in the High Level Trigger of the ALICE experiment; - the realisation of the track finders on modern hardware, such as SIMD CPU registers and GPU accelerators. All the presented methods have been developed by, or with the direct participation of, the author.
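The predict/update cycle at the heart of such Kalman-filter track fits can be sketched in one dimension. This toy constant-velocity filter (all noise parameters illustrative) omits the thesis's field-dependent extrapolation and higher-order covariance transport:

```python
def kalman_track_fit(measurements, dt=1.0, q=1e-3, r=0.25):
    """Minimal 1-D constant-velocity Kalman filter with state (position,
    velocity). A toy sketch of the predict/update cycle underlying track
    fitting; q is process noise added on the covariance diagonal and r is
    the measurement variance (both illustrative)."""
    x = [measurements[0], 0.0]          # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]        # initial covariance
    for z in measurements[1:]:
        # predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with a position measurement z (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]  # Kalman gain
        resid = z - x[0]
        x = [x[0] + K[0] * resid, x[1] + K[1] * resid]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x
```

Fed noise-free measurements along a straight line, the filter locks onto the true position and velocity within a few updates.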
Multimedia over cognitive radio networks algorithms, protocols, and experiments
Hu, Fei
2014-01-01
Preface
About the Editors
Contributors
Network Architecture to Support Multimedia over CRN
A Management Architecture for Multimedia Communication in Cognitive Radio Networks (Alexandru O. Popescu, Yong Yao, Markus Fiedler, and Adrian P. Popescu)
Paving a Wider Way for Multimedia over Cognitive Radios: An Overview of Wideband Spectrum Sensing Algorithms (Bashar I. Ahmad, Hongjian Sun, Cong Ling, and Arumugam Nallanathan)
Bargaining-Based Spectrum Sharing for Broadband Multimedia Services in Cognitive Radio Network (Yang Yan, Xiang Chen, Xiaofeng Zhong, Ming Zhao, and Jing Wang)
Physical Layer Mobility Challen...
Multiagency Urban Search Experiment Detector and Algorithm Test Bed
Nicholson, Andrew D.; Garishvili, Irakli; Peplow, Douglas E.; Archer, Daniel E.; Ray, William R.; Swinney, Mathew W.; Willis, Michael J.; Davidson, Gregory G.; Cleveland, Steven L.; Patton, Bruce W.; Hornback, Donald E.; Peltz, James J.; McLean, M. S. Lance; Plionis, Alexander A.; Quiter, Brian J.; Bandstra, Mark S.
2017-07-01
In order to provide benchmark data sets for radiation detector and algorithm development, a particle transport test bed has been created using experimental data as model input and validation. A detailed radiation measurement campaign at the Combined Arms Collective Training Facility in Fort Indiantown Gap, PA (FTIG), USA, provides sample background radiation levels for a variety of materials present at the site (including cinder block, gravel, asphalt, and soil) using long dwell high-purity germanium (HPGe) measurements. In addition, detailed light detection and ranging data and ground-truth measurements inform model geometry. This paper describes the collected data and the application of these data to create background and injected source synthetic data for an arbitrary gamma-ray detection system using particle transport model detector response calculations and statistical sampling. In the methodology presented here, HPGe measurements inform model source terms while detector response calculations are validated via long dwell measurements using 2"×4"×16" NaI(Tl) detectors at a variety of measurement points. A collection of responses, along with sampling methods and interpolation, can be used to create data sets to gauge radiation detector and algorithm (including detection, identification, and localization) performance under a variety of scenarios. Data collected at the FTIG site are available for query, filtering, visualization, and download at muse.lbl.gov.
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1), where the maximum and minimum are ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...
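The bound T(n) = 3n/2 - 2 quoted in the fragment above comes from the classic pairwise max-min algorithm, sketched here with an explicit comparison counter:

```python
def max_min(a):
    """Find the maximum and minimum of a list in about 3n/2 - 2 comparisons:
    compare elements in pairs (1 comparison), then compare the pair winner
    with the running max and the pair loser with the running min (2 more),
    i.e. 3 comparisons per pair instead of 4."""
    comparisons = 0
    if len(a) % 2:                       # odd length: seed with first element
        hi = lo = a[0]
        start = 1
    else:                                # even length: seed with first pair
        comparisons += 1
        hi, lo = (a[0], a[1]) if a[0] > a[1] else (a[1], a[0])
        start = 2
    for i in range(start, len(a), 2):
        comparisons += 1
        big, small = (a[i], a[i + 1]) if a[i] > a[i + 1] else (a[i + 1], a[i])
        comparisons += 1
        if big > hi:
            hi = big
        comparisons += 1
        if small < lo:
            lo = small
    return hi, lo, comparisons
```

For even n this performs exactly 1 + 3(n - 2)/2 = 3n/2 - 2 comparisons, versus 2n - 2 for the naive scan.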
An improved muon reconstruction algorithm for INO-ICAL experiment
International Nuclear Information System (INIS)
Bhattacharya, Kolahal; Mandal, Naba K.
2013-01-01
The charged-current interaction of a neutrino in the INO-ICAL detector will be identified by a muon (μ±) in the detector, whose kinematics is related to the kinematics of the neutrino. Muon reconstruction is therefore a very important step in achieving the INO physics goals. The existing muon reconstruction package for INO-ICAL performs poorly in specific regimes of experimental interest: (a) for larger zenith angles (θ > 50°), and (b) for lower energies (E < 1 GeV); mainly due to a poor error propagation scheme insensitive to the energy E, the angles (θ, φ) and the inhomogeneous magnetic field along the muon track. Since a significant fraction of muons from atmospheric neutrino interactions will have initial energy < 1 GeV and an almost uniform distribution in cosθ, a robust package for muon reconstruction is essential. We have implemented higher-order correction terms in the propagation of the state and error covariance matrices of the Kalman filter. The algorithm ensures track element merging in most cases and also increases reconstruction efficiency. The performance of this package will be presented in comparison with the previous one. (author)
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); The (N0 + 1)th disk is moved from A to C directly ...
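The recursive scheme in the fragment above (move the first N0 disks from A to B using C, move disk N0 + 1 from A to C directly, then move the N0 disks from B to C) is the standard Towers of Hanoi recursion, which can be sketched as:

```python
def hanoi(n, src="A", dst="C", aux="B"):
    """Return the move list for n disks, matching the fragment's scheme:
    move n-1 disks src -> aux using dst, move the n-th disk src -> dst
    directly, then move the n-1 disks aux -> dst using src."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, aux, dst)
            + [(src, dst)]
            + hanoi(n - 1, aux, dst, src))
```

The move count satisfies T(n) = 2 T(n-1) + 1 = 2^n - 1.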
Percolation Model for the Existence of a Mitochondrial Eve
Neves, A G M
2005-01-01
We look at the process of inheritance of mitochondrial DNA as a percolation model on trees equivalent to the Galton-Watson process. The model is exactly solvable for its percolation threshold $p_c$ and percolation probability critical exponent. In the approximation of small percolation probability, and assuming limited progeny number, we are also able to find the maximum and minimum percolation probabilities over all probability distributions for the progeny number constrained to a given $p_c$. As a consequence, we can relate existence of a mitochondrial Eve to quantitative knowledge about demographic evolution of early mankind. In particular, we show that a mitochondrial Eve may exist even in an exponentially growing population, provided that the average number of children per individual is constrained to a small range depending on the probability $p$ that a newborn child is a female.
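The survival threshold behind such a percolation model can be illustrated with a crude Monte-Carlo branching simulation. Everything below is a simplifying assumption (fixed progeny number, capped population); the paper treats general constrained progeny distributions, but the prediction that the female line can persist only when p times the mean progeny number exceeds 1, i.e. p_c = 1/m, is the same:

```python
import random

def female_line_survives(p, children=3, generations=20, cap=2000):
    """One Monte-Carlo run: does the mitochondrial (female) line of a single
    founder persist for `generations` generations? Every individual has
    exactly `children` offspring (a simplifying assumption), each female
    with probability p. Branching-process theory gives p_c = 1 / children;
    `cap` only bounds the simulated population for speed."""
    females = 1
    for _ in range(generations):
        if females == 0:
            return False
        # each offspring of each (capped) female is female with probability p
        females = sum(1 for _ in range(min(females, cap) * children)
                      if random.random() < p)
    return females > 0
```

Well below the threshold (p = 0.05, mean 0.15 female children) the line dies out almost surely; well above it (p = 0.9, mean 2.7) it almost always survives.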
Modification of Brueschweiler quantum searching algorithm and realization by NMR experiment
International Nuclear Information System (INIS)
Yang Xiaodong; Wei Daxiu; Luo Jun; Miao Xijia
2002-01-01
In recent years, quantum computing research has made big progress by exploiting quantum mechanical laws, such as interference, superposition and parallelism, to perform computing tasks. Most enticing is that quantum algorithms can provide large speedups, and quantum computing can solve some problems which are impossible or difficult for classical computing. The problem of searching for a specific item in an unsorted database can be solved with certain quantum algorithms, for example the Grover quantum algorithm and the Brueschweiler quantum algorithm. The former gives a quadratic speedup, and the latter an exponential speedup, compared with the corresponding classical algorithm. In the Brueschweiler quantum searching algorithm, the data qubit and the read-out qubit (the ancilla qubit) are different qubits. The authors have studied the Brueschweiler algorithm and proposed a modified version in which no ancilla qubit is needed to reach an exponential speedup in the search; the data and read-out qubits are the same. The modified Brueschweiler algorithm is easier to design and realize. The authors also demonstrate the modified Brueschweiler algorithm in a 3-qubit molecular system by a Nuclear Magnetic Resonance (NMR) experiment.
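The quadratic speedup of Grover's search, mentioned above for contrast with the Brueschweiler algorithm, can be illustrated by a classical state-vector simulation (the NMR-specific Brueschweiler scheme itself is not reproduced here):

```python
import math

def grover_search(n_items, target):
    """Classical state-vector simulation of Grover's quantum search:
    sign-flip the target amplitude (oracle), then invert all amplitudes
    about their mean (diffusion), repeated about (pi/4) * sqrt(N) times.
    Returns the probability of measuring the target afterwards."""
    amps = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(round(math.pi / 4 * math.sqrt(n_items))):
        amps[target] = -amps[target]              # oracle: mark the target
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]       # inversion about the mean
    return amps[target] ** 2                      # success probability
```

For N = 16, three Grover iterations boost the target's probability from 1/16 to roughly 0.96, whereas a classical search needs on average N/2 queries.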
EVE: Explainable Vector Based Embedding Technique Using Wikipedia
Qureshi, M. Atif; Greene, Derek
2017-01-01
We present an unsupervised explainable word embedding technique, called EVE, which is built upon the structure of Wikipedia. The proposed model defines the dimensions of a semantic vector representing a word using human-readable labels, thereby making it readily interpretable. Specifically, each vector is constructed using the Wikipedia category graph structure together with the Wikipedia article link structure. To test the effectiveness of the proposed word embedding model, we consider its usefulne...
Multiagent pursuit-evasion games: Algorithms and experiments
Kim, Hyounjin
Deployment of intelligent agents has been made possible through advances in control software, microprocessors, sensor/actuator technology, communication technology, and artificial intelligence. Intelligent agents now play important roles in many applications where human operation is too dangerous or inefficient. There is little doubt that the world of the future will be filled with intelligent robotic agents employed to autonomously perform tasks, or embedded in systems all around us, extending our capabilities to perceive, reason and act, and replacing human efforts. There are numerous real-world applications in which a single autonomous agent is not suitable and multiple agents are required. However, after years of active research in multi-agent systems, current technology is still far from achieving many of these real-world applications. Here, we consider the problem of deploying a team of unmanned ground vehicles (UGV) and unmanned aerial vehicles (UAV) to pursue a second team of UGV evaders while concurrently building a map in an unknown environment. This pursuit-evasion game encompasses many of the challenging issues that arise in operations using intelligent multi-agent systems. We cast the problem in a probabilistic game theoretic framework and consider two computationally feasible pursuit policies: greedy and global-max. We also formulate this probabilistic pursuit-evasion game as a partially observable Markov decision process and employ a policy search algorithm to obtain a good pursuit policy from a restricted class of policies. The estimated value of this policy is guaranteed to be uniformly close to the optimal value in the given policy class under mild conditions. To implement this scenario on real UAVs and UGVs, we propose a distributed hierarchical hybrid system architecture which emphasizes the autonomy of each agent yet allows for coordinated team efforts. We then describe our implementation on a fleet of UGVs and UAVs, detailing components such
Hardware realization of a fast neural network algorithm for real-time tracking in HEP experiments
International Nuclear Information System (INIS)
Leimgruber, F.R.; Pavlopoulos, P.; Steinacher, M.; Tauscher, L.; Vlachos, S.; Wendler, H.
1995-01-01
A fast pattern recognition system for HEP experiments, based on artificial neural network algorithms (ANN), has been realized with standard electronics. The multiplicity and location of tracks in an event are determined in less than 75 ns. Hardware modules of this first level trigger were extensively tested for performance and reliability with data from the CPLEAR experiment. (orig.)
Online Tracking Algorithms on GPUs for the P̅ANDA Experiment at FAIR
Bianchi, L.; Herten, A.; Ritman, J.; Stockmanns, T.; Adinetz, A.; Kraus, J.; Pleiter, D.
2015-12-01
P̅ANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, P̅ANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged particle tracks as optimized online data processing applications using General-Purpose Graphics Processing Units (GPUs). Two algorithms for track finding, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented.
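The voting idea behind a Circle Hough track finder can be sketched for circles through the origin (the beamline). The parameterization and binning below are illustrative only, not the P̅ANDA GPU implementation:

```python
import math
from collections import Counter

def circle_hough(hits, n_phi=180, r_bin=0.5):
    """Toy Circle Hough vote: find circles through the origin, parameterized
    by the azimuth phi of the circle centre and the radius R. For each hit
    (x, y) and each candidate phi, the unique R putting the hit on that
    circle is R = d^2 / (2 (x cos phi + y sin phi)) with d^2 = x^2 + y^2;
    each hit votes for that (phi, R) cell, and real tracks pile up votes."""
    acc = Counter()
    for x, y in hits:
        d2 = x * x + y * y
        for k in range(n_phi):
            phi = math.pi * k / n_phi          # centre azimuth in [0, pi)
            proj = x * math.cos(phi) + y * math.sin(phi)
            if abs(proj) < 1e-9:               # hit at origin or tangent case
                continue
            r = d2 / (2.0 * proj)
            acc[(k, round(r / r_bin))] += 1    # vote in the (phi, R) plane
    return acc.most_common(1)[0]               # (best cell, vote count)
```

Hits generated on a circle of radius 5 centred at (5, 0) all vote for the same cell, which collects one vote per hit.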
Online Tracking Algorithms on GPUs for the P-barANDA Experiment at FAIR
International Nuclear Information System (INIS)
Bianchi, L; Herten, A; Ritman, J; Stockmanns, T; Adinetz, A.; Pleiter, D; Kraus, J
2015-01-01
P-barANDA is a future hadron and nuclear physics experiment at the FAIR facility under construction in Darmstadt, Germany. In contrast to the majority of current experiments, P-barANDA's strategy for data acquisition is based on event reconstruction from free-streaming data, performed in real time entirely by software algorithms using global detector information. This paper reports the status of the development of algorithms for the reconstruction of charged particle tracks as optimized online data processing applications using General-Purpose Graphics Processing Units (GPUs). Two algorithms for track finding, the Triplet Finder and the Circle Hough, are described, and details of their GPU implementations are highlighted. Average track reconstruction times of less than 100 ns are obtained running the Triplet Finder on state-of-the-art GPU cards. In addition, a proof-of-concept system for the dispatch of data to tracking algorithms using Message Queues is presented. (paper)
International Nuclear Information System (INIS)
Didkovsky, L.; Judge, D.; Wieman, S.; Kosovichev, A. G.; Woods, T.
2011-01-01
We report on the detection of oscillations in the corona in the frequency range corresponding to five-minute acoustic modes of the Sun. The oscillations have been observed using soft X-ray measurements from the Extreme Ultraviolet Spectrophotometer (ESP) of the Extreme Ultraviolet Variability Experiment on board the Solar Dynamics Observatory. The ESP zeroth-order channel observes the Sun as a star without spatial resolution in the wavelength range of 0.1-7.0 nm (the energy range is 0.18-12.4 keV). The amplitude spectrum of the oscillations calculated from six-day time series shows a significant increase in the frequency range of 2-4 mHz. We interpret this increase as a response of the corona to solar acoustic (p) modes and attempt to identify p-mode frequencies among the strongest peaks. Due to strong variability of the amplitudes and frequencies of the five-minute oscillations in the corona, we study how the spectrum from two adjacent six-day time series combined together affects the number of peaks associated with the p-mode frequencies and their amplitudes. This study shows that five-minute oscillations of the Sun can be observed in the corona in variations of the soft X-ray emission. Further investigations of these oscillations may improve our understanding of the interaction of the oscillation modes with the solar atmosphere, and the interior-corona coupling, in general.
A not quite random walk: Experimenting with the ethnomethods of the algorithm
Directory of Open Access Journals (Sweden)
Malte Ziewitz
2017-11-01
Full Text Available Algorithms have become a widespread trope for making sense of social life. Science, finance, journalism, warfare, and policing—there is hardly anything these days that has not been specified as “algorithmic.” Yet, although the trope has brought together a variety of audiences, it is not quite clear what kind of work it does. Often portrayed as powerful yet inscrutable entities, algorithms maintain an air of mystery that makes them both interesting and difficult to understand. This article takes on this problem and examines the role of algorithms not as techno-scientific objects to be known, but as a figure that is used for making sense of observations. Following in the footsteps of Harold Garfinkel’s tutorial cases, I shall illustrate the implications of this view through an experiment with algorithmic navigation. Challenging participants to go on a walk, guided not by maps or GPS but by an algorithm developed on the spot, I highlight a number of dynamics typical of reasoning with running code, including the ongoing respecification of rules and observations, the stickiness of the procedure, and the selective invocation of the algorithm as an intelligible object. The materials thus provide an opportunity to rethink key issues at the intersection of the social sciences and the computational, including popular concerns with transparency, accountability, and ethics.
2012-12-21
... Zone; Sacramento New Year's Eve Fireworks Display, Sacramento River, Sacramento, CA AGENCY: Coast Guard... safety zones during the Sacramento New Year's Eve Fireworks Display in the navigable waters of the Sacramento River on December 31, 2012 and January 1, 2013. The fireworks displays will occur from 9 p.m. to 9...
The ADAM and EVE project: Heat transfer at ambient temperature
International Nuclear Information System (INIS)
Boltendahl, U.; Harth, R.
1980-01-01
At the nuclear research centre in Juelich a new heating system is at present being developed as part of the Nuclear Long-Distance Heating Project. Helium is heated in a high-temperature reactor. This heat drives the chemical conversion of a gas mixture in a reformer plant (EVE). The gases 'charged' with energy can be transported through tubes over any distance required at ambient temperature. In a methanisation plant (ADAM) the gases react with one another, releasing the energy in the form of heat, which can be used for heating air or water. (orig.) [de
Thermodynamic Spectrum of Solar Flares Based on SDO/EVE Observations: Techniques and First Results
Wang, Yuming; Zhou, Zhenjun; Zhang, Jie; Liu, Kai; Liu, Rui; Shen, Chenglong; Chamberlin, Phillip C.
2016-01-01
The Solar Dynamics Observatory (SDO)/EUV Variability Experiment (EVE) provides rich information on the thermodynamic processes of solar activities, particularly on solar flares. Here, we develop a method to construct thermodynamic spectrum (TDS) charts based on the EVE spectral lines. This tool could potentially be useful for extreme ultraviolet (EUV) astronomy to learn about the eruptive activities on distant astronomical objects. Through several cases, we illustrate what we can learn from the TDS charts. Furthermore, we apply the TDS method to 74 flares equal to or greater than the M5.0 class, and reach the following statistical results. First, EUV peaks are always behind the soft X-ray (SXR) peaks and stronger flares tend to have faster cooling rates. There is a power-law correlation between the peak delay times and the cooling rates, suggesting a coherent cooling process of flares from SXR to EUV emissions. Second, there are two distinct temperature drift patterns, called Type I and Type II. For Type I flares, the enhanced emission drifts from high to low temperature like a quadrilateral, whereas for Type II flares the drift pattern looks like a triangle. Statistical analysis suggests that Type II flares are more impulsive than Type I flares. Third, for late-phase flares, the peak intensity ratio of the late phase to the main phase is roughly correlated with the flare class, and the flares with a strong late phase are all confined. We believe that the re-deposition of the energy carried by a flux rope, which unsuccessfully erupts out, into thermal emissions is responsible for the strong late phase found in a confined flare. Furthermore, we show the signatures of the flare thermodynamic process in the chromosphere and transition region in the TDS charts. These results provide new clues to advance our understanding of the thermodynamic processes of solar flares and associated solar eruptions, e.g., coronal mass ejections.
Overview of EVE – the event visualization environment of ROOT
Tadel, M
2010-01-01
EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object prese...
Classical boson sampling algorithms with superior performance to near-term experiments
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
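The "intractable matrix functions" governing photon statistics are matrix permanents. Ryser's formula, representative of the exponential-time classical evaluations that such sampling algorithms build on, can be sketched for small matrices:

```python
from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's formula,
        perm(A) = (-1)^n * sum_{S subset of [n], S nonempty}
                  (-1)^{|S|} * prod_i sum_{j in S} A[i][j],
    costing O(2^n * n^2) arithmetic operations. Unlike the determinant,
    no polynomial-time algorithm is known; sketch for small n only."""
    n = len(A)
    total = 0.0
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** size * prod
    return (-1) ** n * total
```

For a 2x2 matrix the permanent is ad + bc (the determinant with the minus sign flipped), which gives a quick sanity check.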
Vertigo in childhood: proposal for a diagnostic algorithm based upon clinical experience.
Casani, A P; Dallan, I; Navari, E; Sellari Franceschini, S; Cerchiai, N
2015-06-01
The aim of this paper is to analyse, after clinical experience with a series of patients with established diagnoses and review of the literature, all relevant anamnestic features in order to build a simple diagnostic algorithm for vertigo in childhood. This study is a retrospective chart review. A series of 37 children underwent complete clinical and instrumental vestibular examination. Only neurological disorders or genetic diseases represented exclusion criteria. All diagnoses were reviewed after applying the most recent diagnostic guidelines. In our experience, the most common aetiology for dizziness is vestibular migraine (38%), followed by acute labyrinthitis/neuritis (16%) and somatoform vertigo (16%). Benign paroxysmal vertigo was diagnosed in 4 patients (11%) and paroxysmal torticollis was diagnosed in a 1-year-old child. In 8% (3 patients) of cases, the dizziness had a post-traumatic origin: 1 canalolithiasis of the posterior semicircular canal and 2 labyrinthine concussions, respectively. Ménière's disease was diagnosed in 2 cases. A bilateral vestibular failure of unknown origin caused chronic dizziness in 1 patient. In conclusion, this algorithm could represent a good tool for guiding clinical suspicion toward the correct diagnostic assessment in dizzy children where no neurological findings are detectable. The algorithm has just a few simple steps, based mainly on two aspects to be investigated early: temporal features of the vertigo and presence of hearing impairment. A different algorithm has been proposed for cases in which a traumatic origin is suspected.
Study of a reconstruction algorithm for electrons in the ATLAS experiment in LHC
International Nuclear Information System (INIS)
Kerschen, N.
2006-09-01
The ATLAS experiment is a general purpose particle physics experiment mainly aimed at the discovery of the origin of mass through the search for the Higgs boson. In order to achieve this, the Large Hadron Collider at CERN will accelerate two proton beams and make them collide at the centre of the experiment. ATLAS will discover new particles through the measurement of their decay products. Electrons are such decay products: they produce an electromagnetic shower in the calorimeter by which they lose all their energy. The calorimeter is divided into cells, and the deposited energy is reconstructed using an algorithm to assemble the cells into clusters. The purpose of this thesis is to study a new kind of algorithm adapting the cluster to the shower topology. In order to reconstruct the energy of the initially created electron, the cluster has to be calibrated by taking into account the energy lost in the dead material in front of the calorimeter. Therefore, a Monte-Carlo simulation of the ATLAS detector has been used to correct for effects of response modulation in position and in energy and to optimise the energy resolution as well as the linearity. An analysis of test beam data has been performed to study the behaviour of the algorithm in a more realistic environment. We show that the requirements of the experiment can be met for the linearity and resolution. The improvement of this new algorithm, compared to a fixed-size cluster, is the better recovery of Bremsstrahlung photons emitted by the electron in the material in front of the calorimeter. A Monte-Carlo analysis of the Higgs boson decay into four electrons confirms this result. (author)
International Nuclear Information System (INIS)
Azadeh, A.; Tarverdian, S.
2007-01-01
This study presents an integrated algorithm for forecasting monthly electrical energy consumption based on a genetic algorithm (GA), computer simulation and design of experiments using stochastic procedures. First, a time-series model is developed as a benchmark for the GA and simulation. Computer simulation is developed to generate random variables for monthly electricity consumption. This is done to foresee the effects of probabilistic distributions on monthly electricity consumption. The GA and simulation-based GA models are then developed from the selected time-series model. There are therefore four treatments to be considered in the analysis of variance (ANOVA): actual data, time series, GA and simulation-based GA. ANOVA is used to test the null hypothesis that the above four alternatives are equal. If the null hypothesis is accepted, then the lowest mean absolute percentage error (MAPE) value is used to select the best model; otherwise the Duncan Multiple Range Test (DMRT) method of paired comparison is used to select the optimum model, which could be time series, GA or simulation-based GA. In case of ties the lowest MAPE value is used as the benchmark. The integrated algorithm has several unique features. First, it is flexible and identifies the best model based on the results of ANOVA and MAPE, whereas previous studies select the best-fit GA model based on MAPE or relative-error results alone. Second, the proposed algorithm may identify a conventional time-series model as the best model for future electricity consumption forecasting because of its dynamic structure, whereas previous studies assume that the GA always provides the best solutions and estimates. To show the applicability and superiority of the proposed algorithm, the monthly electricity consumption in Iran from March 1994 to February 2005 (131 months) is applied to the proposed algorithm.
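The MAPE-based selection step can be sketched as follows; the data and the candidate-model names are invented for illustration, and the ANOVA/DMRT stage that precedes this fallback is omitted:

```python
def mape(actual, forecast):
    """Mean absolute percentage error (MAPE), in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def select_model(actual, forecasts):
    """Pick the candidate with the lowest MAPE, the tie-breaking criterion
    the integrated algorithm falls back on."""
    return min(forecasts, key=lambda name: mape(actual, forecasts[name]))

# Hypothetical monthly consumption values and three candidate forecasts.
actual = [100.0, 110.0, 120.0, 130.0]
forecasts = {
    "time_series": [101.0, 109.0, 121.0, 129.0],
    "ga":          [105.0, 104.0, 126.0, 124.0],
    "sim_ga":      [97.0, 113.0, 116.0, 135.0],
}
best = select_model(actual, forecasts)
```

Note that with this criterion a plain time-series model can win against the GA variants, which is exactly the flexibility the abstract claims over earlier GA-only studies.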
On the eve of Copenhagen: Obama and the environment
International Nuclear Information System (INIS)
Pereon, Y.M.
2009-01-01
The author offers a rather detailed overview of the United States' posture with respect to climate change challenges on the eve of the Copenhagen conference. First, he shows how public opinion in the United States is ambivalent and changing. He then outlines how the arrival of President Obama brought an important change in United States environmental policy: a new team was set up and environmental protection became a priority. The author then describes the Congressional process for this policy, first through the House of Representatives and then the Senate. He highlights the differences between the agendas of firms and of environmental protection organisations, and between the agenda of the Copenhagen conference and that of the US government
Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms
Mohan, K. Aditya
2017-10-01
4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time-varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquires data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal-to-noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters, such as the view sampling strategy, while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived within the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that improves the temporal resolution by progressively reducing the number of views or the detector exposure time. A theoretical analysis of the effect of the data acquisition parameters on the detector signal-to-noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented.
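A minimal sketch of regularized iterative reconstruction in the spirit described above, using a plain Tikhonov penalty and gradient descent on a toy two-pixel problem with three "views". The actual model-based framework uses far richer forward models and priors; everything below is an illustrative assumption:

```python
def reconstruct(A, d, lam, n_iter=500, step=0.01):
    """Gradient descent on the regularized least-squares objective
    ||A x - d||^2 + lam * ||x||^2, a Tikhonov penalty standing in for the
    more elaborate priors of model-based reconstruction."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        # residual r = A x - d
        r = [sum(A[i][j] * x[j] for j in range(n)) - d[i] for i in range(m)]
        # gradient g = 2 A^T r + 2 lam x
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) + 2.0 * lam * x[j]
             for j in range(n)]
        x = [xj - step * gj for xj, gj in zip(x, g)]
    return x

# Three noisy "views" of a two-pixel object whose true values are [1.0, 2.0].
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
d = [1.02, 1.98, 3.01]
x_hat = reconstruct(A, d, lam=1e-3)
```

The regularizer is what lets such solvers tolerate fewer views or noisier data, which is the lever the paper uses to trade acquisition speed against reconstruction quality.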
The Lawless Frontier of Deep Space: Code as Law in EVE Online
Directory of Open Access Journals (Sweden)
Melissa de Zwart
2014-03-01
This article explores the concepts of player agency with respect to governance and regulation of online games. It considers the unique example of the Council of Stellar Management in EVE Online, and explores the multifaceted role performed by players involved in that Council. In particular, it considers the interaction between code, rules, contracts, and play with respect to EVE Online. This is used as a means to better understand the relations of power generated in game spaces.
Nakagawa, So; Takahashi, Mahoko Ueda
2016-01-01
In mammals, approximately 10% of genome sequences correspond to endogenous viral elements (EVEs), which are derived from ancient viral infections of germ cells. Although most EVEs have been inactivated, some open reading frames (ORFs) of EVEs have acquired functions in their hosts. However, EVE ORFs usually remain unannotated in the genomes, and no databases are available for EVE ORFs. To investigate the function and evolution of EVEs in mammalian genomes, we developed EVE ORF databases for 20 genomes of 19 mammalian species. A total of 736,771 non-overlapping EVE ORFs were identified and archived in a database named gEVE (http://geve.med.u-tokai.ac.jp). The gEVE database provides nucleotide and amino acid sequences, genomic loci and functional annotations of EVE ORFs for all 20 genomes. In analyzing RNA-seq data with the gEVE database, we successfully identified expressed EVE genes, suggesting that the gEVE database facilitates genomic analyses of various mammalian species. Database URL: http://geve.med.u-tokai.ac.jp. © The Author(s) 2016. Published by Oxford University Press.
Model-independent nonlinear control algorithm with application to a liquid bridge experiment
International Nuclear Information System (INIS)
Petrov, V.; Haaning, A.; Muehlner, K.A.; Van Hook, S.J.; Swinney, H.L.
1998-01-01
We present a control method for high-dimensional nonlinear dynamical systems that can target remote unstable states without a priori knowledge of the underlying dynamical equations. The algorithm constructs a high-dimensional look-up table based on the system's responses to a sequence of random perturbations. The method is demonstrated by stabilizing unstable flow of a liquid bridge surface-tension-driven convection experiment that models the float zone refining process. Control of the dynamics is achieved by heating or cooling two thermoelectric Peltier devices placed in the vicinity of the liquid bridge surface. The algorithm routines along with several example programs written in the MATLAB language can be found at ftp://ftp.mathworks.com/pub/contrib/v5/control/nlcontrol. copyright 1998 The American Physical Society
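The model-independent look-up-table idea can be sketched as follows: probe an unknown system with random perturbations, store the responses, then select the stored perturbation whose recorded response is nearest the desired state. The "plant" below is an invented linear stand-in, not the liquid bridge dynamics, and the nearest-neighbour selection is a one-dimensional caricature of the high-dimensional table described in the abstract:

```python
import random

def build_lookup(system, n_probe, dim, rng):
    """Probe the black-box system with random perturbations and record its
    responses; no model of the underlying dynamics is assumed."""
    table = []
    for _ in range(n_probe):
        u = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        table.append((u, system(u)))
    return table

def control(table, target):
    """Return the stored perturbation whose recorded response is closest
    to the desired state (nearest-neighbour look-up)."""
    dist2 = lambda r: sum((a - b) ** 2 for a, b in zip(r, target))
    return min(table, key=lambda entry: dist2(entry[1]))[0]

# Toy plant: an unknown linear response standing in for the liquid bridge.
plant = lambda u: [2.0 * u[0] + 0.3 * u[1], -u[1]]
rng = random.Random(1)
table = build_lookup(plant, n_probe=2000, dim=2, rng=rng)
u_best = control(table, target=[1.0, 0.5])
response = plant(u_best)
```

In the experiment the perturbations are the two Peltier heater settings and the response is the measured flow state; the same probe-then-look-up logic applies.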
Wieman, S R; Didkovsky, L V; Judge, D L
The Solar EUV Monitor (SEM) onboard SOHO has measured absolute extreme ultraviolet (EUV) and soft X-ray solar irradiance nearly continuously since January 1996. The EUV Variability Experiment (EVE) on SDO, in operation since April of 2010, measures solar irradiance in a wide spectral range that encompasses the band passes (26 - 34 nm and 0.1 - 50 nm) measured by SOHO/SEM. However, throughout the mission overlap, irradiance values from these two instruments have differed by more than the combined stated uncertainties of the measurements. In an effort to identify the sources of these differences and eliminate them, we investigate in this work the effect of reprocessing the SEM data using a more accurate SEM response function (obtained from synchrotron measurements with a SEM sounding-rocket clone instrument taken after SOHO was already in orbit) and time-dependent, measured solar spectral distributions - i.e., solar reference spectra that were unavailable prior to the launch of the SDO. We find that recalculating the SEM data with these improved parameters reduces mean differences with the EVE measurements from about 20 % to less than 5 % in the 26 - 34 nm band, and from about 35 % to about 15 % for irradiances in the 0.1 - 7 nm band extracted from the SEM 0.1 - 50 nm channel.
Fast parallel tracking algorithm for the muon detector of the CBM experiment at FAIR
International Nuclear Information System (INIS)
Lebedev, A.; Hoehne, C.; Kisel', I.; Ososkov, G.
2010-01-01
Particle trajectory recognition is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt. The tracking algorithms have to process terabytes of input data produced in particle collisions. Therefore, the speed of the tracking software is extremely important for data analysis. In this contribution, a fast parallel track reconstruction algorithm which uses available features of modern processors is presented. These features comprise the SIMD instruction set (SSE) and multithreading. The first allows one to pack several data items into one register and to operate on all of them in parallel, thus achieving more operations per cycle. The second enables the routines to exploit all available CPU cores and hardware threads. This parallel version of the tracking algorithm has been compared to the initial serial scalar version, which uses a similar approach for tracking. A speed-up factor of 487 (from 730 to 1.5 ms/event) was achieved for a computer with 2 x Intel Core i7 processors at 2.66 GHz
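The data-packing idea behind SIMD can be illustrated schematically. Python gains no native speed from this, but the chunked layout below mirrors how an SSE register holds four 32-bit floats and applies one packed multiply/add to all of them; the function names are invented for illustration:

```python
def axpy_scalar(a, xs, ys):
    """One multiply-add per element (the serial scalar version)."""
    return [a * x + y for x, y in zip(xs, ys)]

def axpy_packed(a, xs, ys, width=4):
    """Process `width` elements per step; in SSE each chunk would map to a
    single packed multiply and add on four 32-bit floats."""
    out = []
    for i in range(0, len(xs), width):
        out.extend(a * x + y for x, y in zip(xs[i:i + width], ys[i:i + width]))
    return out

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.5] * 5
res_scalar = axpy_scalar(2.0, xs, ys)
res_packed = axpy_packed(2.0, xs, ys)
```

In the real implementation the per-chunk work runs as native vector instructions, and multithreading distributes independent track candidates across cores, which together account for the large speed-up reported.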
MAPCUMBA: A fast iterative multi-grid map-making algorithm for CMB experiments
Doré, O.; Teyssier, R.; Bouchet, F. R.; Vibert, D.; Prunet, S.
2001-07-01
The data analysis of current Cosmic Microwave Background (CMB) experiments like BOOMERanG or MAXIMA poses severe challenges which already stretch the limits of current (super-)computer capabilities if brute-force methods are used. In this paper we present a practical solution for the optimal map-making problem which can be used directly for next-generation CMB experiments like ARCHEOPS and TopHat, and can probably be extended relatively easily to the full PLANCK case. This solution is based on an iterative multi-grid Jacobi algorithm which is both fast and memory-sparing. Indeed, if there are N_tod data points along the one-dimensional timeline to analyse, the number of operations is O(N_tod ln N_tod) and the memory requirement is O(N_tod). Timing and accuracy issues have been analysed on simulated ARCHEOPS and TopHat data, and we also discuss the issue of the joint evaluation of the signal and noise statistical properties.
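The Jacobi smoother at the heart of such a multi-grid solver can be sketched on a tiny diagonally dominant system standing in for the map-making normal equations; the multi-grid hierarchy and the FFT-based noise treatment are omitted, and the example system is invented:

```python
def jacobi(A, b, n_iter=100):
    """Plain Jacobi iteration: x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii.
    A multi-grid solver nests this cheap smoother across coarser grids to
    kill long-wavelength error modes quickly."""
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iter):
        # simultaneous update: the comprehension reads the previous x
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Tiny diagonally dominant system standing in for the normal equations
# (P^T N^-1 P) m = P^T N^-1 d of optimal map making; exact solution [1, 1, 1].
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
m = jacobi(A, b)
```

Each Jacobi sweep touches every data point once, which is where the O(N_tod)-per-iteration cost and memory footprint quoted in the abstract come from.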
Head and neck paragangliomas: A two-decade institutional experience and algorithm for management.
Smith, Joshua D; Harvey, Rachel N; Darr, Owen A; Prince, Mark E; Bradford, Carol R; Wolf, Gregory T; Else, Tobias; Basura, Gregory J
2017-12-01
Paragangliomas of the head and neck and cranial base are typically benign, slow-growing tumors arising within the jugular foramen, middle ear, carotid bifurcation, or vagus nerve proper. The objective of this study was to provide a comprehensive characterization of our institutional experience with the clinical management of these tumors and to posit an algorithm for diagnostic evaluation and treatment. This was a retrospective cohort study of patients undergoing treatment for paragangliomas of the head and neck and cranial base at our institution from 2000-2017. Data on tumor location, catecholamine levels, specific imaging modalities employed in the diagnostic work-up, pre-treatment cranial nerve palsy, treatment modality, utilization of preoperative angiographic embolization, complications of treatment, tumor control and recurrence, and hereditary status (i.e., succinate dehydrogenase mutations) were collected and summarized. The mean (SD) age of our cohort was 51.8 (±16.1) years, with 123 (63.4%) female patients and 71 (36.6%) male patients. Catecholamine-secreting lesions were found in nine (4.6%) patients. Fifty-one patients underwent genetic testing, with mutations identified in 43 (20 SDHD, 13 SDHB, 7 SDHD, 1 SDHA, SDHAF2, and NF1). Observation with serial imaging, surgical extirpation, radiation, and stereotactic radiosurgery were variably employed as treatment approaches across anatomic subsites. An algorithmic approach to the clinical management of these tumors, derived from our longitudinal institutional experience and current empiric evidence, may assist otolaryngologists, radiation oncologists, and geneticists in the care of these complex neoplasms. Level of evidence: 4.
THERMODYNAMIC SPECTRUM OF SOLAR FLARES BASED ON SDO/EVE OBSERVATIONS: TECHNIQUES AND FIRST RESULTS
Energy Technology Data Exchange (ETDEWEB)
Wang, Yuming; Zhou, Zhenjun; Liu, Kai; Liu, Rui; Shen, Chenglong [CAS Key Laboratory of Geospace Environment, Department of Geophysics and Planetary Sciences, University of Science and Technology of China, Hefei, Anhui 230026 (China); Zhang, Jie [School of Physics, Astronomy and Computational Sciences, George Mason University, 4400 University Drive, MSN 6A2, Fairfax, VA 22030 (United States); Chamberlin, Phillip C., E-mail: ymwang@ustc.edu.cn [Solar Physics Laboratory, Heliophysics Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)
2016-03-15
The Solar Dynamics Observatory (SDO)/EUV Variability Experiment (EVE) provides rich information on the thermodynamic processes of solar activities, particularly on solar flares. Here, we develop a method to construct thermodynamic spectrum (TDS) charts based on the EVE spectral lines. This tool could potentially be useful for extreme ultraviolet (EUV) astronomy to learn about eruptive activities on distant astronomical objects. Through several cases, we illustrate what we can learn from the TDS charts. Furthermore, we apply the TDS method to 74 flares equal to or greater than the M5.0 class, and reach the following statistical results. First, EUV peaks always lag behind the soft X-ray (SXR) peaks, and stronger flares tend to have faster cooling rates. There is a power-law correlation between the peak delay times and the cooling rates, suggesting a coherent cooling process of flares from SXR to EUV emissions. Second, there are two distinct temperature drift patterns, called Type I and Type II. For Type I flares, the enhanced emission drifts from high to low temperature like a quadrilateral, whereas for Type II flares the drift pattern looks like a triangle. Statistical analysis suggests that Type II flares are more impulsive than Type I flares. Third, for late-phase flares, the peak intensity ratio of the late phase to the main phase is roughly correlated with the flare class, and the flares with a strong late phase are all confined. We believe that the re-deposition into thermal emissions of the energy carried by a flux rope that fails to erupt is responsible for the strong late phase found in a confined flare. Furthermore, we show the signatures of the flare thermodynamic process in the chromosphere and transition region in the TDS charts. These results provide new clues to advance our understanding of the thermodynamic processes of solar flares and associated solar eruptions, e.g., coronal mass ejections.
Registrations for EVE and School and Summer Camp
Staff Association
2017-01-01
In the wake of the Open Day, held on Saturday, 4 March 2017 (see Echo No. 264), EVE and School launched into an enrolment campaign on 6, 7 and 8 March. Once again, this year, we registered a great number of applications, and most of the groups are now full. The Nursery is already full, including the groups for babies (4 months to 1 year old), walkers (1 to 2 years old), and 2- to 3-year-olds. Regarding the Kindergarten, which welcomes 2- to 3 year-old children enrolled for mornings, as well as 3- to 4-year-olds enrolled either for mornings or for full days, there are still places available in the morning groups. Finally, the School for children aged 4 to 6 (Primary 1 and 2) enrolled for mornings or for full days, will be composed of three classes of around twenty children in 2017–2018 (one class of P1 and two classes of P2). All of these classes are currently full. If you are interested in a place in the morning groups of the Kindergarten (2- to 4-year-olds), please contact us to enroll your ...
Directory of Open Access Journals (Sweden)
Helena Sanches Marcon
2017-02-01
Endogenous viral elements (EVEs) are the result of heritable horizontal gene transfer from viruses to hosts. In recent years, several EVE integration events have been reported in plants owing to the exponentially growing availability of sequenced genomes. Eucalyptus grandis is a forest tree species with a sequenced genome that is poorly studied in terms of evolution and mobile genetic element composition. Here we report the characterization of E. grandis endogenous viral element 1 (EgEVE_1), a transcriptionally active EVE with a size of 5,664 bp. Phylogenetic analysis and genomic distribution demonstrated that EgEVE_1 is a newly described member of the Caulimoviridae family, distinct from the recently characterized plant Florendoviruses. The genomic distributions of EgEVE_1 and Florendovirus are also distinct. EgEVE_1 qPCR quantification in Eucalyptus urophylla suggests that this genome has more EgEVE_1 copies than E. grandis. EgEVE_1 transcriptional activity was demonstrated by RT-qPCR in five Eucalyptus species and one intrageneric hybrid. We also identified that Eucalyptus EVEs can generate small RNAs (sRNAs), which might be involved in de novo DNA methylation and virus resistance. Our data suggest that EVE families in Eucalyptus have distinct properties, and we provide the first comparative analysis of EVEs in Eucalyptus genomes.
2013-12-23
...-AA00 Eighth Coast Guard District Annual Safety Zones; New Year's Eve Celebration/City of Mobile; Mobile Channel; Mobile, AL AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard will enforce the City of Mobile New Year's Eve Celebration safety zone in the Mobile Channel, Mobile, AL from...
Development of algorithms for real time track selection in the TOTEM experiment
Minafra, Nicola; Radicioni, E
The TOTEM experiment at the LHC has been designed to measure the total proton-proton cross-section with a luminosity-independent method and to study elastic and diffractive scattering at energies up to 14 TeV in the centre of mass. Elastic interactions are detected by Roman Pot stations placed at 147 m and 220 m along the two outgoing beams. At present, data acquired by these detectors are stored on disk without any data reduction by the data acquisition chain. In this thesis, several tracking and selection algorithms suitable for real-time implementation in the firmware of the back-end electronics are proposed and tested using real data.
Gender Ideology in The Diary Of Adam And Eve by Mark Twain
Directory of Open Access Journals (Sweden)
Paramita Ayuningtyas
2011-04-01
This article aims to show that behind the new version of Genesis in Mark Twain's novel The Diary of Adam and Eve lie some patriarchal principles. This can be seen in the characterizations of Adam and Eve. Using concepts from feminism and focusing on the context of the novel, the analysis shows that patriarchal stereotypes about gender are applied in constructing the characters of Adam and Eve. Not only the content but also the form of the diary is analyzed with the same method, and the same result is found. Therefore, it can be concluded that in spite of his progressiveness, Mark Twain still held patriarchal values in re-interpreting the tale of human creation.
An Innovative Aperture Cover Mechanism Used on SDO/EVE and MMS/SDP
Steg, Stephen; Vermeer, William; Tucker, Scott; Passe, Heather
2014-01-01
This paper describes an aperture cover mechanism that was successfully flown in four locations on SDO/EVE and is awaiting launch in sixteen locations on MMS. The design uses a paraffin actuator and a latch that secures the cover closed and removes the actuator from the load path. This latch allows the assembly to operate both as a lightweight contamination cover (SDO/EVE) and as a high-strength sensor restraint mechanism (MMS/SDP). The paper provides design, analysis and test information about the mechanism.
The EVE plus RHESSI DEM for Solar Flares, and Implications for Residual Non-Thermal X-Ray Emission
McTiernan, James; Caspi, Amir; Warren, Harry
2016-05-01
Solar flare spectra are typically dominated by thermal emission in the soft X-ray energy range. The low-energy extent of non-thermal emission can only be loosely quantified using currently available X-ray data. To address this issue, we combine observations from the EUV Variability Experiment (EVE) on board the Solar Dynamics Observatory (SDO) with X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) to calculate the differential emission measure (DEM) for solar flares. This improvement over the isothermal approximation helps to resolve the ambiguity in the range where the thermal and non-thermal components may have similar photon fluxes. This "crossover" range can extend up to 30 keV. Previous work (Caspi et al. 2014, ApJ, 788, L31) concentrated on obtaining DEM models that fit both instruments' observations well. For this current project we are interested in breaks and cutoffs in the "residual" non-thermal spectrum, i.e., the RHESSI spectrum that is left over after the DEM has accounted for the bulk of the soft X-ray emission. As in our earlier work, thermal emission is modeled using a DEM that is parametrized as multiple Gaussians in temperature. Non-thermal emission is modeled as a photon spectrum obtained using a thin-target emission model ('thin2' from the SolarSoft X-ray IDL package). Spectra from both instruments are fit simultaneously in a self-consistent manner. For this study, we have examined the DEM and non-thermal residual emission for a sample of relatively large (GOES M class and above) solar flares observed from 2011 to 2014. The results for the DEM and non-thermal parameters found using the combined EVE-RHESSI data are compared with those found using only RHESSI data.
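The DEM parametrization as a sum of Gaussians in (log) temperature, and its folding through a temperature response to predict a flux, can be sketched as follows. The component values, the response function, and the grid are all hypothetical stand-ins, not fitted EVE/RHESSI quantities:

```python
import math

def dem(logT, components):
    """DEM parametrized as a sum of Gaussians in log temperature;
    components is a list of (amplitude, centre, width) tuples."""
    return sum(a * math.exp(-0.5 * ((logT - c) / w) ** 2)
               for a, c, w in components)

def predicted_flux(response, components, logT_grid, dlogT):
    """Fold the DEM through a line/channel temperature response:
    F = sum over T of G(T) * DEM(T) * dlogT (a discretized integral)."""
    return sum(response(t) * dem(t, components) * dlogT for t in logT_grid)

# Hypothetical two-component flare DEM and a hypothetical narrow response
# peaking at logT = 7.0 (roughly 10 MK).
components = [(1.0, 6.8, 0.15), (0.4, 7.3, 0.10)]
response = lambda t: math.exp(-0.5 * ((t - 7.0) / 0.2) ** 2)
grid = [6.0 + 0.01 * i for i in range(200)]  # logT from 6.0 to 8.0
flux = predicted_flux(response, components, grid, 0.01)
```

Fitting then amounts to adjusting the Gaussian amplitudes, centres, and widths until the predicted fluxes match both the EVE line irradiances and the RHESSI soft X-ray counts.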
Hinkelbein, C; Kugel, A; Männer, R; Miiller, M
2004-01-01
Pattern recognition algorithms are used in experimental high-energy physics to extract parameters (features) of particle tracks in detectors. Fast algorithms are particularly important in the trigger system. This paper investigates the suitability of an FPGA coprocessor for speeding up the TRT-LUT algorithm, one of the feature-extraction algorithms for the second-level trigger of the ATLAS experiment (CERN). Two realizations of the same algorithm have been compared: a C++ realization tested on a computer equipped with dual Xeon 2.4 GHz CPUs, a 64-bit, 66 MHz PCI bus and 1024 MB DDR RAM main memory running Red Hat Linux 7.1, and a hybrid C++/VHDL realization tested on the same PC equipped in addition with an MPRACE board (an FPGA coprocessor board based on a Xilinx Virtex-II FPGA, made as a 64-bit, 66 MHz PCI card and developed at the University of Mannheim). Usage of the FPGA coprocessor can give a reasonable speedup in contrast to a general-purpose processor only for those algorithms (or parts of algorithms) for which there is a po...
Biology, the way it should have been, experiments with a Lamarckian algorithm
Energy Technology Data Exchange (ETDEWEB)
Brown, F.M.; Snider, J. [Univ. of Kansas, Lawrence, KS (United States)
1996-12-31
This paper investigates the case where some information can be extracted directly from the fitness function of a genetic algorithm so that mutation may be achieved essentially on the Lamarckian principle of acquired characteristics. The basic rationale is that such additional information will provide better mutations, thus speeding up the search process. Comparisons are made between a pure Neo-Darwinian genetic algorithm and this Lamarckian algorithm on a number of problems, including a problem of interest to the US Army.
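The Lamarckian principle described above, improving an individual using information extracted from the fitness function and inheriting the improved genotype, can be sketched as follows. The sphere fitness and the damped-gradient "improve" step are illustrative assumptions, not the paper's test problems:

```python
import random

def lamarckian_ga(fitness, improve, dim, pop_size=20, gens=40, seed=0):
    """Sketch of a Lamarckian GA: each offspring is locally improved using
    information extracted from the fitness function, and the *improved*
    genotype (not the original) is written back into the population, so
    acquired characteristics are inherited."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                  # minimization
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        for p in parents:
            child = [g + rng.gauss(0.0, 0.5) for g in p]  # Gaussian mutation
            children.append(improve(child))    # Lamarckian step
        pop = parents + children               # elitist replacement
    return min(pop, key=fitness)

# Minimize the sphere function; 'improve' nudges genes toward the optimum,
# standing in for the extra information assumed extractable from the fitness.
sphere = lambda x: sum(g * g for g in x)
improve = lambda x: [0.5 * g for g in x]       # one damped gradient-like step
best = lamarckian_ga(sphere, improve, dim=3)
```

A pure Neo-Darwinian GA would drop the `improve` call (mutation only), which is exactly the comparison the paper draws.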
LHCb: Optimization and Calibration of Flavour Tagging Algorithms for the LHCb experiment
Falabella, A
2013-01-01
The purpose of LHCb is to make precise measurements of $B$ and $D$ meson decays. In particular, in time-dependent CP violation studies the determination of the $B$ flavour at production is fundamental. This is known as "flavour tagging", and at LHCb it is performed with several algorithms. The performance and calibration of the flavour tagging algorithms with 2011 data collected by LHCb are reported, as is their performance in the relevant CP violation and asymmetry studies.
Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector
Energy Technology Data Exchange (ETDEWEB)
Rescigno, R., E-mail: regina.rescigno@iphc.cnrs.fr [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Finck, Ch.; Juliani, D. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Spiriti, E. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Frascati (Italy); Istituto Nazionale di Fisica Nucleare - Sezione di Roma 3 (Italy); Baudot, J. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Abou-Haidar, Z. [CNA, Sevilla (Spain); Agodi, C. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Alvarez, M.A.G. [CNA, Sevilla (Spain); Aumann, T. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); Battistoni, G. [Istituto Nazionale di Fisica Nucleare - Sezione di Milano (Italy); Bocci, A. [CNA, Sevilla (Spain); Böhlen, T.T. [European Organization for Nuclear Research CERN, Geneva (Switzerland); Medical Radiation Physics, Karolinska Institutet and Stockholm University, Stockholm (Sweden); Boudard, A. [CEA-Saclay, IRFU/SPhN, Gif sur Yvette Cedex (France); Brunetti, A.; Carpinelli, M. [Istituto Nazionale di Fisica Nucleare - Sezione di Cagliari (Italy); Università di Sassari (Italy); Cirrone, G.A.P. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Cortes-Giraldo, M.A. [Departamento de Fisica Atomica, Molecular y Nuclear, University of Sevilla, 41080-Sevilla (Spain); Cuttone, G.; De Napoli, M. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Durante, M. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); and others
2014-12-11
Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumours and the surrounding healthy tissues. Nowadays, only a limited set of double-differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energies in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performance and the accuracy of the reconstructed observables are evaluated on the basis of simulated and experimental data.
Fast parallel ring recognition algorithm in the RICH detector of the CBM experiment at FAIR
International Nuclear Information System (INIS)
Lebedev, S.
2011-01-01
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility at Darmstadt will measure dileptons emitted from the hot and dense phase in heavy-ion collisions. In the case of an electron measurement, a high purity of identified electrons is required in order to suppress the background. Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and Transition Radiation Detector (TRD). Very fast data reconstruction is extremely important for CBM because of the huge amount of data which has to be handled. In this contribution, a parallelized ring recognition algorithm is presented. Modern CPUs have two features which enable parallel programming. First, the SSE technology allows the use of the SIMD execution model. Second, multicore CPUs enable the use of multithreading. Both features have been implemented in the ring reconstruction of the RICH detector. A considerable speed-up, from 357 ms/event to 2.5 ms/event, has been achieved, including preceding code optimization, for Intel Xeon X5550 processors at 2.67 GHz
Application of an Image Tracking Algorithm in Fire Ant Motion Experiment
Directory of Open Access Journals (Sweden)
Lichuan Gui
2009-04-01
An image tracking algorithm, originally used with particle image velocimetry (PIV) to determine velocities of buoyant solid particles in water, is modified and applied in the presented work to detect the motion of fire ants on a planar surface. A group of fire ant workers is placed at the bottom of a tub and excited with vibration of selected frequency and intensity. The moving fire ants are captured with an imaging system that successively acquires frames of high digital resolution. The background noise in the image recordings is estimated by averaging hundreds of frames and removed from each frame. The individual fire ant images are identified with a recursive digital filter, and they are then tracked between frames according to the size, brightness, shape, and orientation angle of the ant image. The speed of an individual ant is determined from the displacement of its images and the time interval between frames. The trail of each fire ant is determined from the image tracking results, and a statistical analysis is conducted for all the fire ants in the group. The purpose of the experiment is to investigate the response of fire ants to substrate vibration. Test results indicate that the fire ants move faster after being excited, but the number of active ones is not increased even after a strong excitation.
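Two core steps, background removal by frame averaging and matching of detections between frames, can be sketched as follows. Frames are flattened pixel lists, detections are (x, y) centroids, and all values are invented; the recursive filter and the shape/orientation matching criteria of the actual algorithm are omitted:

```python
def remove_background(frames):
    """Average many frames to estimate the static background, then subtract
    it from each frame: stationary structure cancels, moving ants remain."""
    n = len(frames)
    bg = [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]
    return [[max(p - b, 0.0) for p, b in zip(f, bg)] for f in frames]

def track(pos_a, pos_b):
    """Greedy nearest-neighbour matching of detections between two frames;
    speed then follows as displacement / frame interval."""
    matches = []
    free = list(pos_b)
    for p in pos_a:
        q = min(free, key=lambda r: (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2)
        free.remove(q)
        matches.append((p, q))
    return matches

# Three tiny 3-pixel "frames": a bright spot appears in frame 0 only.
frames = [[1.0, 1.0, 4.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
fg = remove_background(frames)
# Two ants detected in consecutive frames.
matches = track([(0.0, 0.0), (5.0, 5.0)], [(5.5, 5.2), (0.3, -0.1)])
```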
2013-12-11
... Zone; Sacramento New Years Eve Fireworks Display, Sacramento River, Sacramento, CA AGENCY: Coast Guard... safety zone in the navigable waters of the Sacramento River in Sacramento, CA on December 31, 2013 during... Sacramento River around the Tower Bridge in Sacramento, CA in approximate position 38°34'49.98'' N, 121...
2013-12-13
... Zone; Sacramento New Years Eve Fireworks Display, Sacramento River, Sacramento, CA AGENCY: Coast Guard... safety zone in the navigable waters of the Sacramento River in Sacramento, CA on December 31, 2013 during... Sacramento River around the Tower Bridge in Sacramento, CA in approximate position 38°34'49.98'' N, 121...
2011-12-20
...-AA00 Safety Zone; City of Beaufort's Tricentennial New Year's Eve Fireworks Display, Beaufort River... establishing a temporary safety zone on the Beaufort River, in Beaufort, South Carolina, during the City of... Carolina. The fireworks will be launched from a barge, which will be located on the Beaufort River. The...
Exhibition from 6 to 17 March 2017: EVE and School covered in colours.
Staff Association
2017-01-01
The children of the EVE and School exhibited their artwork in the Main Building from 6 to 17 March. They worked on the theme of “colours” expressing themselves through various drawing, painting, collage and arts-and-crafts techniques. The result was a beautiful explosion of bright and shimmering colours!
Dancer Eve Mutso catches the eye in London / Stuart Sweeney
Sweeney, Stuart
2006-01-01
Scottish Ballet gave guest performances in London in March, featuring as a soloist the Estonian Eve Mutso, who has danced with Scottish Ballet for years. She performed in Ashley Page's ballet "Cinderella", as well as as one of the leading soloists in George Balanchine's "Episodes" and William Forsythe's "Artifact Suite".
LHCb: Optimization and Calibration of Flavour Tagging Algorithms for the LHCb experiment
Falabella, A
2013-01-01
The purpose of LHCb is to make precise measurements of $B$ and $D$ meson decays. In particular, in time-dependent CP violation studies the determination of the $B$ flavour at production ("flavour tagging") is fundamental. The performance and calibration of the flavour tagging algorithms with the 2011 data collected by LHCb are reported. The performance of the flavour tagging algorithms in the relevant CP violation and asymmetry studies is also reported.
Anomalous Temporal Behaviour of Broadband Ly Alpha Observations During Solar Flares from SDO/EVE
Milligan, Ryan O.; Chamberlin, Phillip C.
2016-01-01
Although it is the most prominent emission line in the solar spectrum, there has been a notable lack of studies devoted to variations in Lyman-alpha (Ly-alpha) emission during solar flares in recent years. However, the few examples that do exist have shown Ly-alpha emission to be a substantial radiator of the total energy budget of solar flares (of the order of 10 percent). It is also a known driver of fluctuations in the Earth's ionosphere. The EUV (Extreme Ultra-Violet) Variability Experiment (EVE) on board the Solar Dynamics Observatory (SDO) now provides broadband, photometric Ly-alpha data at 10-second cadence with its Multiple EUV Grating Spectrograph-Photometer (MEGS-P) component, and has observed scores of solar flares in the 5 years since it was launched. However, the MEGS-P time profiles appear to display a rise time of tens of minutes around the time of the flare onset. This is in stark contrast to the rapid, impulsive increase observed in other intrinsically chromospheric features (H-alpha, Ly-beta, LyC, C III, etc.). Furthermore, the emission detected by MEGS-P peaks around the time of the peak of thermal soft X-ray emission, and not during the impulsive phase, when energy deposition in the chromosphere (often assumed to be in the form of non-thermal electrons) is greatest. The time derivative of Ly-alpha lightcurves also appears to resemble that of the time derivative of soft X-rays, reminiscent of the Neupert effect. Given that spectrally-resolved Ly-alpha observations during flares from SORCE/SOLSTICE (Solar Radiation and Climate Experiment/Solar Stellar Irradiance Comparison Experiment) peak during the impulsive phase as expected, this suggests that the atypical behaviour of MEGS-P data is a manifestation of the broadband nature of the observations. This could imply that other lines and/or continuum emission that become enhanced during flares could be contributing to the passband. Users are hereby urged to exercise caution when interpreting
Directory of Open Access Journals (Sweden)
V. V. Uyba
2016-01-01
In the period from 2005 to December 2015, 37 transplantations of vascularized composite facial tissue allografts (VCAs) were performed in the world. Vascularized composite tissue allotransplantation has been recognized as a solid organ transplantation rather than a special kind of tissue transplantation. The recent classification of composite tissue allografts into the category of donor organs gave rise to a number of organizational, ethical, legal, technical, and economic problems. In May 2015, the first successful transplantation of a composite facial tissue allograft was performed in Russia. The article describes our experience of multiple team interactions at the donor management stage, involving the identification, conditioning, harvesting, and delivery of donor organs to various hospitals. A man aged 51 years, diagnosed with traumatic brain injury, became a donor after the diagnosis of brain death had been made, his death had been ascertained, and the requested consent for organ donation had been obtained from relatives. At the donor management stage, a tracheostomy was performed and a posthumous facial mask was molded. The "face first, concurrent completion" algorithm was chosen for organ harvesting and facial VCA procurement; meanwhile, the facial allograft was procured as the "full face" category. The total surgery duration from the incision to completion of the procurement (including that of solid organs) was 8 hours 20 minutes. Immediately after the procurement, the facial VCA complex was sent to the St. Petersburg clinic by medical aircraft transportation, and was transplanted there 9 hours later. Donor kidneys were transported to Moscow by civil aviation and transplanted 17 and 20 hours later. The authors believe that this clinical case report demonstrates the feasibility and safety of multiple harvesting of solid organs and a vascularized composite facial tissue allograft. However, this kind of surgery requires an essential
A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments
Harman, Radoslav
2018-01-17
We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.
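REX itself is not detailed in the abstract; as a hedged point of reference, the classical multiplicative algorithm for approximate D-optimal design, one of the existing methods such exchange algorithms are compared against, fits in a few lines. The candidate set and iteration count below are illustrative assumptions:

```python
import numpy as np

def d_optimal_weights(X, iters=500):
    # Classical multiplicative update for approximate D-optimal design.
    # X is the n x m matrix of candidate regression vectors.  At the
    # D-optimum the variance function d_i = f_i' M^{-1} f_i equals m on
    # the support, so w_i <- w_i * d_i / m has the optimum as fixed point.
    n, m = X.shape
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        M = X.T @ (w[:, None] * X)                      # information matrix
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)
        w = w * d / m
    return w

# Linear model f(x) = (1, x) on candidates {-1, 0, 1}: the D-optimal
# design is known to put weight 1/2 on each endpoint.
X = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])
w = d_optimal_weights(X)
```

REX replaces this slow global reweighting with randomized pairwise weight exchanges, which is where its reported speedup comes from.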
A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments
Harman, Radoslav; Filová, Lenka; Richtarik, Peter
2018-01-01
We propose a class of subspace ascent methods for computing optimal approximate designs that covers both existing as well as new and more efficient algorithms. Within this class of methods, we construct a simple, randomized exchange algorithm (REX). Numerical comparisons suggest that the performance of REX is comparable or superior to the performance of state-of-the-art methods across a broad range of problem structures and sizes. We focus on the most commonly used criterion of D-optimality that also has applications beyond experimental design, such as the construction of the minimum volume ellipsoid containing a given set of data-points. For D-optimality, we prove that the proposed algorithm converges to the optimum. We also provide formulas for the optimal exchange of weights in the case of the criterion of A-optimality. These formulas enable one to use REX for computing A-optimal and I-optimal designs.
Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments
Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.
2008-04-01
We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of non-collocated but otherwise identical sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine the direction-of-arrival. The directions-of-arrival determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology, and covariance matrix analysis had been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.
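The AML details are not given in the abstract; a hedged, brute-force stand-in for time-delay-of-arrival localization is a grid search for the position whose predicted delays best match the measured ones. The sensor layout, speed of sound, and grid extent are illustrative assumptions:

```python
import numpy as np

def localize_tdoa(sensors, tdoas, ref=0, c=343.0, span=20.0, step=0.5):
    # Exhaustive 2-D search: for each candidate position, predict the
    # delay of each sensor relative to the reference sensor and keep the
    # position with the smallest squared mismatch to the measured TDOAs.
    best, best_err = None, np.inf
    for x in np.arange(-span, span, step):
        for y in np.arange(-span, span, step):
            p = np.array([x, y])
            d = np.array([np.linalg.norm(p - s) for s in sensors])
            err = np.sum(((d - d[ref]) / c - tdoas) ** 2)
            if err < best_err:
                best, best_err = p, err
    return best

sensors = [np.array(v, float) for v in [(0, 0), (10, 0), (0, 10), (10, 10)]]
true = np.array([12.0, 7.5])
dists = np.array([np.linalg.norm(true - s) for s in sensors])
tdoas = (dists - dists[0]) / 343.0        # synthetic noise-free delays
est = localize_tdoa(sensors, tdoas)
```

A real AML implementation replaces this grid search with a likelihood maximization over the same delay model.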
The effect of endoleak on intra-aneurysmal pressure after EVE for abdominal aortic aneurysm
International Nuclear Information System (INIS)
Huang Sheng; Jing Zaiping; Mei Zhijun; Lu Qingsheng; Zhao Jun; Zhang Suzhen; Zhao Xin; Cai Lili; Tang Jingdong; Xiong Jiang; Liao Mingfang
2003-01-01
Objective: To investigate the intra-aneurysmal pressure curve in the presence of endoleak after endovascular exclusion (EVE) for abdominal aortic aneurysm (AAA). Methods: Infrarenal aortic aneurysms were created with bovine jugular vein segments or patches. They then underwent incomplete endovascular exclusion of the aneurysm and formation of endoleaks. The pressures of blood flow outside the graft into the sac were measured. Results: The intrasac pressure was higher than the systemic pressure in the presence of endoleak. After sealing the endoleak, the pressure decreased significantly, and the pressure curve became approximately linear. Conclusion: The change of the intra-aneurysmal pressure curve reflects the load on the aneurysmal wall after EVE, and can also help to determine whether an endoleak exists
Experiments in Discourse Analysis Impact on Information Classification and Retrieval Algorithms.
Morato, Jorge; Llorens, J.; Genova, G.; Moreiro, J. A.
2003-01-01
Discusses the inclusion of contextual information in indexing and retrieval systems to improve results, and the ability to carry out text analysis by means of linguistic knowledge. Presents research that investigated whether discourse variables have an impact on information retrieval and classification algorithms. (Author/LRW)
Experiences with serial and parallel algorithms for channel routing using simulated annealing
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back out of local minima that may be encountered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics, still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation, and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
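The acceptance rule that lets simulated annealing "back out of local minima" is compact enough to sketch generically; the toy one-dimensional objective below is an illustrative stand-in for a channel-routing cost function with overlap penalties, not the authors' router:

```python
import math
import random

def anneal(start, neighbor, cost, t0=10.0, alpha=0.995, steps=5000, seed=1):
    # Generic simulated annealing: worse moves are accepted with
    # probability exp(-delta/T), letting the search escape local minima;
    # T is lowered geometrically by the cooling factor alpha.
    rng = random.Random(seed)
    cur, cur_c = start, cost(start)
    best, best_c = cur, cur_c
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        cand_c = cost(cand)
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        t *= alpha
    return best, best_c

# Toy 1-D objective with many local minima
cost_fn = lambda x: (x - 3.0) ** 2 + math.sin(10.0 * x)
best, best_c = anneal(0.0, lambda x, rng: x + rng.gauss(0.0, 0.5), cost_fn)
```

In the channel-routing setting, `neighbor` would move or swap net segments (overlaps allowed) and `cost` would penalize overlap, as the abstract describes.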
Directory of Open Access Journals (Sweden)
Eduardo Batista de Moraes Barbosa
2017-01-01
Usually, metaheuristic algorithms are adapted to a large set of problems by applying a few modifications to parameters for each specific case. However, this flexibility demands a huge effort to tune such parameters correctly. Therefore, the tuning of metaheuristics arises as one of the most important challenges in research on these algorithms. Thus, this paper presents a methodology combining Statistical and Artificial Intelligence methods for the fine-tuning of metaheuristics. The key idea is a heuristic method, called Heuristic Oriented Racing Algorithm (HORA), which explores a search space of parameters looking for candidate configurations close to a promising alternative. To confirm the validity of this approach, we present a case study of fine-tuning two distinct metaheuristics, Simulated Annealing (SA) and Genetic Algorithm (GA), to solve the classical traveling salesman problem. The results are compared with the same metaheuristics tuned through a racing method. Broadly, the proposed approach proved to be effective in terms of the overall time of the tuning process. Our results reveal that metaheuristics tuned by means of HORA achieve, with much less computational effort, results similar to those obtained with the other fine-tuning approach.
Directory of Open Access Journals (Sweden)
Gaurav Sanghi
2018-01-01
Purpose: To determine the efficacy of the online monitoring tool WINROP (https://winrop.com/) in detecting sight-threatening Type 1 retinopathy of prematurity (ROP) in Indian preterm infants. Methods: Birth weight, gestational age, and weekly weight measurements of seventy preterm infants (<32 weeks gestation) born between June 2014 and August 2016 were entered into the WINROP algorithm. Based on weekly weight gain, the WINROP algorithm signals an alarm to indicate that an infant is at risk of sight-threatening Type 1 ROP. ROP screening was done according to standard guidelines. The negative and positive predictive values were calculated using the sensitivity, specificity, and prevalence of Type 1 ROP for the study group. The 95% confidence interval (CI) was calculated. Results: Of the seventy infants enrolled in the study, 31 (44.28%) developed Type 1 ROP. A WINROP alarm was signaled in 74.28% (52/70) of all infants and in 90.32% (28/31) of infants treated for Type 1 ROP. The specificity was 38.46% (15/39). The positive predictive value was 53.84% (95% CI: 39.59–67.53) and the negative predictive value was 83.3% (95% CI: 57.73–95.59). Conclusion: This is the first study from India using a weight gain-based algorithm for prediction of ROP. The overall sensitivity of the WINROP algorithm in detecting Type 1 ROP was 90.32%; the overall specificity was 38.46%. Population-specific tuning of the algorithm may improve the results and its practical utility for ophthalmologists and neonatologists.
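WINROP's internal model is proprietary and not described in the abstract; purely to illustrate the general idea of a weight-gain-based alarm, here is a toy surveillance rule. The expected gain and deficit limit are invented for illustration and have no clinical validity:

```python
def weight_gain_alarm(weekly_weights_g, expected_gain_g=180.0, deficit_limit_g=400.0):
    # Accumulate the shortfall of each observed weekly gain versus an
    # assumed expected gain; alarm once the cumulative deficit exceeds a
    # hypothetical limit.  All thresholds here are illustrative only and
    # are NOT the WINROP model.
    deficit = 0.0
    for prev, cur in zip(weekly_weights_g, weekly_weights_g[1:]):
        deficit += max(0.0, expected_gain_g - (cur - prev))
        if deficit > deficit_limit_g:
            return True  # flag the infant for closer ROP screening
    return False

slow = weight_gain_alarm([900, 950, 1000, 1050, 1100])      # ~50 g/week
normal = weight_gain_alarm([900, 1100, 1300, 1500, 1700])   # ~200 g/week
```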
International Nuclear Information System (INIS)
Jalmuzna, W.
2006-02-01
The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences. It could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a fast, low-latency digital controller dedicated to the LLRF system of the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems of Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule in the experiment. The device can also be used as a simulator of the cavity and a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components, such as an IQ demodulator, a division block, and a library for complex and floating point operations, and is able to shorten the implementation time of many complicated algorithms. The library has already been tested using real accelerator signals and the achieved performance is satisfactory. (Orig.)
Energy Technology Data Exchange (ETDEWEB)
Jalmuzna, W.
2006-02-15
The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences. It could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a fast, low-latency digital controller dedicated to the LLRF system of the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems of Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule in the experiment. The device can also be used as a simulator of the cavity and a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components, such as an IQ demodulator, a division block, and a library for complex and floating point operations, and is able to shorten the implementation time of many complicated algorithms. The library has already been tested using real accelerator signals and the achieved performance is satisfactory. (Orig.)
Machine Learning Algorithms for $b$-Jet Tagging at the ATLAS Experiment
Paganini, Michela; The ATLAS collaboration
2017-01-01
The separation of $b$-quark initiated jets from those coming from lighter quark flavors ($b$-tagging) is a fundamental tool for the ATLAS physics program at the CERN Large Hadron Collider. The most powerful $b$-tagging algorithms combine information from low-level taggers, exploiting reconstructed track and vertex information, into machine learning classifiers. The potential of modern deep learning techniques is explored using simulated events, and compared to that achievable from more traditional classifiers such as boosted decision trees.
Experiment of X-MP CRAY multitasking with vectorial Monte Carlo algorithm
International Nuclear Information System (INIS)
Chauvet, Y.
1984-08-01
After a short comparison between the CRAY-1S and the CRAY X-MP, we present the main multitasking tools available with FORTRAN. Next we present the main characteristics of the algorithm used and the principles of its parallelization. Finally, we show the results measured on the two computers and demonstrate that tasks should be long enough to obtain a good speed-up factor [fr]
The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos
2016-01-01
In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade that has the goal to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, in the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal one, searches clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...
Gkaitatzis, Stamatios; The ATLAS collaboration
2016-01-01
In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade that has the goal to provide a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate with a very small latency, in the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thus reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component on the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal one, searches clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...
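The FPGA implementations compared in these records are not reproducible from the abstracts; as a hedged software reference for what 2D clustering with centroid calculation does, a minimal 8-connected flood fill over hit pixels can serve (the hit coordinates and helper names are illustrative):

```python
from collections import deque

def cluster_pixels(hits):
    # Group hit pixels into 8-connected clusters via BFS flood fill and
    # return (size, unweighted centroid) per cluster -- a simplified
    # software stand-in for the FTK 2D clustering step.
    remaining = set(hits)
    clusters = []
    while remaining:
        seed = remaining.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            r, c = queue.popleft()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    n = (r + dr, c + dc)
                    if n in remaining:
                        remaining.remove(n)
                        queue.append(n)
                        cluster.append(n)
        rows = [p[0] for p in cluster]
        cols = [p[1] for p in cluster]
        clusters.append((len(cluster),
                         (sum(rows) / len(cluster), sum(cols) / len(cluster))))
    return clusters

hits = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6)]
clusters = sorted(cluster_pixels(hits))
```

The hardware version additionally weights the centroid by deposited charge and works within sliding detector windows to meet the latency budget.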
Computational experience with a parallel algorithm for tetrangle inequality bound smoothing.
Rajan, K; Deo, N
1999-09-01
Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained only for a small subset of the n(n-1)/2 atom pairs, using NMR. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all n(n-1)/2 pairwise distances. This process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and infinity. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality: the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. For every quadruple of atoms, each pass of the tetrangle inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the C(n,4) quadruples requires O(n^4) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n^3 log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with O(n / log n) processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
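The tetrangle (Cayley-Menger) pass is lengthy, but the simpler triangle inequality smoothing that this abstract builds on can be sketched directly; the matrix layout and the sample bounds below are illustrative:

```python
def triangle_smooth(lower, upper):
    # Triangle inequality bound smoothing (Crippen-Havel style):
    # upper bounds shrink via a Floyd-Warshall pass over u_ij <= u_ik + u_kj,
    # lower bounds grow from l_ij >= max(l_ik - u_kj, l_kj - u_ik).
    n = len(upper)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if upper[i][j] > upper[i][k] + upper[k][j]:
                    upper[i][j] = upper[i][k] + upper[k][j]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                lb = max(lower[i][k] - upper[k][j], lower[k][j] - upper[i][k])
                if lower[i][j] < lb:
                    lower[i][j] = lb
    return lower, upper

INF = 1e9
# Three atoms; the 0-2 distance is initially unbounded above.
upper = [[0.0, 2.0, INF], [2.0, 0.0, 3.0], [INF, 3.0, 0.0]]
lower = [[0.0, 1.0, 0.0], [1.0, 0.0, 2.5], [0.0, 2.5, 0.0]]
lower, upper = triangle_smooth(lower, upper)
```

The tetrangle pass plays the same tightening role over quadruples instead of triples, which is what makes it O(n^4) work and worth parallelizing.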
Machine Learning Algorithms for $b$-Jet Tagging at the ATLAS Experiment
Paganini, Michela; The ATLAS collaboration
2017-01-01
The separation of b-quark initiated jets from those coming from lighter quark flavours (b-tagging) is a fundamental tool for the ATLAS physics program at the CERN Large Hadron Collider. The most powerful b-tagging algorithms combine information from low-level taggers exploiting reconstructed track and vertex information using a multivariate classifier. The potential of modern Machine Learning techniques such as Recurrent Neural Networks and Deep Learning is explored using simulated events, and compared to that achievable from more traditional classifiers such as boosted decision trees.
Energy Technology Data Exchange (ETDEWEB)
Apfaltrer, Paul, E-mail: paul.apfaltrer@medma.uni-heidelberg.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Institute of Clinical Radiology and Nuclear Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim (Germany); Schoendube, Harald, E-mail: harald.schoendube@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Allmendinger, Thomas, E-mail: thomas.allmendinger@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Tricarico, Francesco, E-mail: francescotricarico82@gmail.com [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Department of Bioimaging and Radiological Sciences, Catholic University of the Sacred Heart, “A. Gemelli” Hospital, Largo A. Gemelli 8, Rome (Italy); Schindler, Andreas, E-mail: andreas.schindler@campus.lmu.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Vogt, Sebastian, E-mail: sebastian.vogt@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Sunnegårdh, Johan, E-mail: johan.sunnegardh@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); and others
2013-02-15
Objective: To evaluate the effect of a temporal resolution improvement method (TRIM) for cardiac CT on diagnostic image quality for coronary artery assessment. Materials and methods: The TRIM algorithm employs an iterative approach to reconstruct images from less than 180° of projections and uses a histogram constraint to prevent the occurrence of limited-angle artifacts. This algorithm was applied in 11 obese patients (7 men, 67.2 ± 9.8 years) who had undergone second-generation dual-source cardiac CT with 120 kV, 175–426 mAs, and 500 ms gantry rotation. All data were reconstructed with a temporal resolution of 250 ms using traditional filtered back projection (FBP) and of 200 ms using the TRIM algorithm. Contrast attenuation and contrast-to-noise ratio (CNR) were measured in the ascending aorta. The presence and severity of coronary motion artifacts was rated on a 4-point Likert scale. Results: All scans were considered of diagnostic quality. Mean BMI was 36 ± 3.6 kg/m². Average heart rate was 60 ± 9 bpm. Mean effective dose was 13.5 ± 4.6 mSv. When comparing FBP- and TRIM-reconstructed series, the attenuation within the ascending aorta (392 ± 70.7 vs. 396.8 ± 70.1 HU, p > 0.05) and CNR (13.2 ± 3.2 vs. 11.7 ± 3.1, p > 0.05) were not significantly different. A total of 110 coronary segments were evaluated. All studies were deemed diagnostic; however, there was a significant (p < 0.05) difference in the severity score distribution of coronary motion artifacts between FBP (median = 2.5) and TRIM (median = 2.0) reconstructions. Conclusion: The algorithm evaluated here delivers diagnostic imaging quality of the coronary arteries despite 500 ms gantry rotation. Possible applications include improvement of cardiac imaging on slower gantry rotation systems or mitigation of the trade-off between temporal resolution and CNR in obese patients.
Zhang, Y.; Chatterjea, Supriyo; Havinga, Paul J.M.
2007-01-01
We report our experiences with implementing a distributed and self-organizing scheduling algorithm designed for energy-efficient data gathering on a 25-node multihop wireless sensor network (WSN). The algorithm takes advantage of spatial correlations that exist in readings of adjacent sensor nodes
Development of a B-flavor tagging algorithm for the Belle II experiment
Energy Technology Data Exchange (ETDEWEB)
Abudinen, Fernando; Li Gioi, Luigi [Max-Planck-Institut fuer Physik Muenchen (Germany); Gelb, Moritz [Karlsruher Institut fuer Technologie (Germany)
2015-07-01
The high-luminosity Super B factory SuperKEKB will allow a precision measurement of the time-dependent CP violation parameters in the B-meson system. The analysis requires the reconstruction of one of the two exclusively produced neutral B mesons into a CP eigenstate and the determination of the flavor of the other one. Because of the large number of possible decays, full reconstruction of the tagging B is not feasible. Consequently, inclusive methods that utilize flavor-specific signatures of B decays are employed. The algorithm is based on multivariate methods and follows the approach adopted by BaBar. It proceeds in three steps: the track level, where the most probable target track is selected for each decay category; the event level, where the flavor-specific signatures of the selected targets are analyzed; and the combiner, where the results of all categories are combined into the final output. The framework has been completed, reaching a tagging efficiency of ca. 25%. A comprehensive optimization is being launched in order to increase the efficiency. This includes studies on the categories, the method-specific parameters and the kinematic variables. An overview of the algorithm is presented together with the results at the current status.
Myrcha, Julian; Trzciński, Tomasz; Rokita, Przemysław
2017-08-01
Analyzing massive amounts of data gathered during many high energy physics experiments, including but not limited to the LHC ALICE detector experiment, requires efficient and intuitive methods of visualisation. One of the possible approaches to that problem is stereoscopic 3D data visualisation. In this paper, we propose several methods that provide high quality data visualisation and we explain how those methods can be applied in virtual reality headsets. The outcome of this work is easily applicable to many real-life applications needed in high energy physics and can be seen as a first step towards using fully immersive virtual reality technologies within the frames of the ALICE experiment.
Thermal weapon sights with integrated fire control computers: algorithms and experiences
Rothe, Hendrik; Graswald, Markus; Breiter, Rainer
2008-04-01
The HuntIR long-range thermal weapon sight of AIM has been deployed in various out-of-area missions since 2004 as part of the German Future Infantryman system (IdZ). In 2007 AIM fielded RangIR as an upgrade with an integrated laser range finder (LRF), digital magnetic compass (DMC) and fire control unit (FCU). RangIR fills the capability gaps of day/night fire control for grenade machine guns (GMG) and the enhanced system of the IdZ. Due to proven expertise and proprietary methods in fire control, fast access to military trials for optimisation loops and similar hardware platforms, AIM and the University of the Federal Armed Forces Hamburg (HSU) decided to team up for the development of suitable fire control algorithms. The pronounced ballistic trajectory of the 40mm GMG requires highly accurate FCU solutions, specifically for air burst ammunition (ABM), and is most sensitive to faint effects like levelling or firing up/downhill. This weapon was therefore selected to validate the quality of the FCU hardware and software under relevant military conditions. For exterior ballistics, the modified point mass model according to STANAG 4355 is used. The differential equations of motion are solved numerically, and the two-point boundary value problem is solved iteratively. Computing time varies according to the precision needed and typically ranges from 0.1 to 0.5 seconds. RangIR provided outstanding hit accuracy, including ABM fuze timing, in various trials of the German Army and allied partners in 2007 and is now ready for series production. This paper deals mainly with the fundamentals of the fire control algorithms and shows how to implement them in combination with any DSP-equipped thermal weapon sight (TWS) in a variety of light supporting weapon systems.
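As a rough illustration of the two-point boundary value problem, the sketch below integrates a strongly simplified point-mass model (quadratic drag only, flat fire, no wind, Coriolis or levelling effects) and bisects on the elevation angle until the computed range matches the target. The muzzle velocity and drag constant are invented stand-ins, not STANAG 4355 or 40mm GMG data.

```python
import math

def trajectory_range(theta, v0=240.0, k=0.0011, dt=0.005, g=9.81):
    """Integrate a simplified point-mass trajectory with quadratic drag
    and return the ground range where the projectile returns to y = 0."""
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        ax, ay = -k * v * vx, -g - k * v * vy   # drag opposes velocity
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
    return x

def solve_elevation(target_range, lo=0.01, hi=0.5):
    """Shooting method: bisect on the quadrant elevation (radians) until
    the simulated range matches the target (low-arc solution only)."""
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if trajectory_range(mid) < target_range:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The real FCU additionally corrects for meteorological data, cant and slope; the iterative structure (simulate, compare, adjust aim) is the part this sketch is meant to show.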
International Nuclear Information System (INIS)
Li Liang; Chen Zhiqiang; Zhang Li; Xing Yuxiang; Kang Kejun
2007-01-01
In a traditional cone-beam computed tomography (CT) system, both hardware and computational costs are very high. In this paper, we develop a transversely truncated cone-beam X-ray CT system with a reduced-size detector positioned off-center, in which the X-ray beams cover only half of the object. The existing filtered backprojection (FBP) or backprojection-filtration (BPF) algorithms are not directly applicable in this new system. Hence, we develop a BPF-type direct backprojection algorithm. Unlike traditional rebinning methods, our algorithm directly backprojects the pretreated projection data without rebinning. This makes the algorithm compact and computationally more efficient. Because the interpolation errors of the rebinning process are avoided, higher spatial resolution is obtained. Finally, numerical simulations and practical experiments are presented to validate the proposed algorithm and compare it with a rebinning algorithm.
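The paper's BPF-type algorithm itself is not reproduced here, but the backprojection step it builds on can be sketched in a few lines; the parallel-beam geometry, grid size and detector spacing below are simplifying assumptions for illustration only.

```python
import math

def backproject(projections, angles, n, det_spacing=1.0):
    """Pixel-driven backprojection: accumulate each (pretreated)
    projection view into an n x n image without any rebinning step."""
    img = [[0.0] * n for _ in range(n)]
    c = (n - 1) / 2.0                    # image center in pixel units
    nd = len(projections[0])
    dc = (nd - 1) / 2.0                  # detector center bin
    for proj, ang in zip(projections, angles):
        ca, sa = math.cos(ang), math.sin(ang)
        for i in range(n):
            for j in range(n):
                # detector coordinate of pixel (i, j) at this view angle
                t = ((j - c) * ca + (i - c) * sa) / det_spacing + dc
                k = int(round(t))
                if 0 <= k < nd:
                    img[i][j] += proj[k] / len(angles)
    return img
```

In a BPF scheme the filtration would then be applied to this backprojected image along suitable lines; the point of the sketch is that each pixel reads the detector directly, so no interpolation onto a rebinned grid is needed.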
Taylor, John; Bambrick, Rachel; Dutton, Michelle; Harper, Robert; Ryan, Barbara; Tudor-Edwards, Rhiannon; Waterman, Heather; Whitaker, Chris; Dickinson, Chris
2014-09-01
To describe the study design and methodology for the p-EVES study, a trial designed to determine the effectiveness, cost-effectiveness and acceptability of portable Electronic Vision Enhancement System (p-EVES) devices and conventional optical low vision aids (LVAs) for near tasks in people with low vision. The p-EVES study is a prospective two-arm randomised cross-over trial to test the hypothesis that, in comparison to optical LVAs, p-EVES can be: used for longer duration; used for a wider range of tasks than a single optical LVA and/or enable users to do tasks that they were not able to do with optical LVAs; allow faster performance of instrumental activities of daily living; and allow faster reading. A total of 100 adult participants with visual impairment are currently being recruited from Manchester Royal Eye Hospital and randomised into either Group 1 (receiving the two interventions A and B in the order AB), or Group 2 (receiving the two interventions in the order BA). Intervention A is a 2-month period with conventional optical LVAs and a p-EVES device, and intervention B is a 2-month period with conventional optical LVAs only. The study adopts a mixed methods approach encompassing a broad range of outcome measures. The results will be obtained from the following primary outcome measures: Manchester Low Vision Questionnaire, capturing device 'usage' data (which devices are used, number of times, for what purposes, and for how long) and the MNRead test, measuring threshold print size, critical print size, and acuity reserve in addition to reading speed at high (≈90%) contrast. Results will also be obtained from a series of secondary outcome measures which include: assessment of timed instrumental activities of daily living and a 'near vision' visual functioning questionnaire. A companion qualitative study will permit comparison of results on how, where, and under what circumstances, p-EVES devices and LVAs are used in daily life. A health economic
Paradise Lost: Difference Between Adam and Eve's Lament on Leaving Paradise - a Contrastive Analysis
Servín, Sara Torres
2013-01-01
The difference between Adam and Eve’s lament on leaving Paradise in Milton’s Paradise Lost is striking in its contrastive content and depth. This paper analyzes the difference that exists between the feelings and spiritual attitudes that Adam and Eve express on the occasion when they are informed by the angel Michael that they have to abandon the Garden of Eden. It is a comparison of their lament in order to understand the contrast of the two attitudes that Milton wove in the tapestry that Pa...
Directory of Open Access Journals (Sweden)
Mustafa DEMETGÜL
2008-01-01
Full Text Available In this study, an artificial neural network (ANN) is developed to detect faults rapidly in a pneumatic system and to protect the system against failure. Faults in the experimental bottle-filling plant can be identified without any interference, using analog values taken from pressure sensors and linear potentiometers placed at different points of the plant. The network diagnoses the following plant faults: no bottle present, cap-closing cylinder B not working, cap-closing cylinder C not working, insufficient air pressure, water not filling, and low air pressure. The faults are diagnosed by an artificial neural network with learning vector quantization (LVQ). Although a failure could also be found using conventional programming or a PLC, the reason for using an ANN is that it indicates where the fault is, and the same approach can be applied to other systems. The aim is to detect faults with the ANN as they occur: error data from the pneumatic system are collected by a data acquisition card. It is observed that the algorithm is a very capable program for many industrial plants with mechatronic systems.
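A minimal sketch of the LVQ idea (the LVQ1 update rule): the nearest prototype is pulled toward samples of its own class and pushed away from samples of other classes. The sensor values, fault labels, prototypes and learning rate below are invented for illustration and do not come from the plant.

```python
def lvq_train(samples, labels, prototypes, proto_labels, lr=0.2, epochs=30):
    """LVQ1 training: for each sample, move the nearest prototype toward
    it if the classes match, away from it otherwise."""
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            dists = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in protos]
            k = dists.index(min(dists))              # winning prototype
            sign = 1.0 if proto_labels[k] == y else -1.0
            protos[k] = [p + sign * lr * (a - p) for a, p in zip(x, protos[k])]
    return protos

def lvq_predict(x, protos, proto_labels):
    """Classify a sensor vector by its nearest prototype."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in protos]
    return proto_labels[dists.index(min(dists))]
```

In a plant setting, each fault class (no bottle, cylinder stuck, low pressure, ...) would get one or more prototypes, and a new sensor reading is labelled by the nearest one.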
Aalaei, Shokoufeh; Shahraki, Hadi; Rowhanimanesh, Alireza; Eslami, Saeid
2016-01-01
This study addresses feature selection for breast cancer diagnosis. The proposed method uses a wrapper approach combining GA-based feature selection with a PS-classifier. The results of the experiments show that the proposed model is comparable to the other models on the Wisconsin breast cancer datasets. To
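The wrapper idea can be sketched as follows: chromosomes are feature bitmasks, and fitness is whatever score the wrapped classifier achieves on the selected columns. The toy fitness function, GA parameters and seeding below are invented stand-ins, not the paper's PS-classifier or settings.

```python
import random

def ga_select(fitness, n_feat, pop_size=12, gens=20, seed=1):
    """Wrapper-style GA feature selection: evolve feature bitmasks,
    scoring each mask with the wrapped classifier's fitness."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    pop[0] = [1] * n_feat                   # seed with the all-features mask
    for _ in range(gens):
        ranked = sorted(pop, key=fitness, reverse=True)
        pop = [ranked[0], ranked[1]]        # elitism: keep the two best
        while len(pop) < pop_size:
            a, b = rng.sample(ranked[:6], 2)     # parents from the top
            cut = rng.randrange(1, n_feat)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # bit-flip mutation
                i = rng.randrange(n_feat)
                child[i] ^= 1
            pop.append(child)
    return max(pop, key=fitness)
```

Because the all-features mask is in the initial population and elitism never discards the best individual, the returned mask can only match or beat using every feature.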
Meijer, Y.J.; Swart, D.P.J.; Baier, F.; Bhartia, P.K.; Bodeker, G.E.; Casadio, S.; Chance, K.; Frate, Del F.; Erbertseder, T.; Felder, M.D.; Flynn, L.E.; Godin-Beekmann, S.; Hansen, G.; Hasekamp, O.P.; Kaifel, A.; Kelder, H.M.; Kerridge, B.J.; Lambert, J.-C.; Landgraf, J.; Latter, B.G.; Liu, X.; McDermid, I.S.; Pachepsky, Y.; Rozanov, V.; Siddans, R.; Tellmann, S.; A, van der R.J.; Oss, van R.F.; Weber, M.; Zehner, C.
2006-01-01
An evaluation is made of ozone profiles retrieved from measurements of the nadir-viewing Global Ozone Monitoring Experiment (GOME) instrument. Currently, four different approaches are used to retrieve ozone profile information from GOME measurements, which differ in the use of external information
Pondaag, Willem; van Driest, Finn Y; Groen, Justus L; Malessy, Martijn J A
2018-01-26
OBJECTIVE The object of this study was to assess the advantages and disadvantages of early nerve repair within 2 weeks following adult traumatic brachial plexus injury (ATBPI). METHODS From 2009 onwards, the authors have strived to repair, as early as possible, extended C-5 to C-8 or T-1 lesions or complete loss of C-5 to C-6 or C-7 function in patients in whom there was clinical and radiological suspicion of root avulsion. Among a group of 36 patients surgically treated between 2009 and 2011, surgical findings in those who had undergone treatment within 2 weeks after trauma were retrospectively compared with results in those who had undergone delayed treatment. The result of biceps muscle reanimation was the primary outcome measure. RESULTS Five of the 36 patients were referred within 2 weeks after trauma and were eligible for early surgery. Nerve ruptures and/or avulsions were found in all early surgery cases. The advantages of early surgery are as follows: no scar formation, easy anatomical identification, and gap length reduction. Disadvantages include less-clear demarcation of vital nerve tissue and unfamiliarity with the interpretation of frozen-section examination findings. All 5 early-treatment patients recovered a biceps force rated Medical Research Council grade 4. CONCLUSIONS Preliminary results of nerve repair within 2 weeks of ATBPI are encouraging, and the benefits outweigh the drawbacks. The authors propose a decision algorithm to select patients eligible for early surgery. Referral standards for patients with ATBPI must be adapted to enable early surgery.
On The Eve Of IYA2009 In Canada
Hesser, James E.; Breland, K.; Hay, K.; Lane, D.; Lacasse, R.; Lemay, D.; Langill, P.; Percy, J.; Welch, D.; Woodsworth, A.
2009-01-01
Local events organized by astronomy clubs, colleges and universities across Canada will softly launch IYA on Saturday, 10 January and begin building awareness of opportunities for every Canadian to experience a 'Galileo Moment' in 2009. The launch typifies our 'grass roots' philosophy based upon our strong partnership of amateurs and professionals, which already represents an IYA legacy. In this poster we anticipate the activities of the first half of 2009 and exhibit the educational and public outreach materials and programs we have produced in both official languages, e.g., Astronomy Trading Cards, Mary Lou's New Telescope, Star Finder, etc. Some of these play central roles in our tracking of participation, including allowing people to register to have their name launched into space in 2010. Several contests for youth are underway, with the prize in one being an hour of Gemini telescope observing. In the first half of 2009 some 30,000 grade 6 students will experience 'Music of the Spheres' astronomical orchestral programming conducted by Galileo (a.k.a. Tania Miller, Victoria Symphony). Audiences in Canada and the US will experience Tafelmusik's marvelous new soundscape of music and words exploring the deep connections between astronomy and Baroque-era music. An Astronomy Kit featuring the Galileoscope for classroom and astronomy club EPO will be tested. Canada Post will issue two stamps during 100 Hours of Astronomy. A new production, Galileo Live!, by Canadian planetaria involving live actors will premiere, as will the national Galileo Legacy Lectures in which top astronomers familiarize the public with forefront research being done in Canada. Image exhibits drawing upon material generated by Canadian astronomers and artists, as well as from the IAU Cornerstones, FETTU and TWAN, are opening in malls and airports early in 2009. We will present the latest information about these and other events.
International Nuclear Information System (INIS)
Cieri, D.
2016-01-01
At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger, able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger is to build L1 tracks from the tracker information. The architecture that will be implemented in the future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and is currently being demonstrated in hardware, using the "MP7", a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach. (paper)
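A toy version of the Hough idea for straight tracks may help: each hit votes for every line it could lie on, and real tracks show up as heavily voted accumulator cells. The binning, units and hit coordinates are invented, and the real trigger works in a track-parameter space (e.g. φ0 vs. q/pT) on FPGAs rather than in Python.

```python
import math

def hough_peak(hits, n_theta=90, n_rho=60, rho_max=10.0):
    """Vote each hit (x, y) into a (theta, rho) accumulator for the line
    x*cos(theta) + y*sin(theta) = rho; return the most-voted cell."""
    acc = {}
    for x, y in hits:
        for it in range(n_theta):
            theta = math.pi * it / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ir = int((rho + rho_max) / (2.0 * rho_max) * n_rho)
            if 0 <= ir < n_rho:
                acc[(it, ir)] = acc.get((it, ir), 0) + 1
    (it, ir), votes = max(acc.items(), key=lambda kv: kv[1])
    rho = (ir + 0.5) * 2.0 * rho_max / n_rho - rho_max
    return math.pi * it / n_theta, rho, votes
```

Five collinear hits on the line x = 2 produce a five-vote peak at rho ≈ ±2, while an isolated noise hit only scatters single votes across the accumulator.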
New Peak Finding Algorithm for the BCM1F Detector of the CMS Experiment at CERN
Ruede, Alexander Jonas
The Compact Muon Solenoid experiment (CMS) is located at the most powerful particle collider in the world, the Large Hadron Collider (LHC). It detects the resulting products of the proton beams that have been accelerated and collide in the center of the experiment. For the operation of a particle collider, a prompt measurement of the so-called luminosity is of high priority, as it quantifies the ability of the collider to produce a certain number of specific particle interactions. The BCM1F detector is one of the instruments installed along the beam pipe to provide an online measurement of the luminosity. For reliable results, the electrical pulses that are produced by particle hits at the sensors have to be detected and characterised by their relative arrival time. In the currently operating back-end system, the pulses are detected using a single-threshold discriminator. This approach shows hit counting inefficiencies, especially for closely occurring, consecutive pulses. A similar puls...
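The limitation of a single-threshold discriminator can be seen in a small sketch: a local-maximum peak finder separates two overlapping pulses that a plain threshold crossing merges into one count. This is a generic illustration, not the thesis's algorithm, and the waveform samples and threshold are invented.

```python
def threshold_crossings(samples, threshold):
    """Single-threshold discriminator: count rising-edge crossings."""
    return sum(1 for a, b in zip(samples, samples[1:])
               if a <= threshold < b)

def find_peaks(samples, threshold):
    """Local-maximum peak finder: report every sample above threshold
    that is a local maximum, so consecutive pulses riding on each
    other's tails are still counted separately."""
    return [i for i in range(1, len(samples) - 1)
            if samples[i] > threshold
            and samples[i] >= samples[i - 1]
            and samples[i] > samples[i + 1]]
```

On a waveform with two pulses that never dip below threshold between them, the discriminator reports one hit while the peak finder reports two.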
Berretti, Mirko; Latino, Giuseppe
The TOTEM experiment at the Large Hadron Collider (LHC) is designed and optimized to measure the total pp cross section at a center-of-mass energy of E = 14 TeV with a precision of about 1-2%, to study the nuclear elastic pp cross section over a wide range of the squared four-momentum transfer (10^{-3} GeV^2 < |t| < 10 GeV^2), and to perform a comprehensive physics program on diffractive dissociation processes, partially in cooperation with the CMS experiment. Based on the "luminosity-independent method", the evaluation of the total cross section with such a small error will in particular require the simultaneous measurement of the pp elastic scattering cross section dσ/dt down to |t| ~ 10^{-3} GeV^2 (to be extrapolated to t = 0) as well as of the pp inelastic interaction rate, with large coverage in the forward region. The TOTEM physics programme will be accomplished using three different types of detectors: elastically scattered protons will be detected by Roman Pot detectors (based on sili...
Level 3 trigger algorithm and hardware platform for the HADES experiment
International Nuclear Information System (INIS)
Kirschner, Daniel Georg
2007-01-01
One focus of the HADES experiment is the investigation of the decay of light vector mesons inside a dense medium into lepton pairs. These decays provide a conceptually ideal tool to study the invariant mass of the vector meson in-medium, since the lepton pairs of these meson decays leave the reaction without further strong interaction. Thus, no final state interaction affects the measurement. Unfortunately, the branching ratios of vector mesons into lepton pairs are very small (∼10^-5). This calls for a high-rate, high-acceptance experiment. In addition, a sophisticated real-time trigger system is used in HADES to enrich the interesting events in the recorded data. The focus of this thesis is the development of a next-generation real-time trigger method to improve the enrichment of lepton events in the HADES trigger. In addition, a flexible hardware platform (GE-MN) was developed to implement and test the trigger method. The GE-MN features two Gigabit-Ethernet interfaces for data transport, a VMEbus for slow control and configuration, and a TigerSHARC DSP for data processing. It provides the experience to discuss the challenges and benefits of using a system based on commercial standard network technology in an experiment. The developed and tested trigger method correlates the ring information of the HADES RICH with the fired wires (cells) of the HADES MDC detector. This correlation method operates by calculating, for each event, the cells which should have seen the signal of a traversing lepton, and compares these calculated cells to all the cells that did see a signal. The cells which should have fired are calculated from the polar and azimuthal angle information of the RICH rings by assuming a straight line in space, starting at the target and extending in a direction given by the ring angles. The line extends through the inner MDC chambers, and the traversed cells are those that should have been hit. To compensate different sources for inaccuracies not
Milton’s Pro-Feminist Presentation of Eve in Paradise Lost
Directory of Open Access Journals (Sweden)
Abdullah F. Al-Badarneh
2014-07-01
Full Text Available This paper absolves John Milton from critical and feminist accusations of being anti-feminist in Paradise Lost. Such a defense of Milton is based on a close reading and analysis of the passages that focus on Adam and Eve in the text. This study highlights Milton's modern view of women as independent, free, and responsible. It also presents his iconoclasm in representing women as leaders and initiative-takers when decisions are to be made. This study shows Milton reversing traditional gender roles through the relationship of Adam and Eve. In addition, one can find a modern emphasis on the principles of equality, democracy, dialogue, freedom, and free will in the poem, designed in a way that serves Eve's rights in comparison with Adam's. Therefore, my voice in defending Milton joins the cohort of critics who regard Milton as broadly pro-feminist rather than a misogynist.
Maldonado Puente, Bryan Patricio
2014-01-01
The inner detector of the ATLAS experiment has two types of silicon detectors used for tracking: the Pixel Detector and the SCT (semiconductor tracker). Once a proton-proton collision occurs, the resulting particles pass through these detectors and are recorded as hits on the detector surfaces. A medium- to high-energy particle passes through seven different surfaces of the two detectors, leaving seven hits, while lower-energy particles can leave many more hits as they circle through the detector. For a typical event under the expected operational conditions, 30,000 hits on average are recorded by the sensors. Only high-energy particles are of interest for physics analysis and are taken into account for the path reconstruction; thus, a filtering process helps to discard the low-energy particles produced in the collision. The following report presents a solution for increasing the speed of the filtering process in the path reconstruction algorithm.
Directory of Open Access Journals (Sweden)
V. Krenn
2017-01-01
inflammatory antigens were suggested for immunohistochemical analysis (including Ki-67, CD68, CD3, CD15 and CD20). This immunohistochemical scale and the subdivision into low-grade and high-grade synovitis provided a possibility to assess the risk of development and the biological sensitivity of rheumatoid arthritis. Thus, an important histological contribution was made to primary rheumatology diagnostics, which previously did not consider tissue changes. Through the formal integration of the synovitis scale into the algorithm of synovial pathology diagnostics, a comprehensive classification was developed specifically for differentiated orthopaedic diagnostics.
Leonardi, E.; Piperno, G.; Raggi, M.
2017-10-01
A possible solution to the Dark Matter problem postulates that it interacts with Standard Model particles through a new force mediated by a "portal". If the new force has a U(1) gauge structure, the "portal" is a massive photon-like vector particle, called the dark photon or A'. The PADME experiment at the DAΦNE Beam-Test Facility (BTF) in Frascati is designed to detect dark photons produced in positron-on-fixed-target annihilations decaying to dark matter (e+e-→γA') by measuring the final state missing mass. One of the key roles in the experiment will be played by the electromagnetic calorimeter, which will be used to measure the properties of the final state recoil γ. The calorimeter will be composed of 616 21×21×230 mm3 BGO crystals, oriented with the long axis parallel to the beam direction and arranged in a roughly circular shape with a central hole to avoid the pile-up due to the large number of low-angle Bremsstrahlung photons. The total energy and position of the electromagnetic shower generated by a photon impacting on the calorimeter can be reconstructed by collecting the energy deposits in the cluster of crystals involved in the shower. In PADME we are testing two different clustering algorithms, PADME-Radius and PADME-Island, based on two complementary strategies. In this paper we describe the two algorithms, with their respective implementations, and report the results obtained with them at the PADME energy scale (< 1 GeV), both with a GEANT4-based simulation and with an existing 5×5 matrix of BGO crystals tested at the DAΦNE BTF.
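As an illustration of the island-style strategy (not the actual PADME-Island implementation), the sketch below grows clusters from seed crystals on an integer grid and sums energy-weighted positions; the thresholds and deposits are invented.

```python
def island_clusters(deposits, seed_thr=20.0, add_thr=1.0):
    """Island-style clustering on a crystal grid: start from seed
    crystals above a high threshold and absorb contiguous neighbours
    above a low threshold; return (energy, centroid) per cluster."""
    cells = {pos: e for pos, e in deposits.items() if e > add_thr}
    seeds = sorted((e, pos) for pos, e in cells.items() if e > seed_thr)
    clusters, taken = [], set()
    for _, seed in reversed(seeds):          # highest-energy seed first
        if seed in taken:
            continue
        frontier, members = [seed], {seed}
        taken.add(seed)
        while frontier:                      # flood-fill the island
            x, y = frontier.pop()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in cells and nb not in taken:
                    taken.add(nb)
                    members.add(nb)
                    frontier.append(nb)
        energy = sum(cells[p] for p in members)
        cx = sum(p[0] * cells[p] for p in members) / energy
        cy = sum(p[1] * cells[p] for p in members) / energy
        clusters.append((energy, (cx, cy)))
    return clusters
```

A radius-style algorithm would instead collect every crystal within a fixed distance of the seed; the two strategies differ mainly in how they split overlapping showers.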
International Nuclear Information System (INIS)
Piepel, Gregory F.; Cooley, Scott K.; Jones, Bradley
2005-01-01
This paper describes the solution to a unique and challenging mixture experiment design problem involving: (1) 19 and 21 components for two different parts of the design, (2) many single-component and multi-component constraints, (3) augmentation of existing data, (4) a layered design developed in stages, and (5) a no-candidate-point optimal design approach. The problem involved studying the liquidus temperature of spinel crystals as a function of nuclear waste glass composition. The statistical objective was to develop an experimental design by augmenting existing glasses with new nonradioactive and radioactive glasses chosen to cover the designated nonradioactive and radioactive experimental regions. The existing 144 glasses were expressed as 19-component nonradioactive compositions and then augmented with 40 new nonradioactive glasses. These included 8 glasses on the outer layer of the region, 27 glasses on an inner layer, 2 replicate glasses at the centroid, and one replicate each of three existing glasses. Then, the 144 + 40 = 184 glasses were expressed as 21-component radioactive compositions and augmented with 5 radioactive glasses. A D-optimal design algorithm was used to select the new outer-layer, inner-layer, and radioactive glasses. Several statistical software packages can generate D-optimal experimental designs, but nearly all of them require a set of candidate points (e.g., vertices) from which to select design points. The large number of components (19 or 21) and the many constraints made it impossible to generate the huge number of vertices and other typical candidate points. JMP was used to select design points without candidate points. JMP uses a coordinate-exchange algorithm modified for mixture experiments, which is discussed and illustrated in the paper.
Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector
Rescigno, R; Juliani, D; Spiriti, E; Baudot, J; Abou-Haidar, Z; Agodi, C; Alvarez, M A G; Aumann, T; Battistoni, G; Bocci, A; Böhlen, T T; Boudard, A; Brunetti, A; Carpinelli, M; Cirrone, G A P; Cortes-Giraldo, M A; Cuttone, G; De Napoli, M; Durante, M; Gallardo, M I; Golosio, B; Iarocci, E; Iazzi, F; Ickert, G; Introzzi, R; Krimmer, J; Kurz, N; Labalme, M; Leifels, Y; Le Fevre, A; Leray, S; Marchetto, F; Monaco, V; Morone, M C; Oliva, P; Paoloni, A; Patera, V; Piersanti, L; Pleskac, R; Quesada, J M; Randazzo, N; Romano, F; Rossi, D; Rousseau, M; Sacchi, R; Sala, P; Sarti, A; Scheidenberger, C; Schuy, C; Sciubba, A; Sfienti, C; Simon, H; Sipala, V; Tropea, S; Vanstalle, M; Younis, H
2014-01-01
Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumours. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited in the tumours and the surrounding healthy tissues. Nowadays, only a limited set of double-differential carbon fragmentation cross-sections is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on CMOS technology. The performance achieved using this device for hadrontherapy purposes is discussed. For each reconstruction step (clustering, tracking and vertexing), different...
Development experience and strategy for the combined algorithm on the alarm processing and diagnosis
International Nuclear Information System (INIS)
Chung, Hak-Yeong
1997-01-01
In this paper, I present the development experience on alarm processing and fault diagnosis gained from early 1988 to late 1995. The scope covers the prototype stage, the development stage of an on-line operator-aid system, and an intelligent human-machine interface system. In the second part, I propose a new method (APEXS) of multi-alarm processing that selects the causal alarm(s) among the occurred alarms by using the time information of each occurred alarm together with alarm-tree knowledge, and a corresponding diagnosis method based on the selected causal alarm(s) using a prescribed qualitative model. With a larger knowledge base about the plant and some modifications suitable for the real environment, APEXS will be able to be adapted to a real steam power plant. (author). 18 refs, 3 figs, 1 tab
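The causal-alarm selection idea can be sketched as follows: an occurred alarm is a causal candidate if none of its possible causes (from the alarm-tree knowledge) occurred earlier, so it cannot be explained as a mere consequence. The alarm names, timestamps and cause relations are invented, not taken from the paper's plant model.

```python
def causal_alarms(occurred, causes):
    """Select candidate causal alarms from (alarm, time) pairs.

    `causes` maps each alarm to the alarms that can trigger it
    (the alarm-tree knowledge).  An alarm is kept as causal only if
    no possible cause of it occurred strictly earlier."""
    times = dict(occurred)
    result = []
    for alarm, t in occurred:
        earlier_cause = any(times.get(c, float("inf")) < t
                            for c in causes.get(alarm, []))
        if not earlier_cause:
            result.append(alarm)
    return result
```

In a cascade such as pump trip → low flow → low pressure, only the root alarm survives the filter, which is exactly the alarm the operator should act on.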
Barokka (Okka), Khairani
2017-01-01
This article presents lessons from touring a show on pain with limited resources and in chronic pain. In 2014, I toured solo deaf-accessible poetry/art show "Eve and Mary Are Having Coffee" in various forms in the UK, Austria, and India. As an Indonesian woman with then-extreme chronic pain and fatigue, herein are lessons learned from…
Project overview of OPTIMOS-EVE: the fibre-fed multi-object spectrograph for the E-ELT
Navarro, R.; Chemla, F.; Bonifacio, P.; Flores, H.; Guinouard, I.; Huet, J.-M.; Puech, M.; Royer, F.; Pragt, J.H.; Wulterkens, G.; Sawyer, E.C.; Caldwell, M.E.; Tosh, I.A.J.; Whalley, M.S.; Woodhouse, G.F.W.; Spanò, P.; Di Marcantonio, P.; Andersen, M.I.; Dalton, G.B.; Kaper, L.; Hammer, F.
2010-01-01
OPTIMOS-EVE (OPTical Infrared Multi Object Spectrograph - Extreme Visual Explorer) is the fibre-fed multi-object spectrograph proposed for the European Extremely Large Telescope (E-ELT), planned to be operational in 2018 at Cerro Armazones (Chile). It is designed to provide a spectral resolution of
2012-01-01
The question is answered by: Valmar Pantšenko, final-year student at Tartu Adult Gymnasium (Tartu Täiskasvanute Gümnaasium); Martin Adamson, Estonian motocross champion in the Quad 100 class; Maire Tamm, head of studies at Tamsalu Gymnasium; and Eve Reisalu, teacher-methodologist at Ristiku Basic School.
International Nuclear Information System (INIS)
Clarmann, T. von; Hoepfner, M.; Funke, B.; Lopez-Puertas, M.; Dudhia, A.; Jay, V.; Schreier, F.; Ridolfi, M.; Ceccherini, S.; Kerridge, B.J.; Reburn, J.; Siddans, R.
2003-01-01
When retrieving atmospheric parameters from radiance spectra, the forward modelling of radiative transfer through the Earth's atmosphere plays a key role, since inappropriate modelling maps directly onto the retrieved state parameters. In the context of pre-launch activities for the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) experiment, a high-resolution limb emission sounder for the measurement of atmospheric composition and temperature, five scientific groups intercompared their forward models within the framework of the Advanced MIPAS Level 2 Data Analysis (AMIL2DA) project. These forward models have been developed, or in certain respects adapted, in order to be used as part of the groups' MIPAS data processing. The following functionalities have been assessed: the calculation of line strengths including non-local thermodynamic equilibrium, the evaluation of the spectral line shape, the application of chi-factors and semi-empirical continua, the interpolation of pre-tabulated absorption cross sections in pressure and temperature, line coupling, atmospheric ray tracing, the integration of the radiative transfer equation through an inhomogeneous atmosphere, the convolution of monochromatic spectra with an instrument line shape function, and the integration of the incoming radiances over the instrument field of view.
Numerical Simulations of Counter Current Flow Experiments Using a Morphology Detection Algorithm
Directory of Open Access Journals (Sweden)
Thomas Höhne
2012-09-01
In order to improve the understanding of counter-current two-phase flows and to validate new physical models, CFD simulations of a 1/3-scale model of the hot leg of a German Konvoi PWR with a rectangular cross section were performed. Selected counter-current flow limitation (CCFL) experiments at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) were calculated with ANSYS CFX 12.1 using the multi-fluid Euler-Euler modelling approach. The transient calculations were carried out using a gas/liquid inhomogeneous multiphase flow model coupled with an SST turbulence model for each phase. In the simulation, the surface drag was approximated by a new correlation inside the Algebraic Interfacial Area Density (AIAD) model. The AIAD model allows the detection of the morphological form of the two-phase flow and the corresponding switching, via a blending function, of each correlation from one morphology to another. As a result, the model can distinguish between bubbles, droplets and the free surface using the local liquid-phase volume fraction. A comparison with high-speed video observations shows good qualitative agreement, and quantitative agreement of the CCFL characteristics between calculation and experimental data was obtained. To validate the model and to study scaling effects, CFD simulations of the CCFL phenomenon in a full-scale PWR hot leg of the UPTF test facility were also performed; these results likewise indicate good agreement between calculation and experimental data. The final goal is to provide an easy-to-use AIAD framework for all ANSYS CFX users, with the possibility of implementing their own correlations.
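The morphology switching described above can be illustrated with a minimal sketch: sigmoid blending functions of the local liquid volume fraction select between drag correlations for bubbles, droplets and the free surface. The regime limits (0.3/0.7), the steepness parameter and the drag coefficients below are illustrative placeholders, not the values used in the study.

```python
import math

def f_bubble(alpha_l, limit=0.7, a=70.0):
    # -> 1 where liquid dominates, i.e. gas is dispersed as bubbles
    return 1.0 / (1.0 + math.exp(-a * (alpha_l - limit)))

def f_droplet(alpha_l, limit=0.3, a=70.0):
    # -> 1 where gas dominates, i.e. liquid is dispersed as droplets
    return 1.0 / (1.0 + math.exp(a * (alpha_l - limit)))

def f_surface(alpha_l):
    # free-surface weight is the remainder, so the three weights sum to one
    return 1.0 - f_bubble(alpha_l) - f_droplet(alpha_l)

def blended_drag(alpha_l, cd_bubble=0.44, cd_droplet=0.44, cd_surface=0.01):
    # drag coefficient switched smoothly between the three morphologies
    return (f_bubble(alpha_l) * cd_bubble
            + f_droplet(alpha_l) * cd_droplet
            + f_surface(alpha_l) * cd_surface)
```

Defining the free-surface weight as the remainder guarantees the weights sum to one, so the blended coefficient is always a convex combination of the three correlations.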
Utari, Heny Budi; Soowannayan, Chumporn; Flegel, Timothy W; Whityachumnarnkul, Boonsirm; Kruatrachue, Maleeya
2017-11-01
The viral accommodation hypothesis proposes that endogenous viral elements (EVE) from both RNA and DNA viruses are being continually integrated into the shrimp genome by natural host processes and that they can result in tolerance to viral infection by fortuitous production of antisense, immunospecific RNA (imRNA). Thus, we hypothesized that previously reported microarray results for the presence of white spot syndrome virus (WSSV) open reading frames (ORFs) formerly called 151, 366 and 427 in a domesticated giant tiger shrimp (Penaeus monodon) breeding stock might have represented expression from EVE, since the stock had shown uninterrupted freedom from white spot disease (WSD) for many generations. To test this hypothesis, 128 specimens from a current stock generation were confirmed for freedom from WSSV infection using two nested PCR detection methods. Subsequent nested-PCR testing revealed 33/128 specimens (26%) positive for at least one of the ORFs at very high sequence identity (95-99%) to extant WSSV. Positive results for ORF 366 (now known to be a fragment of the WSSV capsid protein gene) dominated (28/33 = 84.8%), so 9 arbitrarily selected 366-positive specimens were tested by strand-specific, nested RT-PCR using DNase-treated RNA templates. This revealed variable RNA expression in individual shrimp, including no RNA transcripts (n = 1), sense transcripts only (n = 1), antisense transcripts only (n = 2) or transcripts of both senses (n = 5). The latter 7 expression products indicated specimens producing putative imRNA. The variable types and numbers of the EVE and the variable RNA expression (including potential imRNA) support predictions of the viral accommodation hypothesis that EVE are randomly produced and expressed. Positive nested PCR test results for EVE of ORF 366 using DNA templates derived from shrimp sperm (germ cells) indicated that they were heritable. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo
2004-01-01
This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful on a wide range of other data-reduction problems. In particular, the design targets relevant for our wavelet-based compression algorithm are the following: a high compression ratio, a reconstruction error as small as possible and a very limited execution time. Interestingly, the results obtained are quite close to those achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain.
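As a rough illustration of lossy wavelet compression (not the actual algorithm evaluated in the paper), the sketch below applies a one-level Haar transform, discards detail coefficients below a threshold, and reconstructs. Only the surviving details need to be stored, and the per-sample reconstruction error is bounded by the threshold.

```python
def haar_fwd(x):
    """One-level Haar transform; len(x) must be even."""
    avg = [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x), 2)]
    dif = [(x[i] - x[i + 1]) / 2.0 for i in range(0, len(x), 2)]
    return avg, dif

def haar_inv(avg, dif):
    """Exact inverse of haar_fwd."""
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

def compress(x, thresh):
    """Zero out detail coefficients at or below thresh (hard thresholding)."""
    avg, dif = haar_fwd(x)
    dif_t = [d if abs(d) > thresh else 0.0 for d in dif]
    kept = sum(1 for d in dif_t if d != 0.0)
    return avg, dif_t, kept

x = [10.0, 10.2, 10.1, 9.9, 50.0, 49.5, 10.0, 10.1]
avg, dif_t, kept = compress(x, thresh=0.2)
y = haar_inv(avg, dif_t)   # reconstruction; per-sample error <= thresh
```

A real detector pipeline would recurse the transform over several levels and entropy-code the sparse coefficients, but the trade-off between kept coefficients and reconstruction error is the same.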
Polymer-based blood vessel models with micro-temperature sensors in EVE
Mizoshiri, Mizue; Ito, Yasuaki; Hayakawa, Takeshi; Maruyama, Hisataka; Sakurai, Junpei; Ikeda, Seiichi; Arai, Fumihito; Hata, Seiichi
2017-04-01
Cu-based micro-temperature sensors were directly fabricated on poly(dimethylsiloxane) (PDMS) blood vessel models in EVE using a combined process of spray coating and femtosecond laser reduction of CuO nanoparticles. A CuO nanoparticle solution coated on a PDMS blood vessel model is thermally reduced and sintered by focused femtosecond laser pulses in atmosphere to write the sensors. After removing the non-irradiated CuO nanoparticles, Cu-based micro-temperature sensors are formed. The sensors are thermistor-type devices whose temperature-dependent resistance is used to measure the temperature inside the blood vessel model. This fabrication technique is useful for the direct writing of Cu-based microsensors and actuators on arbitrary nonplanar substrates.
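The read-out principle can be sketched as follows: for a metallic Cu trace, a linear temperature coefficient of resistance (TCR) model is a common assumption. The reference resistance, reference temperature and TCR value below are illustrative, not taken from the paper.

```python
def cu_sensor_temp(r_ohm, r0=100.0, t0=25.0, alpha=0.0039):
    """Temperature in deg C from the resistance of a Cu trace,
    assuming R(T) = R0 * (1 + alpha * (T - T0)).
    R0, T0 and the TCR alpha are illustrative values."""
    return t0 + (r_ohm / r0 - 1.0) / alpha
```

Inverting the linear resistance model in this way is how an RTD-style sensor converts a measured resistance into a temperature reading.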
Kirber, Helja
2014-01-01
A review of: Sits, Eve Hele. Mooni talurahva tähtpäevik. [Tallinn] : Argo, c2011; Mooni metsauudistaja : tähelepanekuid ja lugusid metsaelust. [Tallinn] : Argo, c2012; Mooni Eestimaa raamat. [Tallinn] : Argo, c2013
Heskes, Tom; Eisinga, Rob; Breitling, Rainer
2014-11-21
The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Exact calculation as well as permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value, along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis of transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order-of-magnitude increase in throughput compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip .
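A minimal sketch of the rank product statistic with a permutation p-value follows. Note that the paper's contribution is precisely to replace such permutation schemes with analytic bounds and an accurate approximation; this is only the computationally burdensome baseline it improves on.

```python
import random
from math import prod

def rank_product(ranks):
    """Geometric mean of a molecule's ranks across k replicates."""
    return prod(ranks) ** (1.0 / len(ranks))

def perm_pvalue(obs_ranks, n_genes, n_perm=20000, seed=1):
    """Permutation p-value: fraction of random rank tuples whose
    rank product is at least as extreme (small) as the observed one."""
    rng = random.Random(seed)
    obs = rank_product(obs_ranks)
    hits = 0
    for _ in range(n_perm):
        null = [rng.randint(1, n_genes) for _ in obs_ranks]
        if rank_product(null) <= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction

p_top = perm_pvalue([1, 2, 1, 3], n_genes=1000)        # consistently top-ranked
p_mid = perm_pvalue([500, 400, 600, 550], n_genes=1000)  # unremarkable molecule
```

The add-one correction keeps the estimate away from zero, but resolving p-values far in the tail would require prohibitively many permutations, which is exactly the drawback the derived bounds avoid.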
International Nuclear Information System (INIS)
Arpinar, V E; Muftuler, L T; Hamamura, M J; Degirmenci, E
2012-01-01
Magnetic resonance electrical impedance tomography (MREIT) is a technique that produces images of conductivity in tissues and phantoms. In this technique, electrical currents are applied to an object, the resulting magnetic flux density is measured using magnetic resonance imaging (MRI), and the conductivity distribution is reconstructed from these MRI data. Currently, the technique is used in research environments, primarily studying phantoms and animals. In order to translate MREIT to clinical applications, strict safety standards need to be established, especially for safe current limits. However, there are currently no standards for safe current limits specific to MREIT. Until such standards are established, human MREIT applications need to conform to existing electrical safety standards in medical instrumentation, such as IEC601. This protocol limits patient auxiliary currents to 100 µA at low frequencies. However, published MREIT studies have utilized currents 10-400 times larger than this limit, bringing into question whether the clinical applications of MREIT are attainable under current standards. In this study, we investigated the feasibility of MREIT to accurately reconstruct the relative conductivity of a simple agarose phantom using 200 µA total injected current and tested the performance of two MREIT reconstruction algorithms. The reconstruction algorithms used are the iterative sensitivity matrix method (SMM) by Ider and Birgul (1998 Elektrik 6 215-25) with Tikhonov regularization and the harmonic B_z algorithm proposed by Oh et al (2003 Magn. Reson. Med. 50 875-8). The reconstruction techniques were tested at both 200 µA and 5 mA injected currents to investigate their noise sensitivity at low and high current conditions. It should be noted that 200 µA total injected current into a cylindrical phantom generates only 14.7 µA of current in the imaging slice. Similarly, 5 mA total injected current results in 367 µA in the imaging slice. Total acquisition
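The sensitivity matrix method with Tikhonov regularization amounts to a ridge-regularized least-squares solve, which stabilizes the inversion of an ill-conditioned sensitivity matrix against measurement noise. A toy sketch (the matrix and data below are invented for illustration; real MREIT sensitivity matrices are far larger and come from a forward model):

```python
import numpy as np

def tikhonov_solve(S, b, lam):
    """Ridge-regularized least squares:
    argmin_x ||S x - b||^2 + lam * ||x||^2."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ b)

# toy "sensitivity matrix" with nearly collinear columns (ill-conditioned,
# as in MREIT) mapping conductivity perturbations to measured B_z data
S = np.array([[1.0, 0.9],
              [0.9, 1.0],
              [0.5, 0.4]])
x_true = np.array([1.0, -0.5])
b = S @ x_true + np.array([1e-3, -1e-3, 5e-4])   # synthetic noisy data
x_hat = tikhonov_solve(S, b, lam=1e-3)
```

The regularization parameter lam trades a small bias toward zero for robustness to noise; at the low injected currents discussed above, the noise term dominates and this trade-off becomes critical.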
Directory of Open Access Journals (Sweden)
Eloise T. Choice
2016-11-01
Inherent in the war between Creation ("Intelligent Design") and the Theory of Evolution ("natural selection") is the scientific belief in the millennia-long "evolution" of humankind, versus the Christian belief in the creation of the Biblical Adam and Eve circa 7,700 B.C. This article shows that there is no such thing as an "evolution" of Earth's life forms. Over the course of millions of years, God created, then re-designed, all Earth's life forms in successive stages, which accounts for the non-linear progression of said forms and particularly of humanoids. Integrating mitochondrial DNA and archaeological evidence with Judeo-Christian and extra-Biblical texts shows that after God created then re-designed pre-Adamic humans, He created the Biblical Adam and Eve circa 9,700 years ago during the Mesolithic Period as prototypes for present-day man. Moreover, archaeological evidence, plus Judeo-Christian and extra-Biblical texts, suggests that Eve and Adam co-existed with Homo sapiens during the Mesolithic/Neolithic Periods.
Staff Association
2017-01-01
On Saturday, 4 March 2017, the Children's Day-Care Centre (EVE) and School of the CERN Staff Association opened its doors to allow interested parents to visit the facility. Carole Dargagnon presents the EVE and School during the open day. This event was a great success and brought together many families. The Open Day was held in two sessions (the first at 10 am and the second at 11 am), each consisting of two parts: a general presentation of the structure by the Headmistress, Carole Dargagnon, and a tour of the installations with Marie-Luz Cavagna and Stéphanie Palluel, the administrative assistants. The management team was delighted to offer parents the opportunity to take part in this pleasant event, where everyone could express themselves, ask questions and find answers in a friendly atmosphere.
Kick off of the 2017-2018 school year at the EVE and School of the CERN Staff Association
Staff Association
2017-01-01
The Children’s Day-Care Centre (“Espace de Vie Enfantine” - EVE) and School of the CERN Staff Association opened its doors once again to welcome the children, along with the teaching and administrative staff of the structure. The start of the school year was carried out gradually and in small groups to allow quality interaction between children, professionals and parents. At the EVE (Nursery and Kindergarten) and School, the children have the opportunity to thrive in a privileged environment, rich in cultural diversity, since the families (parents and children) come from many different nationalities. The teaching staff do their utmost to ensure that the children can become more autonomous and develop their social skills, all the while taking care of their well-being. This year, several new features are being introduced, for instance, first steps towards English language awareness. Indeed, the children will get to discover the English language in creative classes together with tr...
Alinia, Siros; Rezaei, Satar; Daroudi, Rajabali; Hadadi, Mashyaneh; Akbari Sari, Ali
2013-01-01
Fireworks are commonly used in local and national celebrations. The aim of this study is to explore the extent, nature and hospital costs of injuries related to the Persian Wednesday Eve festival in Iran. Data on injuries caused by fireworks during the 2009 Persian Wednesday Eve festival were collected from the national Ministry of Health database. Injuries were divided into nine groups, and the average and total hospital costs were estimated for each group. The cost of care for patients with burns was estimated by reviewing a sample of 100 patients randomly selected from a large burn center in Tehran. Other costs were estimated by conducting semi-structured interviews with expert managers at two large government hospitals. A total of 1817 people were injured by fireworks during the 2009 Wednesday Eve festival. The most frequently injured sites were the hand (43.3%), eye (24.5%) and face (13.2%), and the most common types of injury were burns (39.9%), contusions/abrasions (24.6%) and lacerations (12.7%). The mean length of hospital stay was 8.15 days for patients with burns, 10.7 days for those with amputations, and 3 days for those with other types of injury. The total hospital cost of the injuries was US$ 284 000 and the average cost per injury was US$ 156. The total hospital cost for patients with amputations was US$ 48 598. Most of the costs were related to burns (56.6%), followed by amputations (12.2%). Injuries related to the Persian Wednesday Eve festival are common and lead to extensive morbidity and medical costs. © 2013 KUMS, All rights reserved.
At the sources of inspiration in Amsterdam: what to strive for and what to avoid... / Eve Koha
Koha, Eve
2002-01-01
On the 32nd annual conference of the worldwide glass art association Glass Art Society, held in Amsterdam from 30 May to 2 June. Estonia was represented by Viivi-Ann Keerdo, Eve Koha, Kai Koppel, Eeva Käsper-Lennuk, Ivo Lill and Tiina Sarapu. At the exhibition "Young and Hot, European Emerging Artists", Estonia was represented by E. Käsper-Lennuk and T. Sarapu. Lifetime achievement awards went to the American Fritz Dreisbach (b. 1941) and the Dane Finn Lynggaard (b. 1939)
Tatli, Hamza; Yucel, Derya; Yilmaz, Sercan; Fayda, Merdan
2018-02-01
The aim of this study is to develop an algorithm for independent MU/treatment time (TT) verification of non-IMRT treatment plans, as part of a QA program to ensure treatment delivery accuracy. Two radiotherapy delivery units and their treatment planning systems (TPSs) were commissioned at the Liv Hospital Radiation Medicine Center, Tbilisi, Georgia. Beam data were collected according to the vendors' collection guidelines and the recommendations of AAPM reports, and were processed with Microsoft Excel during in-house algorithm development. The algorithm is designed and optimized for calculating SSD and SAD treatment plans, based on the dose calculation recommendations of AAPM TG-114, and is coded and embedded in an MS Excel spreadsheet as a preliminary verification algorithm (VA). Treatment verification plans were created by the TPSs based on IAEA TRS-430 recommendations and also calculated by the VA; point measurements were collected with a solid water phantom and compared. The study showed that the in-house VA can be used for MU/TT verification of non-IMRT plans.
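An independent point-dose MU check in the spirit of TG-114 divides the prescribed point dose by a chain of dosimetric factors. The factor names and the numbers below are illustrative assumptions, not the commissioned beam data of the study.

```python
def monitor_units(dose_cgy, dref_cgy_per_mu=1.0, sc=1.0, sp=1.0,
                  tpr=1.0, inv_sq=1.0, wedge=1.0):
    """Independent point-dose MU check:
    MU = D / (D'_ref * Sc * Sp * TPR * InvSq * WF).
    All factor values are illustrative, not commissioned beam data."""
    return dose_cgy / (dref_cgy_per_mu * sc * sp * tpr * inv_sq * wedge)

def percent_diff(mu_tps, mu_check):
    """Comparison of the TPS MU against the independent check."""
    return 100.0 * (mu_tps - mu_check) / mu_check

# hypothetical 200 cGy fraction with assumed scatter and TPR factors
mu_check = monitor_units(200.0, 1.0, sc=0.99, sp=0.995, tpr=0.85)
```

In practice the check passes when the percent difference between the TPS and the independent calculation falls within the clinic's action level (TG-114 discusses typical tolerances of a few percent).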
Saavedra, Juan Alejandro
Quality Control (QC) and Quality Assurance (QA) strategies vary significantly across industries in the manufacturing sector depending on the product being built. Such strategies range from simple statistical analysis and process controls to the decision-making process of reworking, repairing, or scrapping defective product. This study proposes an optimal QC methodology that includes rework stations in the manufacturing process by identifying the number and location of these workstations. The factors considered in optimizing these stations are cost, cycle time, reworkability and rework benefit. The goal is to minimize the cost and cycle time of the process while increasing reworkability and rework benefit. The specific objectives of this study are: (1) to propose a cost estimation model that includes energy consumption, and (2) to propose an optimal QC methodology to identify the quantity and location of rework workstations. The cost estimation model includes energy consumption as part of the product direct cost and allows the user to calculate product direct cost as the quality sigma level of the process changes. This provides a benefit because a complete cost estimation does not need to be performed every time the process yield changes. This cost estimation model is then used for the QC strategy optimization. In order to propose a methodology that provides an optimal QC strategy, the possible factors that affect QC were evaluated. A screening Design of Experiments (DOE) was performed on seven initial factors and identified three significant factors; it also showed that one response variable was not required for the optimization process. A full factorial DOE was then performed to verify the significant factors obtained previously. The QC strategy optimization is performed through a Genetic Algorithm (GA), which allows the evaluation of several solutions in order to obtain feasible optimal solutions. The GA
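A GA for this kind of station-selection problem can be sketched with a binary chromosome (install a rework station at a position or not) and a fitness that trades rework benefit against cost. The per-station benefit/cost data and the GA settings below are invented for illustration, not taken from the study.

```python
import random

rng = random.Random(42)

# illustrative per-station rework benefit and cost (assumed data)
BENEFIT = [9, 2, 7, 1, 8, 3, 6, 2]
COST    = [4, 5, 3, 6, 2, 4, 3, 5]

def fitness(chrom):
    """Net gain of installing rework stations at the selected positions."""
    return sum(b - c for g, b, c in zip(chrom, BENEFIT, COST) if g)

def evolve(pop_size=40, gens=80, p_mut=0.05):
    n = len(BENEFIT)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:4]                            # elitism: keep the best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:20], 2)     # truncation selection
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()   # bit i == 1 means "install a rework station at position i"
```

A real formulation would make the fitness multi-objective (cost, cycle time, reworkability, rework benefit) rather than a single separable net gain, but the chromosome encoding and the selection/crossover/mutation loop carry over unchanged.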
Start of enrolment for the Champs-Fréchets crèche (EVE)
HR Department
2008-01-01
As announced in Bulletin 43/2007, CERN signed an agreement with the commune of Meyrin on 17 October 2007 under which 20 places will be reserved for the children of CERN personnel in the Champs-Fréchets day care centre (EVE), which will open on Monday, 25 August, and CERN will contribute to the funding. This agreement allows members of the CERN personnel (employees and associates) access to the crèche, for children aged between 4 months and 4 years, irrespective of where they are living. Applications for the school year starting autumn 2008 will be accepted from Monday 17 March until Monday 30 June 2008. Members of the personnel must complete the enrolment formalities with the Meyrin infant education service themselves: Mairie de Meyrin Service de la Petite Enfance 2 rue des Boudines Case postale 367 - 1217 Meyrin 1 - Tel. + 41 (0)22 782 21 21 mailto:meyrin@meyrin.ch http://www.meyrin.ch/petiteenfance Application forms (in PDF) can be downloaded from the website of the com...
The Eroticism of Artificial Flesh in Villiers de L'Isle Adam's L'Eve Future
Directory of Open Access Journals (Sweden)
Patricia Pulham
2008-10-01
Villiers de L'Isle Adam's 'L'Eve Future', published in 1886, features a fictional version of the inventor Thomas Edison who constructs a complex, custom-made android for the Englishman Lord Ewald as a substitute for his unsatisfactory lover. Hadaly, the android, has a number of literary and cultural precursors and successors. Her most commonly accepted ancestor is Olympia in E. T. A. Hoffmann's 'The Sandman' (1816), and among her fascinating descendants are Oskar Kokoschka's 'Silent Woman'; Model Borghild, a sex doll designed by German technicians during World War II; 'Caracas' in Tommaso Landolfi's short story 'Gogol's Wife' (1954); a variety of gynoids and golems from the realms of science fiction, including Ira Levin's 'Stepford Wives' (1972); and, most recently, that silicon masterpiece, the Real Doll. All, arguably, have their genesis in the classical myth of Pygmalion. This essay considers the tension between animation and stasis in relation to this myth, and explores the necrophiliac aesthetic implicit in Villiers's novel.
Allowances for officers of the Russian and Austro-Hungarian armies on the eve of the First World War
Directory of Open Access Journals (Sweden)
Alexander P. Abramov
2016-09-01
On the basis of historical material, this article describes the measures taken by the state and military administrations of Russia and Austria-Hungary on the eve of the First World War to improve the welfare of their officers through various forms of material incentive, reflected in cash payments, promotions, awards and social guarantees. Drawing on archival materials from the period under study, open scientific publications and Internet resources, it examines the features of salary assignment and the various allowances and compensations in the Russian army in comparison with those of the Austro-Hungarian army, Russia's opponent in the First World War. The author notes that the existing system of money allowances in the Russian army was more advantageous than that of the Austro-Hungarian army. However, neither system could fully meet the needs of the majority of officers of the two armies that entered the First World War as opponents. One of the major shortcomings, both in Russia and in the Austro-Hungarian Empire, was the wide gap in all kinds of money allowances between chief officers, staff officers and generals.
Didkovsky, Leonid; Wieman, Seth; Woods, Thomas
2016-10-01
The Extreme ultraviolet Spectrophotometer (ESP), one of the channels of SDO's Extreme ultraviolet Variability Experiment (EVE), measures solar irradiance in several EUV and soft x-ray (SXR) bands isolated using thin-film filters and a transmission diffraction grating, and includes a quad-diode detector positioned at the grating zeroth order to observe in a wavelength band from about 0.1 to 7.0 nm. The quad-diode signal also includes some contribution from shorter wavelengths in the grating's first order, and the ratio of zeroth-order to first-order signal depends on both source geometry and spectral distribution. For example, radiometric calibration of the ESP zeroth order at the NIST SURF BL-2 with a near-parallel beam yields a different zeroth-to-first-order ratio than modeled for solar observations. The relative influence of "uncalibrated" first-order irradiance during solar observations is a function of the solar spectral irradiance and the locations of large active regions or solar flares. We discuss how the "uncalibrated" first-order "solar" component and the use of variable solar reference spectra affect the determination of absolute SXR irradiance, which currently may be significantly overestimated during high solar activity.
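If the first-to-zeroth-order ratio r can be modeled, the zeroth-order irradiance can be recovered from the measured signal as S/(1 + r). The toy numbers below (a 5% ratio at calibration versus 15% during high activity) are invented to illustrate how applying the calibration-beam ratio to solar data overestimates the irradiance; they are not the ESP values.

```python
def zeroth_order_irradiance(signal, first_to_zeroth_ratio):
    """Remove the modeled first-order contribution, assuming the quad-diode
    signal is S = S0 * (1 + r), with r the first-to-zeroth-order ratio."""
    return signal / (1.0 + first_to_zeroth_ratio)

S = 1.0                                      # normalized measured signal
i0_cal   = zeroth_order_irradiance(S, 0.05)  # ratio modeled at calibration
i0_solar = zeroth_order_irradiance(S, 0.15)  # ratio during high activity
overestimate = i0_cal / i0_solar - 1.0       # bias from using the wrong ratio
```

With these assumed ratios, using the calibration value inflates the inferred zeroth-order irradiance by about (1.15/1.05 - 1), i.e. roughly 9.5%, illustrating why the ratio must track the solar spectral distribution.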
Yang, Li-Tao; Li, Hau-Bin; Yue, Qian; Kang, Ke-Jun; Cheng, Jian-Ping; Li, Yuan-Jing; Tsz-King Wong, Henry; Aǧartioǧlu, M.; An, Hai-Peng; Chang, Jian-Ping; Chen, Jing-Han; Chen, Yun-Hua; Deng, Zhi; Du, Qiang; Gong, Hui; He, Li; Hu, Jin-Wei; Hu, Qing-Dong; Huang, Han-Xiong; Jia, Li-Ping; Jiang, Hao; Li, Hong; Li, Jian-Min; Li, Jin; Li, Xia; Li, Xue-Qian; Li, Yu-Lan; Lin, Fong-Kay; Lin, Shin-Ted; Liu, Shu-Kui; Liu, Zhong-Zhi; Ma, Hao; Ma, Jing-Lu; Pan, Hui; Ren, Jie; Ruan, Xi-Chao; Sevda, B.; Sharma, Vivek; Shen, Man-Bin; Singh, Lakhwinder; Singh, Manoj Kumar; Tang, Chang-Jian; Tang, Wei-You; Tian, Yang; Wang, Ji-Min; Wang, Li; Wang, Qing; Wang, Yi; Wu, Shi-Yong; Wu, Yu-Cheng; Xing, Hao-Yang; Xu, Yin; Xue, Tao; Yang, Song-Wei; Yi, Nan; Yu, Chun-Xu; Yu, Hai-Jun; Yue, Jian-Feng; Zeng, Xiong-Hui; Zeng, Ming; Zeng, Zhi; Zhang, Yun-Hua; Zhao, Ming-Gang; Zhao, Wei; Zhou, Ji-Fang; Zhou, Zu-Ying; Zhu, Jing-Jun; Zhu, Zhong-Hua; CDEX Collaboration
2018-01-01
We report results of a search for light weakly interacting massive particle (WIMP) dark matter from the CDEX-1 experiment at the China Jinping Underground Laboratory (CJPL). Constraints on WIMP-nucleon spin-independent (SI) and spin-dependent (SD) couplings are derived with a physics threshold of 160 eVee, from an exposure of 737.1 kg-days. The SI and SD limits extend the lower reach of light WIMPs to 2 GeV and improve over our earlier bounds at WIMP mass less than 6 GeV. Supported by the National Key Research and Development Program of China (2017YFA0402200, 2017YFA0402201), the National Natural Science Foundation of China (11175099, 11275107, 11475117, 11475099, 11475092, 11675088), the National Basic Research Program of China (973 Program) (2010CB833006). We thank the support of grants from the Tsinghua University Initiative Scientific Research Program (20121088494, 20151080354) and the Academia Sinica Investigator Award 2011-15, contracts 103-2112-M-001-024 and 104-2112-M-001-038-MY3 from the Ministry of Science and Technology of Taiwan.
International Nuclear Information System (INIS)
Gonzalez, V.; Dolores, VV. de los; Pastor, V.; Martinez, J.; Gimeno, J.; Guardino, C.; Crispin, V.
2011-01-01
The iGRiMLO algorithm has been used at our institution for the individual verification of treatment plans for step-and-shoot intensity-modulated radiotherapy (IMRT), by means of non-transmission pretreatment portal dosimetry, delivering the plan directly to an amorphous-silicon flat-panel electronic portal imaging device (EPID).
Bousson, N; The ATLAS collaboration
2011-01-01
The ability to identify jets containing b-hadrons is important for the high-pT physics program of a general-purpose experiment at the LHC such as ATLAS. Two robust b-tagging algorithms, taking advantage of the impact parameters of tracks or reconstructing secondary vertices, were swiftly commissioned and used for several analyses of the 2010 data: bottom and top quark production cross-section measurements, searches for supersymmetry and new physics, etc. Building on this success, several more advanced b-tagging algorithms are commissioned with the 2011 data. All these algorithms are based on a likelihood-ratio formalism to compare the signal (b-jet) and background (light or, in some cases, charm jet) hypotheses, using Monte Carlo predictions. The accuracy with which the simulation reproduces the experimental data is therefore critical and is explained in detail, as well as the expected improvement in performance brought by these new algorithms and some first results on the measurement in data of their pe...
Bousson, N; The ATLAS collaboration
2011-01-01
The ability to identify jets containing b-hadrons is important for the high-pT physics program of a general-purpose experiment at the LHC such as ATLAS. Two robust b-tagging algorithms, JetProb and SV0, taking advantage of the impact parameters of tracks or reconstructing secondary vertices, were swiftly commissioned and used for several analyses of the 2010 data: bottom and top quark production cross-section measurements, searches for supersymmetry and new physics, etc. Building on this success, several more advanced b-tagging algorithms are commissioned using ~330 pb^{-1} of the 2011 data. All these algorithms are based on a likelihood-ratio formalism to compare the signal (b-jet) and background (light or, in some cases, charm jet) hypotheses, using Monte Carlo predictions. The accuracy with which the simulation reproduces the experimental data is therefore critical and is explained in detail, as well as the expected improvement in performance brought by these new algorithms.
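The likelihood-ratio formalism mentioned in both records can be sketched as a sum of per-track log-likelihood ratios of the impact-parameter significance under the b-jet and light-jet hypotheses. The Gaussian templates below are illustrative stand-ins for the Monte Carlo templates, not the ATLAS ones.

```python
import math

def gauss(x, mu, sigma):
    """Normal pdf, used here as a toy template."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def jet_weight(ip_significances):
    """Sum of per-track log-likelihood ratios comparing the b-jet and
    light-jet hypotheses; the Gaussian pdfs are illustrative stand-ins
    for Monte Carlo templates."""
    w = 0.0
    for s in ip_significances:
        p_b     = gauss(s, 2.0, 2.5)   # b jets: displaced tracks, positive IP
        p_light = gauss(s, 0.0, 1.0)   # light jets: prompt tracks
        w += math.log(p_b / p_light)
    return w
```

A jet with several significantly displaced tracks accumulates a large positive weight, while a jet of prompt tracks accumulates a negative one; cutting on this weight defines a tagger working point.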
Schwamm, Christoph
2015-01-01
This article analyses the illness experiences of male patients of the Heidelberg University Psychiatric Hospital during the protests against psychiatry in the year 1973. Protest is one of the most important expressions of masculinity in socially disadvantaged men, such as men with mental disorders. The analysis of 100 medical records shows that some patients tried to construct themselves as men in a way that was explicitly motivated by antipsychiatric ideas: they questioned psychiatric authority, behaved in "sexually inappropriate" ways, or used drugs. On the eve of psychiatric reform in West Germany, those patients were well aware that the alternative, complying with the treatment, would put them at considerable risk. In addition to the usual inference of hegemonic or normative masculinities as risk factors, the behaviour of those "rebellious patients" has to be interpreted as an individual coping strategy.
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you've hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it's not the ocean, it's the internet we're talking about, and it's not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to 'tame the algorithmic tiger'. While this is a valuable and often inspiring approach, we...
Khang, Hyun Soo; Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun
2002-06-01
Recently, a new static resistivity image reconstruction algorithm was proposed that utilizes internal current density data obtained by the magnetic resonance current density imaging technique. This new imaging method is called magnetic resonance electrical impedance tomography (MREIT). The derivation and performance of the J-substitution algorithm in MREIT have been reported, via computer simulations, as a new accurate and high-resolution static impedance imaging technique. In this paper, we present experimental procedures, denoising techniques, and image reconstructions using a 0.3-tesla (T) experimental MREIT system and saline phantoms. MREIT using the J-substitution algorithm effectively utilizes the internal current density information, resolving the problem inherent in conventional EIT, namely the low sensitivity of boundary measurements to changes in internal tissue resistivity values. Resistivity images of saline phantoms show an accuracy of 6.8%-47.2% and a spatial resolution of 64 x 64. Both can be significantly improved by using an MRI system with a better signal-to-noise ratio.
Firework-related injuries in Tehran's Persian Wednesday Eve Festival (Chaharshanbe Soori).
Tavakoli, Hassan; Khashayar, Patricia; Amoli, Hadi Ahmadi; Esfandiari, Khalil; Ashegh, Hossein; Rezaii, Jalal; Salimi, Javad
2011-03-01
Fireworks are the leading cause of injuries such as burns and amputations during the Persian Wednesday Eve Festival (Chaharshanbeh Soori). This study was designed to explore the age of the high-risk population, the type of fireworks most frequently causing injury, the pattern of injury, and the frequency of permanent disabilities. This cohort study was performed by Tehran Emergency Medical Services at different medical centers all around Tehran, Iran, in individuals referred due to firework-related injuries during 1 month surrounding the festival in the year 2007. The following information was extracted from the patients' medical records: demographic data, the type of fireworks causing injury, the pattern and severity of the injury, the pre-hospital and hospital care provided for the patient, and the patient's condition at the time of discharge. In addition, information on the severity of the remaining disability was recorded 8 months after the injury. There were 197 patients enrolled in the study with a mean age of 20.94 ± 11.31 years; the majority of them were male. Fuse-detonated noisemakers and homemade grenades were the most frequent causes of injury. Hand injury was reported in 39.8% of the cases. Amputation and long-term disability were found in 6 and 12 cases, respectively. None of the patients died during the study period. The fireworks used during a Chaharshanbe Soori ceremony were responsible for a considerable number of injuries to different parts of the body, and some of them led to permanent disabilities. Copyright © 2011 Elsevier Inc. All rights reserved.
THE FOREIGN POLICY OF THE BOLSHEVIKS ON THE EVE OF THE PARIS PEACE CONFERENCE OF 1919
Directory of Open Access Journals (Sweden)
Elena Nikolaevna Emelyanova
2017-11-01
Purpose. The article examines the foreign policy activities of the Bolshevik leadership on the eve of the opening of the Paris Peace Conference. The strategy and tactics of the RCP(B) in the autumn-winter of 1918–1919 are analyzed, as well as the attempt to establish relations with the great powers hostile to the RSFSR and the striving of Soviet Russia to take its place in the new Versailles system. The ways to achieve this goal are explored. The methodological basis of the article is formed by the principles of objectivity, historicism, a critical approach to the sources used, and a comprehensive analysis of the problem posed. Results. It is argued that the international situation, the growth of the revolutionary movement in Europe in 1918–1919, and the unification of all left-wing forces around the Soviet state forced the leaders of Britain and the US to send their representative for talks with the Bolsheviks. On the other hand, the Bolshevik leadership sought to reach agreement with the world powers on recognition of the Soviet government, even by temporarily abandoning international goals; the implementation of these tasks was delegated to the Communist International, created in March 1919. The preservation of the Soviet state was placed by the Bolsheviks above the idea of a "world revolution". Scope of application of the results. The results of the work can be used for further research in the field of history and political science, as well as in the teaching of these disciplines at university.
Kreplin, Katharina
This thesis presents a novel flavour tagging algorithm using machine learning techniques and a precision measurement of the $B^0 -\\overline{B^0}$ oscillation frequency $\\Delta m_d$ using semileptonic $B^0$ decays. The LHC Run I data set is used which corresponds to $3 \\textrm{fb}^{-1}$ of data taken by the LHCb experiment at a center-of-mass energy of 7 TeV and 8 TeV. The performance of flavour tagging algorithms, exploiting the $b\\bar{b}$ pair production and the $b$ quark hadronization, is relatively low at the LHC due to the large amount of soft QCD background in inelastic proton-proton collisions. The standard approach is a cut-based selection of particles, whose charges are correlated to the production flavour of the $B$ meson. The novel tagging algorithm classifies the particles using an artificial neural network (ANN). It assigns higher weights to particles, which are likely to be correlated to the $b$ flavour. A second ANN combines the particles with the highest weights to derive the tagging decision. ...
Dahlhoff, Andrea
2006-01-01
At the LHC in Geneva, the ATLAS experiment will start in 2007. The first part of the present work describes the implementation of trigger algorithms for the Jet/Energy Processor (JEP) as well as all other required features such as control, diagnostics and read-out. The JEP is one of three processing units of the ATLAS Level-1 Calorimeter Trigger. It identifies and finds the location of jets, and sums total and missing transverse energy information from the trigger data. The Jet/Energy Module (JEM) is the main module of the JEP. The JEM prototype is designed to be functionally identical to the final production module for ATLAS. The thesis presents a description of the architecture, required functionality, and jet and energy summation algorithm of the JEM. Various input test-vector patterns were used to check the performance of the complete energy summation algorithm. The test results using two JEM prototypes are presented and discussed. The subject of the second part is a Monte Carlo study which determines the ...
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, applications are surveyed, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
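The concepts the abstract introduces (selection, crossover, mutation, survival of the fittest) can be sketched in a few lines. Everything below is illustrative rather than taken from the project described: the parameter values and the "one-max" fitness function are arbitrary choices for demonstration.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=100,
                      p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal generational GA maximizing `fitness` over bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]  # elitism: the two fittest survive unchanged
        while len(next_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            child = p1[:]
            if rng.random() < p_cross:        # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            # bit-flip mutation
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # "one-max": fitness counts the 1-bits
```

On this toy problem the population converges toward the all-ones string within the budgeted generations.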
Kamatani, Takashi; Fukunaga, Koichi; Miyata, Kaede; Shirasaki, Yoshitaka; Tanaka, Junji; Baba, Rie; Matsusaka, Masako; Kamatani, Naoyuki; Moro, Kazuyo; Betsuyaku, Tomoko; Uemura, Sotaro
2017-12-04
For single-cell experiments, it is important to accurately count the number of viable cells in a nanoliter well. We used a deep learning-based convolutional neural network (CNN) on a large amount of digital data obtained as microscopic images. The training set consisted of 103 019 samples, each representing a microscopic grayscale image. After extensive training, the CNN was able to classify the samples into four categories, i.e., 0, 1, 2, and more than 2 cells per well, with an accuracy of 98.3% when compared to determination by two trained technicians. By analyzing the samples for which judgments were discordant, we found that the judgment by the technicians was relatively correct, although cell counting from the images of discordant samples was often difficult. Based on these results, the system was further enhanced by introducing a new algorithm in which the highest outputs from the CNN were used, increasing the accuracy to higher than 99%. Our system was able to classify the data even from wells with a different shape. No other tested machine learning algorithm showed a performance higher than that of our system. The presented CNN system is expected to be useful for various single-cell experiments, and for high-throughput and high-content screening.
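The abstract does not spell out the "highest outputs" rule, but one common reading is a confidence gate: accept the CNN's top class only when its softmax output is high, and flag the rest for review. The sketch below is an assumed implementation of that idea, with illustrative thresholds and data, not the authors' code.

```python
import numpy as np

def classify_with_confidence(probs, threshold=0.9):
    """Gate CNN softmax outputs: accept the argmax class only when the
    highest output exceeds `threshold`; flag the rest for manual review.
    Returns (labels, accepted_mask); rejected wells get label -1."""
    probs = np.asarray(probs, dtype=float)
    labels = probs.argmax(axis=1)
    accepted = probs.max(axis=1) >= threshold
    labels = np.where(accepted, labels, -1)
    return labels, accepted

# Three wells: confident "1 cell", confident "0 cells", ambiguous
p = [[0.02, 0.95, 0.02, 0.01],
     [0.97, 0.01, 0.01, 0.01],
     [0.40, 0.35, 0.15, 0.10]]
labels, ok = classify_with_confidence(p)
# labels -> [1, 0, -1]; only the first two wells are auto-counted
```

Rejecting low-confidence wells trades coverage for accuracy, which matches the reported jump from 98.3% to over 99%.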
International Nuclear Information System (INIS)
Kreplin, Katharina
2015-01-01
This thesis presents a novel flavour tagging algorithm using machine learning techniques and a precision measurement of the B^0 - anti-B^0 oscillation frequency Δm_d using semileptonic B^0 decays. The LHC Run I data set is used, which corresponds to 3 fb^-1 of data taken by the LHCb experiment at a center-of-mass energy of 7 TeV and 8 TeV. The performance of flavour tagging algorithms, exploiting the b anti-b pair production and the b quark hadronization, is relatively low at the LHC due to the large amount of soft QCD background in inelastic proton-proton collisions. The standard approach is a cut-based selection of particles, whose charges are correlated to the production flavour of the B meson. The novel tagging algorithm classifies the particles using an artificial neural network (ANN). It assigns higher weights to particles which are likely to be correlated to the b flavour. A second ANN combines the particles with the highest weights to derive the tagging decision. An increase of the opposite-side kaon tagging performance of 50% and 30% is achieved on B^+ → J/ψ K^+ data. The second number corresponds to a readjustment of the algorithm to the B^0_s production topology. This algorithm is employed in the precision measurement of Δm_d. A data set of 3.2 x 10^6 semileptonic B^0 decays is analysed, where the B^0 decays into a D^-(K^+ π^- π^-) or D^*-(π^- anti-D^0(K^+ π^-)) and a μ^+ ν_μ pair. The ν_μ is not reconstructed; therefore, the B^0 momentum needs to be statistically corrected for the missing momentum of the neutrino to compute the correct B^0 decay time. A result of Δm_d = 0.503 ± 0.002 (stat.) ± 0.001 (syst.) ps^-1 is obtained. This is the world's best measurement of this quantity.
Cominelli, A.; Acconcia, G.; Caldi, F.; Peronio, P.; Ghioni, M.; Rech, I.
2018-02-01
Time-Correlated Single Photon Counting (TCSPC) is a powerful tool that makes it possible to record extremely fast optical signals with a precision down to a few picoseconds. On the other hand, it is recognized as a relatively slow technique, especially when a large time-resolved image is acquired exploiting a single acquisition channel and a scanning system. During the last years, much effort has been made towards the parallelization of many acquisition and conversion chains. In particular, the exploitation of Single-Photon Avalanche Diodes in standard CMOS technology has paved the way to the integration of thousands of independent channels on the same chip. Unfortunately, the presence of a large number of detectors can give rise to a huge rate of events, which can easily lead to saturation of the transfer rate toward the elaboration unit. As a result, a smart readout approach is needed to guarantee efficient exploitation of the limited transfer bandwidth. We recently introduced a novel readout architecture aimed at maximizing the counting efficiency of the system in typical TCSPC measurements. It features a limited number of high-performance converters, which are shared with a much larger array, while a smart routing logic provides dynamic multiplexing between the two parts. Here we propose a novel routing algorithm, which exploits standard digital gates distributed among a large 32 x 32 array to ensure a dynamic connection between detectors and external time-measurement circuits.
McTiernan, James M.; Caspi, Amir; Warren, Harry
2015-04-01
In the soft X-ray energy range, solar flare spectra are typically dominated by thermal emission. The low-energy extent of non-thermal emission can only be loosely quantified using currently available X-ray data. To address this issue, we combine observations from the EUV Variability Experiment (EVE) on board the Solar Dynamics Observatory (SDO) with X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The improvement over the isothermal approximation is intended to resolve the ambiguity in the range where the thermal and non-thermal components may have similar photon fluxes. This "crossover" range can extend up to 30 keV for medium to large solar flares. Previous work (Caspi et al. 2014ApJ...788L..31C) has concentrated on obtaining DEM models that fit both instruments' observations well. Now we are interested in any breaks and cutoffs in the "residual" non-thermal spectrum, i.e., the RHESSI spectrum that is left over after the DEM has accounted for the bulk of the soft X-ray emission. Thermal emission is again modeled using a DEM that is parametrized as multiple Gaussians in temperature; the non-thermal emission is modeled as a photon spectrum obtained using a thin-target emission model ('thin2' from the SolarSoft X-ray IDL package). Spectra from both instruments are fit simultaneously in a self-consistent manner. The results for non-thermal parameters are then compared with those found using RHESSI data alone, with isothermal and double-thermal models.
Dzifčáková, Elena; Zemanová, Alena; Dudík, Jaroslav; Mackovjak, Šimon
2018-02-01
Spectroscopic observations made by the Extreme Ultraviolet Variability Experiment (EVE) on board the Solar Dynamics Observatory (SDO) during the 2012 March 7 X5.4-class flare (SOL2012-03-07T00:07) are analyzed for signatures of the non-Maxwellian κ-distributions. Observed spectra were averaged over 1 minute to increase photon statistics in weaker lines and the pre-flare spectrum was subtracted. Synthetic line intensities for the κ-distributions are calculated using the KAPPA database. We find strong departures (κ ≲ 2) during the early and impulsive phases of the flare, with subsequent thermalization of the flare plasma during the gradual phase. If the temperatures are diagnosed from a single line ratio, the results are strongly dependent on the value of κ. For κ = 2, we find temperatures about a factor of two higher than the commonly used Maxwellian ones. The non-Maxwellian effects could also cause the temperatures diagnosed from line ratios and from the ratio of GOES X-ray channels to be different. Multithermal analysis reveals the plasma to be strongly multithermal at all times with flat DEMs. For lower κ, the DEM_κ are shifted toward higher temperatures. The only parameter that is nearly independent of κ is electron density, where we find log(n_e [cm^-3]) ≈ 11.5 almost independently of time. We conclude that the non-Maxwellian effects are important and should be taken into account when analyzing solar flare observations, including spectroscopic and imaging ones.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Directory of Open Access Journals (Sweden)
Vassal Aurélien
2008-01-01
Background: The huge amount of data generated by DNA chips is a powerful basis for classifying various pathologies. However, the constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction in limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with per-chip normalization. We developed a new class-prediction method based on the detection call only, from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results: After a call-based data-reduction step to filter out non-class-discriminative probe sets, the gene list obtained was reduced to a predictor, with correction for multiple testing, by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group, classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group, and (iii) predict sex and light- and heavy-chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion: This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with
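The overall scheme of the abstract, filtering probe sets on detection calls and estimating the error rate by leave-one-out cross-validation, can be sketched as follows. The probe-set filter and nearest-mean vote here are simplified stand-ins for the paper's iterative predictor; the toy data are invented for illustration.

```python
import numpy as np

def loo_error(calls, classes, n_keep=2):
    """Leave-one-out error rate of a detection-call-based class predictor.
    calls: samples x probe_sets matrix of Affymetrix detection calls
    (1 = 'present', 0 = 'absent'); classes: 0/1 labels."""
    calls = np.asarray(calls, dtype=float)
    classes = np.asarray(classes)
    errors = 0
    for i in range(len(classes)):
        train = np.arange(len(classes)) != i
        X, y = calls[train], classes[train]
        # keep the probe sets whose present-call rates differ most between classes
        score = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
        keep = np.argsort(score)[-n_keep:]
        # assign the left-out sample to the class with the nearest mean call profile
        d0 = np.abs(calls[i, keep] - X[y == 0][:, keep].mean(axis=0)).sum()
        d1 = np.abs(calls[i, keep] - X[y == 1][:, keep].mean(axis=0)).sum()
        errors += int((d1 < d0) != (classes[i] == 1))
    return errors / len(classes)

# Toy data: probe 0 marks class 0, probe 1 marks class 1, probe 2 is noise
calls = [[1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 1, 0]]
error = loo_error(calls, [0, 0, 1, 1])   # 0.0 on this separable toy set
```

The essential point, as in the abstract, is that the filter is refit inside each cross-validation fold so that the left-out sample never influences probe-set selection.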
Energy Technology Data Exchange (ETDEWEB)
Weir, V [Baylor Scott and White Healthcare System, Dallas, TX (United States); Zhang, J [University of Kentucky, Lexington, KY (United States)
2016-06-15
Purpose: Iterative reconstruction (IR) algorithms have been adopted by medical centers in the past several years. IR has the potential to substantially reduce patient dose while maintaining or improving image quality. This study characterizes dose reductions in clinical settings for CT examinations using IR. Methods: We retrospectively analyzed dose information from patients who underwent abdomen/pelvis CT examinations with and without contrast media in multiple locations of our healthcare system. A total of 743 patients scanned with ASIR on 64-slice GE LightSpeed VCTs at three sites, and 30 patients scanned with SAFIRE on a Siemens 128-slice Definition Flash at one site, were retrieved. For comparison, patient data (n=291) from a GE scanner and patient data (n=61) from two Siemens scanners where filtered back-projection (FBP) was used were collected retrospectively. 30% and 10% ASIR, and SAFIRE Level 2, were used. CTDIvol, dose-length product (DLP), weight, and height were recorded for all patients. Body mass index (BMI) was calculated accordingly. To convert CTDIvol to SSDE, the AP and lateral dimensions at the mid-liver level were measured for each patient. Results: Compared with FBP, 30% ASIR reduced dose by 44.1% (SSDE: 12.19 mGy vs. 21.83 mGy), while 10% ASIR reduced dose by 20.6% (SSDE: 17.32 mGy vs. 21.83 mGy). Use of SAFIRE reduced dose by 61.4% (SSDE: 8.77 mGy vs. 22.7 mGy). The geometric mean for patients scanned with ASIR was larger than for patients scanned with FBP (297.48 mm vs. 284.76 mm). The same trend was observed for the Siemens scanner where SAFIRE was used (geometric mean: 316 mm with SAFIRE vs. 239 mm with FBP). Patient size differences suggest that further dose reduction is possible. Conclusion: Our data confirmed that in clinical practice IR can significantly reduce dose to patients who undergo CT examinations, while meeting diagnostic requirements for image quality.
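The CTDIvol-to-SSDE conversion described in the Methods follows the approach of AAPM Report 204: form an effective diameter from the AP and lateral dimensions and scale CTDIvol by a size-dependent factor. The exponential fit below is an approximation to the Report 204 conversion factors for the 32-cm body phantom; treat the coefficients as approximate, and the example patient values as invented.

```python
import math

def effective_diameter_cm(ap_cm, lat_cm):
    """Effective diameter from AP and lateral dimensions (AAPM Report 204)."""
    return math.sqrt(ap_cm * lat_cm)

def ssde_mgy(ctdivol_mgy, ap_cm, lat_cm):
    """Size-specific dose estimate: CTDIvol scaled by a size-dependent factor.
    The fit approximates the 32-cm-phantom table of AAPM Report 204."""
    d = effective_diameter_cm(ap_cm, lat_cm)
    f = 3.704369 * math.exp(-0.03671937 * d)  # conversion factor, approximate
    return ctdivol_mgy * f

# Example: 25 cm AP x 35 cm lateral patient scanned at 15 mGy CTDIvol
d = effective_diameter_cm(25, 35)   # ~29.6 cm
dose = ssde_mgy(15, 25, 35)         # factor ~1.25 -> ~18.8 mGy
```

For a patient smaller than the 32-cm reference phantom the factor exceeds 1, so SSDE is higher than CTDIvol, which is why the study's size measurements matter for comparing protocols.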
Reagan, Adrian C; Mallinson, Paul I; O'Connell, Timothy; McLaughlin, Patrick D; Krauss, Bernhard; Munk, Peter L; Nicolaou, Savvas; Ouellette, Hugue A
2014-01-01
Computed tomography (CT) is often used to assess the presence of occult fractures when plain radiographs are equivocal in the acute traumatic setting. While providing increased spatial resolution, conventional computed tomography is limited in the assessment of bone marrow edema, a finding that is readily detectable on magnetic resonance imaging (MRI). Dual-energy CT has recently been shown to demonstrate patterns of bone marrow edema similar to corresponding MRI studies. Dual-energy CT may therefore provide a convenient modality for further characterizing acute bony injury when MRI is not readily available. We report our initial experiences of 4 cases with imaging and clinical correlation.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Directory of Open Access Journals (Sweden)
Wennan Liu
2013-03-01
Western knowledge about the injurious effects of cigarette smoking on smokers’ health appeared in the late nineteenth century and was shaped by both the Christian temperance movement and scientific developments in chemistry and physiology. Along with the increasing import of cigarettes into China, this new knowledge entered China through translations published at the turn of the twentieth century. It was reinterpreted and modified to dissuade the Chinese people from smoking cigarettes in two anti-cigarette campaigns: one launched by a former American missionary, Edward Thwing, in Tianjin, and a second by progressive social elites in Shanghai on the eve of the 1911 Revolution. By examining the rhetoric and practice of the campaigns, I argue that the discourse of hygiene they deployed moralized the individual habit of cigarette smoking as undermining national strength and endangering the future of the Chinese nation, thus helping to construct the idea of a nationalized body at this highly politically charged moment.
A surprise southern-hemisphere meteor shower on New Year's Eve 2015: the Volantids (IAU#758, VOL)
Jenniskens, P.; Baggaley, J.; Crumpton, I.; Aldous, P.; Gural, P. S.; Samuels, D.; Albers, J.; Soja, R.
2016-04-01
A new 32-camera CAMS network in New Zealand, spread over two stations on South Island, has detected a high-southern-declination shower that was active on New Year's Eve, 2015 December 31. During the observing interval from 09h12m - 15h45m UT, 21 out of 59 detected meteors radiated from the constellation of Volans, the flying fish, with a geocentric radiant at RA = 122.9 ± 4.7 deg, Dec = -71.9 ± 1.9 deg, and speed V_g = 28.4 ± 1.5 km/s. The new year arrived in New Zealand at 11h00m UT. Two more were detected the next night. No activity from this shower was observed the year prior. The meteoroids move in a 48 deg-inclined Jupiter-family comet orbit. The parent body has not yet been identified.
Salama, Amgad; Sun, Shuyu; Amin, Mohamed F. El
2015-01-01
In this work, the experimenting fields approach is applied to the numerical solution of the Navier-Stokes equations for incompressible viscous flow. The solution is sought for both the pressure and velocity fields at the same time. Clearly, the correct velocity and pressure fields satisfy the governing equations and the boundary conditions. In this technique, a set of predefined fields is introduced into the governing equations and the residues are calculated. The flow according to these fields will not satisfy the governing equations and the boundary conditions; however, the residues are used to construct the matrix of coefficients. Although in this setup constructing the global matrix of coefficients seems trivial, in other setups it can be quite involved. This technique separates the solver routine from the physics routines and therefore simplifies coding and debugging. We compare with a few examples that demonstrate the capability of this technique.
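The core idea, assembling the coefficient matrix from the residues of predefined fields, can be sketched on a toy problem. The 1-D Poisson equation below stands in for the Navier-Stokes system; the solver only ever calls the physics routine, never seeing the discretization itself. All names and the example problem are illustrative.

```python
import numpy as np

# -u'' = f on (0,1) with u(0) = u(1) = 0, discretized on interior nodes.
n, h = 5, 1.0 / 6          # number of interior unknowns, grid spacing
f = np.ones(n)             # right-hand side

def residual(u):
    """Physics routine: residual of the discrete equations for a trial field."""
    U = np.concatenate(([0.0], u, [0.0]))            # apply boundary conditions
    return (-U[:-2] + 2 * U[1:-1] - U[2:]) / h**2 - f

# Experimenting fields: probe the physics with the zero field and unit fields.
r0 = residual(np.zeros(n))                           # captures RHS and BCs
A = np.column_stack([residual(e) - r0 for e in np.eye(n)])  # column j = L e_j
u = np.linalg.solve(A, -r0)                          # solve A u = -r0
```

Each unit field's residue, with the zero-field residue subtracted, yields one column of the coefficient matrix, so the solver stays completely decoupled from the physics routine, which is the separation the abstract emphasizes.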
Directory of Open Access Journals (Sweden)
Vesna Vučinić-Nešković
2016-03-01
The domestic burning of Yule logs on Christmas Eve is an archaic tradition characteristic of the Christian population in the central Balkans. In the fifty years following World War Two, the socialist state suppressed these and other popular religious practices. However, ethnographic research in Serbia and Montenegro in the late 1980s showed that many village households nevertheless preserved their traditional Christmas rituals at home, in contrast to the larger towns, in which they were practically eradicated. Even in micro-regions such as the Bay of Kotor, there were observable differences between more secluded rural communities, in which the open hearth is still the ritual center of the house (on which the Yule logs are burned as many as seven times during the Christmas season), and the towns, in which only a few households continued with the rite (burning small logs in the wood-stove). In the early 1990s, however, a revival of domestic religious celebrations as well as their extension into the public realm occurred. This study shows how on Christmas Eve, houses and churchyards (as well as town squares) are being transformed into sacred places. By analyzing the temporal and spatial aspects of this ritual event, the roles that the key actors play, the actions they undertake and the artifacts they use, I attempt to demonstrate how the space of everyday life is transformed into a sacred home. In the end, the meanings and functions of homemaking are discussed in a way that confronts the classic distinction between private and public ritual environs.
Quick data-filtration algorithm in an experiment searching for μ→3e decay at the ARES spectrometer
International Nuclear Information System (INIS)
Evtukhovich, P.G.
1985-01-01
The objective of the paper is the development of a program operating in real-time mode. The program drastically reduces the initial information flow in the experiment searching for μ→3e decay, through quick identification of charged-particle tracks in the ARES spectrometer. The possibility of such identification is based on the comparison of the event track (after each unit start) with reference tracks. Among all wires that fired in the chambers, the program selects those which produce tracks similar to the reference ones. After the "sewing-in" procedure, which makes it possible to detect spiral tracks, the picture of the event is completely reconstructed and particles are selected using a particle-number criterion. Testing the program on simulated μ→3e events allows its speed and selection efficiency to be determined. The mean analysis time per event is about 5 ms; the selection efficiency is approximately 83% with a strict particle-number criterion.
Entekhabi, Dara; Njoku, Eni E.; O'Neill, Peggy E.; Kellogg, Kent H.; Entin, Jared K.
2010-01-01
Talk outline:
1. Derivation of SMAP basic and applied science requirements from the NRC Earth Science Decadal Survey applications
2. Data products and latencies
3. Algorithm highlights
4. SMAP Algorithm Testbed
5. SMAP Working Groups and community engagement
International Nuclear Information System (INIS)
Song, P.M.; Youssef, M.Z.; Abdou, M.A.
1993-01-01
A new approach for treating the sensitivity and uncertainty in the secondary energy distribution (SED) and the secondary angular distribution (SAD) has been developed, and the existing two-dimensional sensitivity/uncertainty analysis code, FORSS, was expanded to incorporate the new approach. The calculational algorithm was applied to the 9Be(n,2n) cross section to study the effect of the current uncertainties in the SED and SAD of neutrons emitted from this reaction on the prediction accuracy of the tritium production rate from 6Li (T6) and 7Li (T7) in an engineering-oriented fusion integral experiment of the US Department of Energy/Japan Atomic Energy Research Institute Collaborative Program on Fusion Neutronics in which beryllium was used as a neutron multiplier. In addition, the analysis was extended to include the uncertainties in the integrated smooth cross sections of beryllium and other materials that constituted the test assembly used in the experiment. This comprehensive two-dimensional cross-section sensitivity/uncertainty analysis aimed at identifying the sources of discrepancies between calculated and measured values for T6 and T7
Inclusive Flavour Tagging Algorithm
International Nuclear Information System (INIS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-01-01
Identifying the flavour of neutral B mesons at production is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)
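The combination step in tagging algorithms of this kind can be illustrated with the standard naive-Bayes formula for merging per-track flavour probabilities. This is a simplified stand-in for the machine-learned combination described in the abstract, not the paper's model; the probabilities in the example are invented.

```python
from math import prod

def combine_tags(track_probs):
    """Naive-Bayes combination of per-track flavour probabilities.
    track_probs: p_i = probability that track i indicates a b quark.
    Returns the combined probability that the signal B contained a b."""
    p_b = prod(track_probs)
    p_bbar = prod(1.0 - p for p in track_probs)
    return p_b / (p_b + p_bbar)

p = combine_tags([0.7, 0.6, 0.55])
decision = "b" if p > 0.5 else "anti-b"
# p ≈ 0.81: three weak same-direction indications combine into a firmer tag
```

This shows why combining many weakly informative tracks, rather than cutting down to a single "best" particle, can raise the overall tagging power.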
Directory of Open Access Journals (Sweden)
Dazhi Jiang
2015-01-01
At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
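A minimal instance of letting the algorithm adapt its own operator, rather than the paper's full operator-space search, is self-adaptive step-size control: each accepted candidate carries the mutation strength that produced it, so the mutation operator evolves alongside the solutions. The sketch below is an illustrative (1+1) evolution strategy under that assumption, with arbitrary parameter choices.

```python
import random, math

def self_adaptive_es(f, dim=5, iters=500, seed=0):
    """(1+1)-ES minimizing f, with self-adaptive mutation strength sigma.
    The operator parameter sigma is itself mutated (log-normally) and is
    kept only when the solution it produced is kept."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma = 1.0
    fx = f(x)
    for _ in range(iters):
        s = sigma * math.exp(0.2 * rng.gauss(0, 1))   # mutate the operator
        y = [xi + s * rng.gauss(0, 1) for xi in x]    # mutate the solution
        fy = f(y)
        if fy <= fx:                                  # elitist acceptance
            x, fx, sigma = y, fy, s
    return x, fx

sphere = lambda v: sum(t * t for t in v)
best, val = self_adaptive_es(sphere)
```

Because sigma is only updated when its offspring survives, step sizes that suit the current region of the search space are selected for automatically, a small-scale analogue of evolving operators to fit the problem.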
Petersen, Walter A.; Jensen, Michael P.
2011-01-01
The joint NASA Global Precipitation Measurement (GPM) -- DOE Atmospheric Radiation Measurement (ARM) Midlatitude Continental Convective Clouds Experiment (MC3E) was conducted from April 22-June 6, 2011, centered on the DOE-ARM Southern Great Plains Central Facility site in northern Oklahoma. GPM field campaign objectives focused on the collection of airborne and ground-based measurements of warm-season continental precipitation processes to support refinement of GPM retrieval algorithm physics over land, and to improve the fidelity of coupled cloud resolving and land-surface satellite simulator models. DOE ARM objectives were synergistically focused on relating observations of cloud microphysics and the surrounding environment to feedbacks on convective system dynamics, an effort driven by the need to better represent those interactions in numerical modeling frameworks. More specific topics addressed by MC3E include ice processes and ice characteristics as coupled to precipitation at the surface and radiometer signals measured in space, the correlation properties of rainfall and drop size distributions and impacts on dual-frequency radar retrieval algorithms, the transition of cloud water to rain water (e.g., autoconversion processes) and the vertical distribution of cloud water in precipitating clouds, and vertical draft structure statistics in cumulus convection. The MC3E observational strategy relied on NASA ER-2 high-altitude airborne multi-frequency radar (HIWRAP Ka-Ku band) and radiometer (AMPR, CoSMIR; 10-183 GHz) sampling (a GPM "proxy") over an atmospheric column being simultaneously profiled in situ by the University of North Dakota Citation microphysics aircraft, an array of ground-based multi-frequency scanning polarimetric radars (DOE Ka-W, X and C-band; NASA D3R Ka-Ku and NPOL S-bands) and wind-profilers (S/UHF bands), supported by a dense network of over 20 disdrometers and rain gauges, all nested in the coverage of a six-station mesoscale rawinsonde
International Nuclear Information System (INIS)
Dinev, D.
1996-01-01
Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
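A hedged sketch of the random-search sorting idea: propose random pairwise swaps of magnet positions and keep only those that lower a goal function. The goal function below (worst running sum of field errors) is a simple stand-in for the phase-space smear used in the paper; all names are illustrative.

```python
import random

def random_swap_sort(magnet_errors, goal, iters=2000, seed=1):
    """Random-search sorting: propose pairwise swaps of magnet positions
    and keep a swap only if it lowers the goal function."""
    rng = random.Random(seed)
    order = magnet_errors[:]
    best = goal(order)
    for _ in range(iters):
        i, j = rng.randrange(len(order)), rng.randrange(len(order))
        order[i], order[j] = order[j], order[i]
        g = goal(order)
        if g < best:
            best = g
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
    return order, best

def running_sum_goal(errs):
    """Stand-in goal: worst accumulated field error along the ring."""
    total, worst = 0.0, 0.0
    for e in errs:
        total += e
        worst = max(worst, abs(total))
    return worst

errors = [0.8, -0.5, 0.3, -0.9, 0.6, -0.2, 0.4, -0.5]
sorted_order, smear = random_swap_sort(errors, running_sum_goal)
```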
Shelepov, A M; Ishutin, O S; Leonik, S I
2011-06-01
This article evaluates the military and political situation in the world and the operational-strategic environment in the Western Theater of operations on the eve of the Great Patriotic War (1941-1945). We analyze the structure and overall condition of the sanitary service of the Western Special Military District of the Workers' and Peasants' Red Army, and the causes of the failure of the mobilization, organization and deployment of military units and establishments from the beginning of the aggression of Fascist Germany against the Soviet Union.
Composite Differential Search Algorithm
Directory of Open Access Journals (Sweden)
Bo Liu
2014-01-01
Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithms perform better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the new search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.
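The "DS/rand/1"-style trial generation can be sketched by borrowing differential-evolution notation (trial = x_r1 + F·(x_r2 − x_r3) with distinct random indices). This is an illustrative reading of the scheme's name, not the authors' exact implementation.

```python
import random

def ds_rand_1(pop, scale, rng):
    """Generate one trial per individual with a 'rand/1'-style move:
    trial_i = x_r1 + scale * (x_r2 - x_r3), r1, r2, r3 distinct and != i."""
    n = len(pop)
    trials = []
    for i in range(n):
        r1, r2, r3 = rng.sample([k for k in range(n) if k != i], 3)
        trials.append([a + scale * (b - c)
                       for a, b, c in zip(pop[r1], pop[r2], pop[r3])])
    return trials

rng = random.Random(42)
pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(10)]
trials = ds_rand_1(pop, scale=0.8, rng=rng)
```

The "rand/2" and "current to rand" variants differ only in which base vector and how many difference vectors enter the sum.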
Nagata, Kosei; Yamamoto, Shinichi; Miyoshi, Kota; Sato, Masaki; Arino, Yusuke; Mikami, Yoji
2016-08-01
Eosinophilic granulomatosis with polyangiitis (EGPA, Churg-Strauss syndrome) is a rare systemic vasculitis and is difficult to diagnose. EGPA has a number of symptoms including peripheral dysesthesia caused by mononeuropathy multiplex, which is similar to radiculopathy due to lumbar disc hernia or lumbar spinal stenosis. Therefore, EGPA patients with mononeuropathy multiplex often visit orthopedic clinics, but orthopedic doctors and spine neurosurgeons have limited experience in diagnosing EGPA because of its rarity. We report a consecutive series of patients who were initially diagnosed as having lumbar disc hernia or lumbar spinal stenosis by at least 2 medical institutions from March 2006 to April 2013 but whose final diagnosis was EGPA. All patients had past histories of asthma or eosinophilic pneumonia, and four out of five had peripheral edema. Laboratory data showed abnormally increased eosinophil counts, and nerve conduction studies of all patients revealed axonal damage patterns. All patients recovered from paralysis to a functional level after high-dose steroid treatment. We shortened the duration of diagnosis from 49 days to one day by adopting a diagnostic algorithm after experiencing the first case.
Arpinar, V E; Hamamura, M J; Degirmenci, E; Muftuler, L T
2012-07-07
Magnetic resonance electrical impedance tomography (MREIT) is a technique that produces images of conductivity in tissues and phantoms. In this technique, electrical currents are applied to an object, the resulting magnetic flux density is measured using magnetic resonance imaging (MRI), and the conductivity distribution is reconstructed from these MRI data. Currently, the technique is used in research environments, primarily studying phantoms and animals. In order to translate MREIT to clinical applications, strict safety standards need to be established, especially for safe current limits. However, there are currently no standards for safe current limits specific to MREIT. Until such standards are established, human MREIT applications need to conform to existing electrical safety standards in medical instrumentation, such as IEC601. This protocol limits patient auxiliary currents to 100 µA for low frequencies. However, published MREIT studies have utilized currents 10-400 times larger than this limit, bringing into question whether the clinical applications of MREIT are attainable under current standards. In this study, we investigated the feasibility of MREIT to accurately reconstruct the relative conductivity of a simple agarose phantom using 200 µA total injected current and tested the performance of two MREIT reconstruction algorithms. The reconstruction algorithms used are the iterative sensitivity matrix method (SMM) by Ider and Birgul (1998 Elektrik 6 215-25) with Tikhonov regularization and the harmonic B_z method proposed by Oh et al (2003 Magn. Reson. Med. 50 875-8). The reconstruction techniques were tested at both 200 µA and 5 mA injected currents to investigate their noise sensitivity at low and high current conditions. It should be noted that 200 µA total injected current into a cylindrical phantom generates only 14.7 µA in the imaging slice. Similarly, 5 mA total injected current results in 367 µA in the imaging slice. Total
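A sensitivity-matrix reconstruction with Tikhonov regularization boils down to a regularized least-squares solve. A minimal sketch, assuming a generic linear model A x ≈ b; the actual MREIT sensitivity matrix and measured flux-density data are far more involved.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the normal equations:
    (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# ill-conditioned toy sensitivity matrix with noisy synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10)) @ np.diag(np.logspace(0, -6, 10))
x_true = np.ones(10)
b = A @ x_true + 1e-4 * rng.standard_normal(40)
x_reg = tikhonov_solve(A, b, lam=1e-6)
```

The regularization parameter `lam` trades noise amplification against bias, which is what makes the iterative SMM usable at low injected currents where the signal-to-noise ratio is poor.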
Halyo, Nesim; Choi, Sang H.
1987-01-01
Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants necessary for the conversion algorithms, though the frequency with which these updates were needed was uncertain. This analysis therefore develops mathematical models for the conversion of irradiance at the sensor field of view (FOV) limiter into data counts, derives from this model two algorithms for the conversion of data counts to irradiance at the sensor FOV aperture and develops measurement models which account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen since it provided sufficient accuracy using simpler computations.
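A gain/offset conversion of the kind described can be sketched as a two-point calibration followed by a linear count-to-irradiance mapping. The reference values below are made up for illustration; the actual M/WFOV calibration constants come from the dynamic sensor model.

```python
def calibrate_gain_offset(counts_ref, irradiance_ref):
    """Fit counts = gain * irradiance + offset from two reference points
    (e.g. a space view and a known calibration source)."""
    (c1, c2), (e1, e2) = counts_ref, irradiance_ref
    gain = (c2 - c1) / (e2 - e1)
    offset = c1 - gain * e1
    return gain, offset

def counts_to_irradiance(counts, gain, offset):
    """Invert the linear sensor model: data counts -> irradiance."""
    return (counts - offset) / gain

# hypothetical calibration: 120 counts at zero irradiance, 870 at 500 W/m^2
gain, offset = calibrate_gain_offset((120.0, 870.0), (0.0, 500.0))
e = counts_to_irradiance(495.0, gain, offset)  # -> 250.0 W/m^2
```

A Kalman-filter variant would instead update `gain` and `offset` recursively as new calibration looks arrive, at the cost of the extra computation the authors note.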
Fluid structure coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D.
Children’s Day-Care Centre (EVE) and School kicked off the school year 2016-2017
Staff Association
2016-01-01
It has been 54 years, ever since the Nursery School was founded in March 1961, that the Staff Association, together with the teachers and the managerial and administrative staff, has welcomed your children at the start of the school year. On Tuesday, 30 August 2016, the Children’s Day-Care Centre (EVE) and School opened its doors again for children between four months and six years old. The start of the school year was carried out gradually and in small groups to allow quality interaction between children, professionals and parents. This year, our structure will accommodate about 130 children divided between the nursery, the kindergarten and the school. Throughout the school year, the children will work on the theme of colours, which will be the common thread linking all our activities. Our team comprises 38 people: the headmistress, the deputy headmistress, 2 secretaries, 13 educators, 4 teachers, 11 teaching assistants, 2 nursery assistants and 4 canteen workers. The team is delighted...
Poster — Thur Eve — 71: A 4D Multimodal Lung Phantom for Regmentation Evaluation
International Nuclear Information System (INIS)
Markel, D; Levesque, I R; El Naqa, I
2014-01-01
Segmentation and registration of medical imaging data are two processes that can be integrated (a process termed regmentation) to iteratively reinforce each other, potentially improving efficiency and overall accuracy. A significant challenge is presented when attempting to validate the joint process, particularly with regard to minimizing geometric uncertainties associated with the ground truth while maintaining anatomical realism. This work demonstrates a 4D MRI, PET, and CT compatible tissue phantom with a known ground truth for evaluating registration and segmentation accuracy. The phantom consists of a preserved swine lung connected to an air pump via a PVC tube for inflation. Mock tumors were constructed from sea sponges contained within two vacuum-sealed compartments, with catheters running into each one for injection of radiotracer solution. The phantom was scanned using a GE Discovery-ST PET/CT scanner and a 0.23T Philips MRI, and resulted in anatomically realistic images. A bifurcation tracking algorithm was implemented to provide a ground truth for evaluating registration accuracy. This algorithm was validated using known deformations of up to 7.8 cm using a separate CT scan of a human thorax. Using the known deformation vectors to compare against, 76 bifurcation points were selected. The tracking accuracy was found to have maximum mean errors of −0.94, 0.79 and −0.57 voxels in the left-right, anterior-posterior and inferior-superior directions, respectively. A pneumatic control system is under development to match the respiratory profile of the lungs to a breathing trace from an individual patient.
Unsupervised Classification Using Immune Algorithm
Al-Muallim, M. T.; El-Kouatly, R.
2012-01-01
An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The proposed algorithm is data driven and self-adaptive: it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...
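For reference, the K-means baseline that UCSC is compared against can be written in a few lines (plain Lloyd's algorithm on 2-D points; this is the comparison method, not the proposed UCSC algorithm).

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means on 2-D points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # update step: move each center to its cluster mean
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

pts = [(0.1, 0.0), (0.0, 0.2), (5.0, 5.1), (5.2, 4.9)]
centers = kmeans(pts, k=2)
```

Unlike UCSC, K-means needs `k` specified up front and is sensitive to initialization, which is precisely the kind of parameter-tuning burden a self-adaptive method aims to remove.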
International Nuclear Information System (INIS)
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updates promise to reduce this growth to V^{4/3}.
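The accept/reject logic common to these Monte Carlo updates can be illustrated with a minimal Metropolis sweep on a toy one-dimensional XY-like chain; a real lattice gauge simulation replaces this energy with a gauge action, but the accept/reject step is the same in spirit. All parameter values are illustrative.

```python
import math, random

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep for a periodic 1-D chain of angles with energy
    E = -sum_i cos(theta_i - theta_{i+1})."""
    n = len(spins)
    for i in range(n):
        old = spins[i]
        new = old + rng.uniform(-0.5, 0.5)  # local trial update
        # energy change from the two bonds touching site i
        dE = 0.0
        for j in (i - 1, (i + 1) % n):
            dE += math.cos(old - spins[j]) - math.cos(new - spins[j])
        # accept/reject: downhill always, uphill with prob exp(-beta*dE)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = new
    return spins

rng = random.Random(7)
spins = [rng.uniform(-math.pi, math.pi) for _ in range(32)]
for _ in range(200):
    metropolis_sweep(spins, beta=2.0, rng=rng)
```

The global accept/reject stages mentioned in the abstract apply the same Boltzmann test, but to a whole-configuration proposal generated by a Langevin or microcanonical trajectory.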
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Krocker, Georg Alexander
This thesis presents a so-called same side kaon tagging algorithm, which is used in the determination of the production flavour of $B_s^0$ mesons. The measurement of the $B_s^0$--$\bar{B}_s^0$ oscillation frequency $\Delta m_s$ in the decay $B_s^0\rightarrow D_s^- \pi^+$ is used to optimise and calibrate this algorithm. The presented studies are performed on a data set corresponding to an integrated luminosity of $\mathcal{L} = 1.0 fb^{-1}$ collected by the LHCb experiment in 2011. The same side kaon tagging algorithm, based on multivariate classifiers, is developed, calibrated and tested using a sample of about $26,000$ reconstructed $B_s^0\rightarrow D_s^- \pi^+$ decays. An effective tagging power of $\varepsilon_{\rm eff}=\varepsilon_{\rm tag}(1-2\omega)^2=2.42\pm0.39\%$ is achieved. Combining the same side kaon tagging algorithm with additional flavour tagging algorithms results in a combined tagging performance of $\varepsilon_{\rm eff}=\varepsilon_{\rm tag}(1-2\omega)^2= 5.13\pm0.54\%$. With this combinat...
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
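The kinematic condition of steering can be sketched in a bicycle-model approximation: each axle's heading is set perpendicular to the line joining it to the desired centre of rotation, assumed here to lie abeam the centre of gravity at distance R. Function and parameter names are illustrative, not from the paper.

```python
import math

def four_ws_angles(R, a, b):
    """Kinematic 4WS sketch: steering angles that place the kinematic
    centre of rotation abeam the CG at lateral distance R.
    a, b: distances from the CG to the front and rear axles."""
    delta_f = math.atan2(a, R)   # front wheels steer toward the centre
    delta_r = -math.atan2(b, R)  # rear wheels counter-steer
    return delta_f, delta_r

# a 20 m turn radius with a CG 1.2 m behind the front axle, 1.4 m ahead of the rear
df, dr = four_ws_angles(R=20.0, a=1.2, b=1.4)
```

Each wheel's velocity is then perpendicular to its radius from the turning centre, which is exactly the kinematic condition the algorithm enforces before the dynamic correction takes over at speed.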
Royston, Christine; Bardhan, Karna D
2017-06-01
We present demographic differences across the gastro-oesophageal reflux disease (GORD) spectrum in a UK District General Hospital. Data were prospectively collected over 37 years. At endoscopy patients were categorized as: erosive oesophagitis (EO), Barrett's oesophagus (BO) or nonerosive reflux disease (NER). Analysis 1: comparison of EO, BO and NER 1977-2001 when the database for GORD without BO closed. Analysis 2: demographic differences in oesophageal adenocarcinoma (OAC) in total BO population diagnosed 1977-2011. GORD 1977-2001 (n=11 944): sex, male predominance in EO and BO but not NER; male : female ratios, 1.81, 1.65, 0.87, respectively (P<0.0001); mean age at presentation, EO 54 years, BO 62 years, NER 50 years; women were older than men by 10, 7 and 6 years, respectively.BO 1977-2011: prevalent OAC, 87/1468 (6%); male : female ratio, 4.1 (P<0.0001); incident OAC, 54/1381 (3.9%); male : female ratio, 3.5 (P<0.0001). Among all BO, more men developed OAC (3 vs. 0.9%). Within each sex, proportion of OAC higher among men (4.9 vs. 2.3%); at OAC diagnosis women were slightly but not significantly older (69.9 vs. 72.3 years, P=0.322). Two views may explain our findings. First, women have either milder reflux, or reduced mucosal sensitivity hence reflux remains silent for longer. Alternatively, women genuinely develop reflux later, that is, are more protected and for longer from developing GORD and its complications. Early evidence is emerging that female sex hormones may indeed have a protective role in GORD during the reproductive period. We suggest reflux and its consequences may be an example of 'protection' conferred on Eve.
Black Edens, country Eves: Listening, performance, and black queer longing in country music.
Royster, Francesca T
2017-07-03
This article explores Black queer country music listening, performance, and fandom as a source of pleasure, nostalgia, and longing for Black listeners. Country music can be a space for alliance and community, as well as a way of accessing sometimes repressed cultural and personal histories of violence: lynching and other forms of racial terror, gender surveillance and disciplining, and continued racial and economic segregation. For many Black country music listeners and performers, the experience of being a closeted fan also fosters an experience of ideological hailing, as well as queer world-making. Royster suggests that through Black queer country music fandom and performance, fans construct risky and soulful identities. The article uses Tina Turner's solo album, Tina Turns the Country On! (1974) as an example of country music's power as a tool for resistance to racial, sexual, and class disciplining.
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
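One crude way to operationalize the quote volatility ratio is to count direction reversals among successive best-quote changes in a short window; the authors' exact definition may differ, so treat this as an illustrative proxy only.

```python
def quote_volatility_ratio(best_asks):
    """Fraction of successive quote changes that reverse direction --
    a rough proxy for oscillation of the best ask over a short window."""
    changes = [b - a for a, b in zip(best_asks, best_asks[1:]) if b != a]
    if len(changes) < 2:
        return 0.0
    reversals = sum(1 for c1, c2 in zip(changes, changes[1:]) if c1 * c2 < 0)
    return reversals / (len(changes) - 1)

# a flickering quote (typical of rapid algorithmic requoting) vs. a drifting one
flickering = [100.00, 100.01, 100.00, 100.01, 100.00, 100.01]
drifting = [100.00, 100.01, 100.02, 100.03, 100.04, 100.05]
r1 = quote_volatility_ratio(flickering)  # high: every change reverses
r2 = quote_volatility_ratio(drifting)    # low: monotone price momentum
```

A price momentum ratio would look at the complementary pattern: long monotone runs of changes rather than reversals.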
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Optimal Fungal Space Searching Algorithms.
Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V
2016-10-01
Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
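The uninformed baseline mentioned above, Depth-First Search, is easy to state on a grid maze (0 = open cell, 1 = wall); this generic sketch is not the fungal algorithm itself.

```python
def dfs_path(maze, start, goal):
    """Depth-first search on a grid maze; returns a path of (row, col)
    cells from start to goal, or None if the goal is unreachable."""
    rows, cols = len(maze), len(maze[0])
    stack, seen = [(start, [start])], {start}
    while stack:
        (r, c), path = stack.pop()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append(((nr, nc), path + [(nr, nc)]))
    return None

maze = [[0, 0, 1],
        [1, 0, 1],
        [1, 0, 0]]
path = dfs_path(maze, (0, 0), (2, 2))
```

DFS explores blindly and may wander far from the goal, which is why informed searches such as A* (and, per the abstract, the fungal algorithm) beat it on larger mazes.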
Poster — Thur Eve — 49: Unexpected Output Drops: Pitted Blackholes in Tungsten
Energy Technology Data Exchange (ETDEWEB)
Hudson, A; Pierce, G [Tom Baker Cancer Centre, Calgary AB (Canada); University of Calgary, Department of Oncology, Calgary AB (Canada)
2014-08-15
During the daily measurement of the radiation output of a 6 MV beam on a Varian Trilogy linear accelerator, the output dropped by more than 2%, initiating a call to action by physics to determine the cause. Over the course of several weeks the issue was diagnosed as a defect in the target, resulting in a drop in output and an asymmetry of the beam. Steps were taken to return the machine to clinical service while parts were on order, while ensuring the safety of patient treatment. The target was replaced and the machine continues to operate as expected. A drop in output is a rarity, and a defect in the target possibly more so. This experience demonstrated the importance of routine QC measurement, and of recording and analyzing daily output and symmetry values. In addition, this event showcased the importance of a multi-disciplinary approach in a high-pressure situation to effectively troubleshoot unique events and ensure consistent, safe patient treatment.
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
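Column-packed triangular storage, which gives the n(n+1)/2 footprint mentioned above, can be illustrated as follows (a generic sketch of LAPACK's 'L'-packed order, not the authors' block hybrid format or kernel).

```python
def pack_lower(A):
    """Pack the lower triangle of an n x n matrix by columns into a
    length n(n+1)/2 list (LAPACK 'L'-packed order)."""
    n = len(A)
    return [A[i][j] for j in range(n) for i in range(j, n)]

def packed_index(i, j, n):
    """Offset of element (i, j), with i >= j, in the packed array:
    columns 0..j-1 occupy n + (n-1) + ... + (n-j+1) slots."""
    return j * n - j * (j - 1) // 2 + (i - j)

A = [[1, 0, 0],
     [2, 4, 0],
     [3, 5, 6]]
ap = pack_lower(A)
```

The block hybrid format of the paper rearranges this packed data once more into cache-friendly square blocks, which is where its speed advantage over plain packed storage comes from.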
An algorithm for reduct cardinality minimization
AbouEisha, Hassan M.; Al Farhan, Mohammed; Chikalov, Igor; Moshkov, Mikhail
2013-12-01
This paper is devoted to a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.
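For intuition about reducts, here is a hedged baseline: greedily drop attributes while the remaining subset still determines the decision. This is only a simple consistency-based heuristic, not the paper's dynamic-programming algorithm, and it does not guarantee a minimum-cardinality reduct.

```python
def is_reduct(rows, attrs):
    """True if the attribute subset `attrs` still discerns every pair of
    rows that have different decisions (last column)."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if key in seen and seen[key] != row[-1]:
            return False
        seen[key] = row[-1]
    return True

def greedy_shorten(rows, attrs):
    """Greedily drop attributes while consistency is preserved."""
    attrs = list(attrs)
    for a in sorted(attrs):
        rest = [x for x in attrs if x != a]
        if rest and is_reduct(rows, rest):
            attrs = rest
    return attrs

# toy decision table: columns 0-2 are attributes, column 3 the decision
rows = [(0, 0, 1, 'n'), (0, 1, 0, 'y'), (1, 0, 1, 'y'), (1, 1, 0, 'n')]
reduct = greedy_shorten(rows, [0, 1, 2])
```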
Automatic Algorithm Selection for Complex Simulation Problems
Ewald, Roland
2012-01-01
To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge on simulation. Roland Ewald analyzes and discusses existing approaches to solve the algorithm selection problem in the context of simulation. He introduces a framework for automatic simulation algorithm selection and
International Nuclear Information System (INIS)
Sellitto, P.; Del Frate, F.
2014-01-01
Atmospheric temperature profiles are inferred from passive satellite instruments using thermal infrared or microwave observations. Here we investigate the feasibility of retrieving height-resolved temperature information in the ultraviolet spectral region. The temperature dependence of the absorption cross sections of ozone in the Huggins band, in particular in the interval 320–325 nm, is exploited. We carried out a sensitivity analysis and demonstrated that non-negligible information on the temperature profile can be extracted from this small band. Starting from these results, we developed a neural network inversion algorithm, trained and tested with simulated nadir EnviSat-SCIAMACHY ultraviolet observations. The algorithm is able to retrieve the temperature profile with root mean square errors and biases comparable to existing retrieval schemes that use thermal infrared or microwave observations. This demonstrates, for the first time, the feasibility of temperature profile retrieval from space-borne instruments operating in the ultraviolet. Highlights:
• A sensitivity analysis and an inversion scheme to retrieve temperature profiles from satellite UV observations (320–325 nm).
• The exploitation of the temperature dependence of the absorption cross section of ozone in the Huggins band is proposed.
• First demonstration of the feasibility of temperature profile retrieval from satellite UV observations.
• RMSEs and biases comparable with more established techniques involving TIR and MW observations.
International Nuclear Information System (INIS)
Chang, Jonghwa
2014-01-01
Today, we can use a computer cluster consisting of a few hundred CPUs on a reasonable budget. Such a computer system enables us to do detailed modeling of a reactor core. The detailed modeling will improve the safety and the economics of a nuclear reactor by eliminating unnecessary conservatism or missing considerations. To take advantage of such a cluster computer, efficient parallel algorithms must be developed. The mechanical structure analysis community has studied the domain decomposition method to solve the stress-strain equation using finite element methods. One of the most successful domain decomposition methods in terms of robustness is FETI-DP. We modified the original FETI-DP to solve the eigenvalue problem for the multi-group diffusion problem in a previous study. In this study, we report the results of recent modifications to handle three-dimensional subdomain partitioning and the sub-domain multi-group problem. The modified FETI-DP algorithm has been successfully applied to the eigenvalue problem of the multi-group neutron diffusion equation. The overall CPU time decreases as the number of sub-domains (partitions) increases. However, there may be a limit to this decrease, since an increase in the number of primal points will increase the CPU time spent on the solution of the global equation. Even distribution of the computational load (criterion a) is important to achieve fast computation. The subdomain partitioning can be performed effectively using a suitable graph partitioning package such as MeTIS
Energy Technology Data Exchange (ETDEWEB)
Chang, Jonghwa [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
Today, we can use a computer cluster consisting of a few hundred CPUs on a reasonable budget. Such a computer system enables us to do detailed modeling of a reactor core. The detailed modeling will improve the safety and the economics of a nuclear reactor by eliminating unnecessary conservatism or missing considerations. To take advantage of such a cluster computer, efficient parallel algorithms must be developed. The mechanical structure analysis community has studied the domain decomposition method to solve the stress-strain equation using finite element methods. One of the most successful domain decomposition methods in terms of robustness is FETI-DP. We modified the original FETI-DP to solve the eigenvalue problem for the multi-group diffusion problem in a previous study. In this study, we report the results of recent modifications to handle three-dimensional subdomain partitioning and the sub-domain multi-group problem. The modified FETI-DP algorithm has been successfully applied to the eigenvalue problem of the multi-group neutron diffusion equation. The overall CPU time decreases as the number of sub-domains (partitions) increases. However, there may be a limit to this decrease, since an increase in the number of primal points will increase the CPU time spent on the solution of the global equation. Even distribution of the computational load (criterion a) is important to achieve fast computation. The subdomain partitioning can be performed effectively using a suitable graph partitioning package such as MeTIS.
International Nuclear Information System (INIS)
Sellitto, P.; Di Noia, A.; Del Frate, F.; Burini, A.; Casadio, S.; Solimini, D.
2012-01-01
Theoretical evidence has been given on the role of visible (VIS) radiation in enhancing the accuracy of ozone retrievals from satellite data, especially in the troposphere. However, at present, VIS is not being systematically used together with ultraviolet (UV) measurements, even when possible with one single instrument, e.g., the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY). Reasons mainly reside in the defective performance of optimal estimation and regularization algorithms caused by inaccurate modeling of VIS interaction with aerosols or clouds, as well as in inconsistent intercalibration between UV and VIS measurements. Here we intend to discuss the role of VIS radiation when it feeds a retrieval algorithm based on Neural Networks (NNs) that does not need a forward radiative transfer model and is robust with respect to calibration errors. The NN we designed was trained with a set of ozonesondes (OSs) data and tested over an independent set of OS measurements. We compared the ozone concentration profiles retrieved from UV-only with those retrieved from UV plus VIS nadir data taken by SCIAMACHY. We found that VIS radiation was able to yield a more than 10% increase in accuracy and to substantially reduce biases of retrieved profiles at tropospheric levels.
García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón
2016-05-01
Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.
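The Gutenberg-Richter parameter tracking that an MRT-style method builds on can be illustrated with a small sketch. The snippet below uses the classical Aki maximum-likelihood b-value estimator over sliding windows and flags windows whose b-value drops relative to the long-term value; the window length, drop threshold, and flagging rule are illustrative assumptions, not the published MRT algorithm.

```python
import math
import random

def b_value(mags, mc):
    # Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter b-value
    # for magnitudes at or above the completeness magnitude mc
    above = [m for m in mags if m >= mc]
    return math.log10(math.e) / (sum(above) / len(above) - mc)

def flag_windows(mags, mc, window, drop=0.8):
    # Flag sliding windows whose b-value falls below `drop` times the
    # long-term b-value -- a crude stand-in for tracking Gutenberg-Richter
    # parameter fluctuations through time (hypothetical thresholds)
    b_ref = b_value(mags, mc)
    return [b_value(mags[s:s + window], mc) < drop * b_ref
            for s in range(len(mags) - window + 1)]

# synthetic catalog with true b = 1: M - mc is exponential with rate b*ln(10)
rng = random.Random(42)
mags = [1.5 + rng.expovariate(math.log(10)) for _ in range(5000)]
b_hat = b_value(mags, 1.5)
```

With 5000 synthetic events the estimate lands close to the true b = 1, and the sliding windows can then be scanned for anomalously low values.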
International Nuclear Information System (INIS)
Baumann, Stefan; Wang, Rui; Schoepf, U.J.; Steinberg, Daniel H.; Spearman, James V.; Bayer, Richard R.; Hamm, Christian W.; Renker, Matthias
2015-01-01
The present study aimed to determine the feasibility of a novel fractional flow reserve (FFR) algorithm based on coronary CT angiography (cCTA) that permits point-of-care assessment, without data transfer to core laboratories, for the evaluation of potentially ischemia-causing stenoses. To obtain CT-based FFR, anatomical coronary information and ventricular mass extracted from cCTA datasets were integrated with haemodynamic parameters. CT-based FFR was assessed for 36 coronary artery stenoses in 28 patients in a blinded fashion and compared to catheter-based FFR. Haemodynamically relevant stenoses were defined by an invasive FFR ≤0.80. Time was measured for the processing of each cCTA dataset and CT-based FFR computation. Assessment of cCTA image quality was performed using a 5-point scale. Mean total time for CT-based FFR determination was 51.9 ± 9.0 min. Per-vessel analysis for the identification of lesion-specific myocardial ischemia demonstrated good correlation (Pearson's product-moment r = 0.74, p < 0.0001) between the prototype CT-based FFR algorithm and invasive FFR. Subjective image quality analysis resulted in a median score of 4 (interquartile range, 3-4). Our initial data suggest that the CT-based FFR method for the detection of haemodynamically significant stenoses evaluated in the selected population correlates well with invasive FFR and renders time-efficient point-of-care assessment possible. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Chang, Jonghwa [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
Parallelization of Monte Carlo simulation is widely adopted. Several parallel algorithms have also been developed for SN transport theory, using the parallel wave sweeping algorithm, and for the CPM, using parallel ray tracing. For practical reactor physics applications, the thermal feedback and burnup effects on the multigroup cross sections should be considered. In this respect, the domain decomposition method (DDM) is suitable for distributing the expensive cross section calculation work. A parallel transport code and a diffusion code based on the Raviart-Thomas mixed finite element method were developed. However, most of the developed methods rely on the heuristic convergence of flux and current at the domain interfaces, and convergence was not attained in some cases. The mechanical stress computation community has also worked on the DDM to solve the stress-strain equation using finite element methods. The most successful domain decomposition method in terms of robustness is FETI-DP. In this study, we modified the original FETI-DP to solve the eigenvalue problem for the multigroup diffusion problem.
Disainikaart : Portugal / Eve Arpo
Arpo, Eve
2009-01-01
On Portuguese architects, designers, and design firms. The work of the architects Eduardo Souto de Moura (b. 1952) and Álvaro Joaquim de Melo Siza Vieira (b. 1933). Among the firms: Corque, Mytto, Pedroso & Osório, TemaHome, Mood, Munna, Mambo, Delightfull
Disainikaart : Itaalia / Eve Arpo
Arpo, Eve
2009-01-01
On Italian designers, design firms, and architects. Renzo Piano (1937), Aldo Rossi (1931-1997), Alessandro Mendini (1931), Giorgio de Chirico (1888-1978), Federico Fellini (1920-1993), Gaetano Pesce, Ferrari, Alessi, Dolce & Gabbana, Artemide
Arpo, Eve
2006-01-01
On the Vallimäe staircase in Rakvere, and a children's conversation on the stairs. Project: Kavakava. Authors: Heidi Urb, Siiri Vallner. Staircase formula: Taavi Vallner. Engineer: Marika Stokkeby. Designed 2004, completed 2005. Ill.: drawing, 7 colour photos
Disainikaart : Island / Eve Arpo
Arpo, Eve
2009-01-01
On Icelandic designers, design firms, and architects. Katrin Olina, Margrét Hardardóttir and Steve Christer (Studio Granda), Guðjón Samúelsson (1887-1950), Ingibjörg Hanna Bjarnadottir, Hrafnkell Birgisson, Studio Bility and others.
Disainikaart : Hispaania / Eve Arpo
Arpo, Eve
2009-01-01
Spain's best-known architects, designers, and artists. Antoni Gaudí (1852-1926), Pablo Picasso (1881-1973), Félix Candela (1910-1997), Marti Guixe, Santiago Calatrava, Herme Ciscar, Monica Garcia, Roger Arquer, Patricia Urquiola, Jaime Hayón
Chicago arhitektuuribiennaal / Eve Komp
Komp, Eve, 1982-
2015-01-01
The first Chicago Architecture Biennial, titled "The State of the Art of Architecture", ran from 3 October 2015 to 3 January 2016. The biennial aimed to offer an international architecture event to the local architecture community.
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
A compilation of jet finding algorithms
International Nuclear Information System (INIS)
Flaugher, B.; Meier, K.
1990-12-01
Technical descriptions of jet finding algorithms currently in use in p̄p collider experiments (CDF, UA1, UA2), e+e− experiments, and Monte Carlo event generators (LUND programs, ISAJET) have been collected. 20 refs
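As a rough illustration of the cone-type algorithms catalogued in such compilations, the sketch below implements a heavily simplified seeded fixed-cone jet finder. It is illustrative only: real CDF/UA1-style cone algorithms iterate the cone axis to stability and handle overlapping cones, and the particle format, cone radius, and seed threshold here are assumptions.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # angular distance in (eta, phi), with phi wrapped around 2*pi
    dphi = abs(phi1 - phi2)
    dphi = min(dphi, 2 * math.pi - dphi)
    return math.hypot(eta1 - eta2, dphi)

def cone_jets(particles, R=0.7, seed_pt=1.0):
    # particles: list of (pt, eta, phi) tuples
    remaining = sorted(particles, key=lambda p: -p[0])  # hardest first
    jets = []
    while remaining and remaining[0][0] >= seed_pt:
        seed = remaining[0]
        # collect everything within the cone around the seed direction
        cone = [p for p in remaining
                if delta_r(p[1], p[2], seed[1], seed[2]) < R]
        pt = sum(p[0] for p in cone)
        eta = sum(p[0] * p[1] for p in cone) / pt  # pt-weighted axis
        phi = sum(p[0] * p[2] for p in cone) / pt  # (no wrap handling here)
        jets.append((pt, eta, phi))
        remaining = [p for p in remaining if p not in cone]
    return jets
```

Two well-separated clusters of particles then come out as two jets whose transverse momenta are the cluster sums.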
Energy Technology Data Exchange (ETDEWEB)
Kreplin, Katharina
2015-06-10
This thesis presents a novel flavour tagging algorithm using machine learning techniques and a precision measurement of the B{sup 0}- anti B{sup 0} oscillation frequency Δm{sub d} using semileptonic B{sup 0} decays. The LHC Run I data set is used which corresponds to 3 fb{sup -1} of data taken by the LHCb experiment at a center-of-mass energy of 7 TeV and 8 TeV. The performance of flavour tagging algorithms, exploiting the b anti b pair production and the b quark hadronization, is relatively low at the LHC due to the large amount of soft QCD background in inelastic proton-proton collisions. The standard approach is a cut-based selection of particles, whose charges are correlated to the production flavour of the B meson. The novel tagging algorithm classifies the particles using an artificial neural network (ANN). It assigns higher weights to particles, which are likely to be correlated to the b flavour. A second ANN combines the particles with the highest weights to derive the tagging decision. An increase of the opposite side kaon tagging performance of 50% and 30% is achieved on B{sup +} → J/ψK{sup +} data. The second number corresponds to a readjustment of the algorithm to the B{sup 0}{sub s} production topology. This algorithm is employed in the precision measurement of Δm{sub d}. A data set of 3.2 x 10{sup 6} semileptonic B{sup 0} decays is analysed, where the B{sup 0} decays into a D{sup -}(K{sup +}π{sup -}π{sup -}) or D{sup *-} (π{sup -} anti D{sup 0}(K{sup +} π{sup -})) and a μ{sup +}ν{sub μ} pair. The ν{sub μ} is not reconstructed, therefore, the B{sup 0} momentum needs to be statistically corrected for the missing momentum of the neutrino to compute the correct B{sup 0} decay time. A result of Δm{sub d}=0.503±0.002(stat.)±0.001(syst.) ps{sup -1} is obtained. This is the world's best measurement of this quantity.
Fluid-structure-coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D
Rählmann, Sebastian; Meis, Markus; Schulte, Michael; Kießling, Jürgen; Walger, Martin; Meister, Hartmut
2017-04-27
Model-based hearing aid development considers the assessment of speech recognition using a master hearing aid (MHA). It is known that aided speech recognition in noise is related to cognitive factors such as working memory capacity (WMC). This relationship might be mediated by hearing aid experience (HAE). The aim of this study was to examine the relationship of WMC and speech recognition with an MHA for listeners with different HAE. Using the MHA, unaided and aided 80% speech recognition thresholds in noise were determined. Individual WMC was assessed using the Verbal Learning and Memory Test (VLMT) and the Reading Span Test (RST). Forty-nine hearing aid users with mild to moderate sensorineural hearing loss were divided into three groups differing in HAE. Whereas unaided speech recognition did not show a significant relationship with WMC, a significant correlation could be observed between WMC and aided speech recognition. However, this only applied to listeners with HAE of up to approximately three years, and a consistent weakening of the correlation could be observed with more experience. Speech recognition scores obtained in acute experiments with an MHA are less influenced by individual cognitive capacity when experienced HA users are taken into account.
Energy Technology Data Exchange (ETDEWEB)
Meyer, J.
2007-07-01
Several measurements of the top quark mass in the dilepton final states with the D0 experiment are presented. The theoretical and experimental properties of the top quark are described, together with a brief introduction to the Standard Model of particle physics and the physics of hadron collisions. An overview of the experimental setup is given. The Tevatron at Fermilab is presently the highest-energy hadron collider in the world, with a center-of-mass energy of 1.96 TeV. There are two main experiments, called CDF and D0. A description of the components of the multipurpose D0 detector is given. The reconstruction of simulated events and data events is explained, and the criteria for the identification of electrons, muons, jets, and missing transverse energy are given. The kinematics in the dilepton final state is underconstrained. Therefore, the top quark mass is extracted by the so-called Neutrino Weighting method. This method is introduced and several different approaches are described, compared, and enhanced. Results for the international summer conferences 2006 and winter 2007 are presented. The top quark mass measurement for the combination of all three dilepton channels with a dataset of 1.05 fb⁻¹ yields: mtop = 172.5 ± 5.5 (stat.) ± 5.8 (syst.) GeV. This result is presently the most precise top quark mass measurement of the D0 experiment in the dilepton channel. It entered the top quark mass world average of March 2007. (orig.)
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: randomized algorithms which are guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black box access to the algorithm. We show a necessary an...
Status of the DAMIC Direct Dark Matter Search Experiment
Energy Technology Data Exchange (ETDEWEB)
Aguilar-Arevalo, A.; et al.
2015-09-30
The DAMIC experiment uses fully depleted, high resistivity CCDs to search for dark matter particles. With an energy threshold of ~50 eV_ee, and excellent energy and spatial resolutions, the DAMIC CCDs are well-suited to identify and suppress radioactive backgrounds, having an unrivaled sensitivity to WIMPs with masses <6 GeV/c². Early results motivated the construction of a 100 g detector, DAMIC100, currently being installed at SNOLAB. This contribution discusses the installation progress, new calibration efforts near the threshold, a preliminary result with 2014 data, and the prospects for physics results after one year of data taking.
Ablikim, M.; Achasov, M.N.; Ahmed, S.; Albrecht, M.; Amoroso, A.; An, F. F.; An, Q.; Haddadi, Z.; Kalantar-Nayestanaki, N.; Kavatsyuk, M.; Messchendorp, J. G.; Tiemens, M.
2018-01-01
By analyzing 482 pb^-1 of e^+e^- collision data collected at the center-of-mass energy √s = 4.009 GeV with the BESIII detector, we measure the branching fractions for the semi-leptonic decays D_s^+ → φe^+ν_e, φμ^+ν_μ, ημ^+ν_μ and η'μ^+ν_μ to be B(D_s^+ → φ
Privacy preserving randomized gossip algorithms
Hanzely, Filip; Konečný , Jakub; Loizou, Nicolas; Richtarik, Peter; Grishchenko, Dmitry
2017-01-01
In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes. We give iteration complexity bounds for all methods, and perform extensive numerical experiments.
Privacy preserving randomized gossip algorithms
Hanzely, Filip
2017-06-23
In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes. We give iteration complexity bounds for all methods, and perform extensive numerical experiments.
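The basic mechanism underlying such methods, stripped of the privacy-preserving machinery that the abstracts do not spell out, is standard pairwise randomized gossip for average consensus. A minimal sketch, with the graph and step count chosen purely for illustration:

```python
import random

def randomized_gossip(values, edges, steps, seed=0):
    # At each step a uniformly random edge (i, j) is activated and both
    # endpoints replace their values with the pair average; the sum (and
    # hence the network average) is preserved at every step.
    x = [float(v) for v in values]
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

# ring of 10 nodes holding values 0..9; the true average is 4.5
n = 10
ring = [(i, (i + 1) % n) for i in range(n)]
x = randomized_gossip(range(n), ring, steps=3000)
```

On a connected graph every node's value converges to the initial average; the iteration complexity bounds mentioned in the abstract quantify how fast.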
FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS
Directory of Open Access Journals (Sweden)
Evans BAIDOO
2017-03-01
Modern real-world science and engineering problems can be classified as multi-objective optimisation problems, which demand expedient and efficient stochastic algorithms to respond to the optimization needs. This paper presents an object-oriented software application that implements a fireworks optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm produced promising results when compared to other population-based or iterative meta-heuristic algorithms in experiments on five standard benchmark problems. The software application was implemented in Java with an interactive interface which allows for easy modification and extended experimentation. Additionally, this paper validates the effect of runtime on the algorithm performance.
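The explosion metaphor can be sketched in a few dozen lines: each firework "explodes" into sparks, with an amplitude that shrinks for fitter fireworks (local exploitation) and grows for poorer ones (global exploration). This is a much-simplified illustration of the idea, not the published fireworks algorithm or the paper's Java application; all parameter choices and the amplitude rule are assumptions.

```python
import random

def sphere(x):
    # standard benchmark: minimum 0 at the origin
    return sum(v * v for v in x)

def fireworks_minimize(f, dim=2, n_fireworks=5, n_sparks=20,
                       bounds=(-5.0, 5.0), generations=60, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_fireworks)]
    best = min(pop, key=f)[:]
    for _ in range(generations):
        fits = [f(x) for x in pop]
        f_min, f_max = min(fits), max(fits)
        sparks = []
        for x, fit in zip(pop, fits):
            # better fireworks explode within a smaller radius
            amp = (hi - lo) * (fit - f_min) / (f_max - f_min + 1e-12)
            amp = max(amp, 1e-3)
            for _ in range(n_sparks):
                sparks.append([min(hi, max(lo, v + rng.uniform(-amp, amp)))
                               for v in x])
        candidates = sorted(pop + sparks, key=f)
        # keep the best candidate, fill the rest randomly for diversity
        pop = [candidates[0]] + rng.sample(candidates[1:], n_fireworks - 1)
        if f(pop[0]) < f(best):
            best = pop[0][:]
    return best, f(best)

best, best_val = fireworks_minimize(sphere)
```

Even this stripped-down version drives the sphere benchmark close to its optimum within a few dozen generations.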
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. Then these two algorithms are equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternative utilization of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches with probabilities determined self-adaptively according to fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. Especially, the proposed algorithms are very promising for complex problems with many local optima.
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself … of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops…
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30–60 in computing time and a factor of over 100 in matrix storage space.
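For contrast with the specialized BR method, the baseline QR idea it is compared against can be sketched in a few lines: repeatedly factor A = QR and reassemble as RQ, an orthogonal similarity transform that preserves eigenvalues and, for matrices with real eigenvalues of distinct magnitude, drives A toward triangular form. This is a toy pure-Python illustration of the unshifted QR iteration, not a production eigensolver (real implementations use shifts, deflation, and Hessenberg reduction).

```python
import math

def qr_decompose(A):
    # classical Gram-Schmidt QR for a small square matrix
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(n)]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def qr_eigenvalues(A, iters=300):
    # unshifted QR iteration: A_{k+1} = R_k Q_k is similar to A_k
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return sorted(A[i][i] for i in range(n))
```

On a small symmetric tridiagonal matrix the diagonal converges to the known eigenvalues.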
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
The global Minmax k-means algorithm.
Wang, Xiaoyan; Bai, Yanping
2016-01-01
The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes poor; after a bad initialization, a poor local optimum is easily reached by the k-means algorithm. In this paper, we first modified the global k-means algorithm to eliminate singleton clusters, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm, and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
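The Lloyd-style baseline that these global/MinMax variants build on can be sketched as follows, here paired with a farthest-point seeding that sidesteps the worst initializations. This is an illustrative sketch of plain k-means, not the paper's algorithm; the data, seeding rule, and iteration count are assumptions.

```python
import random

def dist2(p, q):
    # squared Euclidean distance
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = [list(rng.choice(points))]
    while len(centers) < k:
        # farthest-point seeding: spread initial centers apart
        far = max(points, key=lambda p: min(dist2(p, c) for c in centers))
        centers.append(list(far))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k),
                         key=lambda i: dist2(p, centers[i]))].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # leave an empty cluster's center where it was
                centers[i] = [sum(c) / len(cl) for c in zip(*cl)]
    return centers

# two well-separated synthetic blobs around (0, 0) and (10, 10)
rng = random.Random(7)
blob_a = [(rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5))
          for _ in range(20)]
blob_b = [(10 + rng.uniform(-0.5, 0.5), 10 + rng.uniform(-0.5, 0.5))
          for _ in range(20)]
centers = sorted(kmeans(blob_a + blob_b, k=2), key=lambda c: c[0])
```

On well-separated blobs the recovered centers land near the blob means; the paper's contribution addresses exactly the cases (bad initialization, singleton clusters) where this baseline fails.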
International Nuclear Information System (INIS)
Meyer, Joerg; Bonn U
2007-01-01
measurement of the top quark mass by the D0 experiment at Fermilab in the dilepton final states. The comparison of the measured top quark masses in different final states allows an important consistency check of the Standard Model. Inconsistent results would be a clear hint of a misinterpretation of the analyzed data set. With the exception of the Higgs boson, all particles predicted by the Standard Model have been found. The search for the Higgs boson is one of the main focuses in high energy physics. The theory section will discuss the close relationship between the physics of the Higgs boson and the top quark
Energy Technology Data Exchange (ETDEWEB)
Meyer, Joerg [Univ. of Bonn (Germany)
2007-01-01
measurement of the top quark mass by the D0 experiment at Fermilab in the dilepton final states. The comparison of the measured top quark masses in different final states allows an important consistency check of the Standard Model. Inconsistent results would be a clear hint of a misinterpretation of the analyzed data set. With the exception of the Higgs boson, all particles predicted by the Standard Model have been found. The search for the Higgs boson is one of the main focuses in high energy physics. The theory section will discuss the close relationship between the physics of the Higgs boson and the top quark.
Improved Collaborative Filtering Algorithm using Topic Model
Directory of Open Access Journals (Sweden)
Liu Na
2016-01-01
Collaborative filtering algorithms make use of interaction ratings between users and items to generate recommendations. Similarity among users or items is mostly calculated based on ratings, without considering explicit properties of the users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We describe the user-item matrix as a document-word matrix; users are represented as random mixtures over items, and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieved better performance compared with other state-of-the-art algorithms on the MovieLens data sets.
Eigenvalue Decomposition-Based Modified Newton Algorithm
Directory of Open Access Journals (Sweden)
Wen-jun Wang
2013-01-01
When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
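For a 2×2 symmetric Hessian the modification described above can be written out in closed form: eigen-decompose H, take absolute values of the eigenvalues, and rebuild the direction d = -V |Λ|⁻¹ Vᵀ g. This is an illustrative sketch of the idea, not the authors' implementation; the closed-form 2×2 eigendecomposition is used to keep it self-contained.

```python
import math

def modified_newton_direction(H, g):
    # H: symmetric 2x2 Hessian [[a, b], [b, c]], g: gradient (gx, gy)
    a, b, c = H[0][0], H[0][1], H[1][1]
    # closed-form eigenvalues of a symmetric 2x2 matrix
    mean = (a + c) / 2.0
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lams = [mean + disc, mean - disc]
    # matching unit eigenvectors
    vecs = []
    for lam in lams:
        if abs(b) > 1e-12:
            v = (lam - c, b)
        elif a >= c:
            v = (1.0, 0.0) if lam == lams[0] else (0.0, 1.0)
        else:
            v = (0.0, 1.0) if lam == lams[0] else (1.0, 0.0)
        norm = math.hypot(*v)
        vecs.append((v[0] / norm, v[1] / norm))
    # d = -V |Lambda|^{-1} V^T g: negative eigenvalues made positive
    d = [0.0, 0.0]
    for lam, (vx, vy) in zip(lams, vecs):
        coef = (vx * g[0] + vy * g[1]) / abs(lam)
        d[0] -= coef * vx
        d[1] -= coef * vy
    return d
```

With an indefinite Hessian such as diag(2, -3), the plain Newton direction would ascend along the second coordinate, while the modified direction descends in both.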
Tugnoli, Gregorio; Bianchi, Elisa; Biscardi, Andrea; Coniglio, Carlo; Isceri, Salvatore; Simonetti, Luigi; Gordini, Giovanni; Di Saverio, Salomone
2015-10-01
Non-operative management (NOM) of hemodynamically stable patients with blunt splenic injury (BSI) is the standard of care, although it is associated with a potential risk of failure. Hemodynamically unstable patients should always undergo immediate surgery and avoid unnecessary CT scans. Angioembolization might help to increase the NOM rates, as well as the NOM success rates. The aim of this study was to review and critically analyze the data from BSI cases managed at the Maggiore Hospital Trauma Center during the past 5 years, with a focus on NOM, its success rates, and outcomes. A further aim was to develop a proposed practical clinical algorithm for the management of BSI derived from clinical audit experience. During the period between January 1, 2009 and December 31, 2013, we managed 293 patients with splenic lesions at the Trauma Center of Maggiore Hospital of Bologna. The data analyzed included the demographics, clinical parameters and characteristics, diagnostic and therapeutic data, as well as the outcomes and follow-up data. A retrospective evaluation of the clinical outcomes through a clinical audit has been used to design a practical clinical algorithm. During the five-year period, 293 patients with BSI were admitted, 77 of whom underwent immediate surgical management. The majority (216) of the patients were initially managed non-operatively, and 207 of these patients experienced a successful NOM, with an overall rate of successful NOM of 70 % among all BSI cases. The success rate of NOM was 95.8 % in this series. All patients presenting with stable hemodynamics underwent an immediate CT scan; angiography with embolization was performed in 54 cases for active contrast extravasation or in cases with grade V lesions even in the absence of active bleeding. Proximal embolization was preferentially used for high-grade injuries. After a critical review of the cases treated during the past 5 years during a monthly clinical audit meeting, a clinical algorithm has been
Universal algorithm of time sharing
International Nuclear Information System (INIS)
Silin, I.N.; Fedyun'kin, E.D.
1979-01-01
A timesharing algorithm is proposed for a wide class of one- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive-job quantum has variable length. A recurrence formula for the characteristic is derived. The concept of a background job is introduced: a background job loads the processor when high-priority jobs are inactive. A background quality function is defined on the basis of statistical data gathered during the timesharing process. The algorithm includes an optimal swapping-out procedure for job replacement in memory. Sharing of system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included). A fast response is guaranteed for interactive jobs that use little time and memory. External priority control is reserved for the high-level scheduler. Experience with implementing the algorithm on the BESM-6 computer at JINR is discussed
Quantum Computation and Algorithms
International Nuclear Information System (INIS)
Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.
1999-01-01
It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
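The recursion mentioned above can be sketched concretely: Grover's algorithm keeps only two distinct amplitudes (the marked state and each unmarked state), so the exact evolution can be followed without simulating the full state vector. The function name and the uniform initial distribution are illustrative assumptions, not taken from the talk:

```python
import math

def grover_amplitudes(n_qubits, steps):
    """Evolve the marked/unmarked amplitudes of Grover's search exactly,
    using the 2-D recursion (oracle sign flip + inversion about the mean)."""
    N = 2 ** n_qubits
    k = l = 1.0 / math.sqrt(N)  # uniform initial amplitude distribution
    history = [(k, l)]
    for _ in range(steps):
        # k: amplitude of the marked state, l: amplitude of each unmarked state
        k, l = ((N - 2) * k + 2 * (N - 1) * l) / N, ((N - 2) * l - 2 * k) / N
        history.append((k, l))
    return history

# success probability peaks after about (pi/4) * sqrt(N) iterations
hist = grover_amplitudes(10, 25)  # N = 1024, ~25 optimal iterations
p_marked = hist[-1][0] ** 2
```

Each step is unitary within the two-dimensional subspace, so the norm is conserved while the marked amplitude grows toward 1.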
WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations
Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi
We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
[An improved algorithm for electrohysterogram envelope extraction].
Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia
2017-02-01
Extraction of the uterine contraction signal from the abdominal uterine electromyogram (EMG) is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. First, a zero-crossing detection method was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess its performance, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones at eliminating impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second. Thus the new method is reliable and effective.
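The baseline step being improved in this record, a moving RMS envelope, can be sketched as follows. The window length and names are illustrative; the paper's zero-crossing burst separation and dual-width windows are not reproduced here:

```python
import math

def rms_envelope(signal, window):
    """Moving root-mean-square envelope of a signal: for each sample,
    the RMS over a centered window (shortened at the signal edges)."""
    half = window // 2
    env = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half): i + half + 1]
        env.append(math.sqrt(sum(x * x for x in seg) / len(seg)))
    return env
```

Because every sample in the window contributes its square, a single impulsive outlier inflates the envelope over the whole window width, which is exactly the weakness the improved algorithm targets.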
Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm
Institute of Scientific and Technical Information of China (English)
Haidong Xu; Mingyan Jiang; Kun Xu
2015-01-01
The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and search precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster and with greater precision than the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.
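For orientation, the plain ABC baseline that this hybrid builds on can be sketched in a few lines. This is a minimal illustrative skeleton (parameter names, bounds and the merged employed/onlooker phase are simplifications), not the ACABC method itself:

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=1):
    """Minimal artificial bee colony sketch: bees perturb food sources toward
    other sources; sources exhausted past `limit` trials are re-scattered."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food
    best = min(foods, key=f)[:]
    for _ in range(iters):
        for i in range(n_food):            # employed + onlooker phase (merged)
            j = rng.randrange(dim)
            k = rng.randrange(n_food)
            cand = foods[i][:]
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            cand[j] = min(max(cand[j], lo), hi)
            if f(cand) < fits[i]:
                foods[i], fits[i], trials[i] = cand, f(cand), 0
            else:
                trials[i] += 1
            if trials[i] > limit:          # scout phase: abandon the source
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = f(foods[i]), 0
        cur = min(foods, key=f)
        if f(cur) < f(best):
            best = cur[:]
    return best
```

The perturbation `foods[i][j] - foods[k][j]` uses only pairwise differences between random sources; ACEDA's contribution in the hybrid is to replace this blind sampling with draws from an estimated distribution model.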
International Nuclear Information System (INIS)
Chandrasekharan, Shailesh
2000-01-01
Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem.
Shcherbakov, R. N.
2016-02-01
Whatever we think of the eminent Russian physicist P N Lebedev, whatever our understanding of how his work was affected by circumstances in and outside of Russia, whatever value is placed on the basic elements of his twenty-year career and personal life and of his great successes and, happily, not so great failures, and whatever the stories of his happy times and his countless misfortunes, one thing remains clear — P N Lebedev's skill and talent served well to foster the development of global science and to improve the reputation of Russia as a scientific nation.
Vanschoren, Joaquin; Blockeel, Hendrik
Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
VISUALIZATION OF PAGERANK ALGORITHM
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them, which the user enters through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
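The iteration described above can be sketched directly; the damping factor 0.85 and the handling of dangling pages below are conventional choices, not details taken from the thesis:

```python
def pagerank(links, damping=0.85, tol=1e-10):
    """Iterative PageRank. `links` maps each page to the pages it links to.
    Repeats the update until the total change falls below `tol`."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    while True:
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                          # dangling page: spread rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        diff = sum(abs(new[p] - rank[p]) for p in pages)
        rank = new
        if diff < tol:
            return rank
```

Because each sweep redistributes all rank mass, the values always sum to 1, and the damping factor guarantees geometric convergence of the iteration.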
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. It can also be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, at the cost of slower convergence. Computer simulations confirm the mathematical analysis presented.
Directory of Open Access Journals (Sweden)
Lovestone Simon
2007-12-01
Background: Shedding of the Alzheimer amyloid precursor protein (APP) ectodomain can be accelerated by phorbol esters, compounds that act via protein kinase C (PKC) or through unconventional phorbol-binding proteins such as Munc13-1. We have previously demonstrated that application of phorbol esters or purified PKC potentiates budding of APP-bearing secretory vesicles at the trans-Golgi network (TGN) and toward the plasma membrane where APP becomes a substrate for enzymes responsible for shedding, known collectively as α-secretase(s). However, molecular identification of the presumptive "phospho-state-sensitive modulators of ectodomain shedding" (PMES) responsible for regulated shedding has been challenging. Here, we examined the effects on APP ectodomain shedding of four phorbol-sensitive proteins involved in regulation of vesicular membrane trafficking of APP: Munc13-1, Munc18, NSF, and Eve-1. Results: Overexpression of either phorbol-sensitive wildtype Munc13-1 or phorbol-insensitive Munc13-1 H567K resulted in increased basal APP ectodomain shedding. However, in contrast to the report of Roßner et al (2004), phorbol ester-dependent APP ectodomain shedding from cells overexpressing APP and Munc13-1 wildtype was indistinguishable from that observed following application of phorbol to cells overexpressing APP and Munc13-1 H567K mutant. This pattern of similar effects on basal and stimulated APP shedding was also observed for Munc18 and NSF. Eve-1, an ADAM adaptor protein reported to be essential for PKC-regulated shedding of pro-EGF, was found to play no obvious role in regulated shedding of sAPPα. Conclusion: Our results indicate that, in the HEK293 system, Munc13-1, Munc18, NSF, and EVE-1 fail to meet essential criteria for identity as PMES for APP.
Ikin, Annat F; Causevic, Mirsada; Pedrini, Steve; Benson, Lyndsey S; Buxbaum, Joseph D; Suzuki, Toshiharu; Lovestone, Simon; Higashiyama, Shigeki; Mustelin, Tomas; Burgoyne, Robert D; Gandy, Sam
2007-12-09
Shedding of the Alzheimer amyloid precursor protein (APP) ectodomain can be accelerated by phorbol esters, compounds that act via protein kinase C (PKC) or through unconventional phorbol-binding proteins such as Munc13-1. We have previously demonstrated that application of phorbol esters or purified PKC potentiates budding of APP-bearing secretory vesicles at the trans-Golgi network (TGN) and toward the plasma membrane where APP becomes a substrate for enzymes responsible for shedding, known collectively as alpha-secretase(s). However, molecular identification of the presumptive "phospho-state-sensitive modulators of ectodomain shedding" (PMES) responsible for regulated shedding has been challenging. Here, we examined the effects on APP ectodomain shedding of four phorbol-sensitive proteins involved in regulation of vesicular membrane trafficking of APP: Munc13-1, Munc18, NSF, and Eve-1. Overexpression of either phorbol-sensitive wildtype Munc13-1 or phorbol-insensitive Munc13-1 H567K resulted in increased basal APP ectodomain shedding. However, in contrast to the report of Rossner et al (2004), phorbol ester-dependent APP ectodomain shedding from cells overexpressing APP and Munc13-1 wildtype was indistinguishable from that observed following application of phorbol to cells overexpressing APP and Munc13-1 H567K mutant. This pattern of similar effects on basal and stimulated APP shedding was also observed for Munc18 and NSF. Eve-1, an ADAM adaptor protein reported to be essential for PKC-regulated shedding of pro-EGF, was found to play no obvious role in regulated shedding of sAPPalpha. Our results indicate that, in the HEK293 system, Munc13-1, Munc18, NSF, and EVE-1 fail to meet essential criteria for identity as PMES for APP.
New preconditioned conjugate gradient algorithms for nonlinear unconstrained optimization problems
International Nuclear Information System (INIS)
Al-Bayati, A.; Al-Asadi, N.
1997-01-01
This paper presents two new preconditioned conjugate gradient algorithms for nonlinear unconstrained optimization problems and examines their computational performance. Computational experience shows that the new proposed algorithms generally improve the efficiency of Nazareth's [13] preconditioned conjugate gradient algorithm. (authors). 16 refs., 1 tab
Directory of Open Access Journals (Sweden)
Alain Boillat
2011-08-01
The author analyzes, from a narratological perspective, three films directed by Joseph L. Mankiewicz between 1949 and 1954 (A Letter to Three Wives, All About Eve and The Barefoot Contessa) in order to examine the particular effects that result from the use of one or more narrators speaking in voice-over. Emphasis is placed on the implications of the complex enunciative organization of these productions for the film's relation to the spectator and for the representation of gender relations. These case studies make it possible to qualify certain established results from the field of enunciation theory. In these films, where the voice-overs are delivered by actors who also embody an on-screen character, certain interactions can be observed, interpreted notably as power relations, between the status of the narrators and the diegetic level. The sonic materiality of the voice is taken into account in the particular case of Letter to Three Wives, where the voice turns into noise. This "special effect" provides an opportunity to discuss an "impersonal" model of filmic enunciation that incorporates, while taking into account the technological dimension of sound recording, the fundamentally humanizing power of vocal manifestations.
Zazzi, M; Kaiser, R; Sönnerborg, A; Struck, D; Altmann, A; Prosperi, M; Rosen-Zvi, M; Petroczi, A; Peres, Y; Schülter, E; Boucher, C A; Brun-Vezinet, F; Harrigan, P R; Morris, L; Obermeier, M; Perno, C-F; Phanuphak, P; Pillay, D; Shafer, R W; Vandamme, A-M; van Laethem, K; Wensing, A M J; Lengauer, T; Incardona, F
2011-04-01
The EuResist expert system is a novel data-driven online system for computing the probability of 8-week success for any given pair of HIV-1 genotype and combination antiretroviral therapy regimen, plus optional patient information. The objective of this study was to compare the EuResist system vs. human experts (EVE) for the ability to predict response to treatment. The EuResist system was compared with 10 HIV-1 drug resistance experts for the ability to predict 8-week response to 25 treatment cases derived from the EuResist database validation data set. All current and past patient data were made available to simulate clinical practice. The experts were asked to provide a qualitative and quantitative estimate of the probability of treatment success. There were 15 treatment successes and 10 treatment failures. In the classification task, the number of mislabelled cases was six for EuResist and 6-13 for the human experts [mean±standard deviation (SD) 9.1±1.9]. The accuracy of EuResist was higher than the average for the experts (0.76 vs. 0.64, respectively). The quantitative estimates computed by EuResist were significantly correlated (Pearson r=0.695) with the mean quantitative estimates provided by the experts. However, the agreement among experts was only moderate (for the classification task, inter-rater κ=0.355; for the quantitative estimation, mean±SD coefficient of variation=55.9±22.4%). With this limited data set, the EuResist engine performed comparably to or better than human experts. The system warrants further investigation as a treatment-decision support tool in clinical practice. © 2010 British HIV Association.
Directory of Open Access Journals (Sweden)
Андрей Николаевич Балыш
2010-12-01
The role of the ammunition industry in connection with changes in combat operations in the first part of the 20th century is analyzed in the present paper. It is studied how the New Economic Policy (NEP) influenced the development of the ammunition industry and the respective branches of heavy industry. For the first time, several manufacturing peculiarities of the main ammunition elements (using the example of ammunition bodies) on the eve of the Great Patriotic War are analyzed on the basis of archival documents, and the role of these peculiarities in the fighting efficiency of the Soviet forces during the Second World War is studied.
Semioptimal practicable algorithmic cooling
International Nuclear Information System (INIS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-01-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Zhang Dongyang
2014-02-01
Because localization algorithms in large-scale wireless sensor networks have shortcomings in both positioning accuracy and time complexity compared to traditional localization algorithms, this paper presents a fast multidimensional scaling (MDS) localization algorithm. Through fast multidimensional scaling positioning, fast mapping initialization, fast mapping and coordinate transformation, the algorithm obtains schematic node coordinates, initializes the coordinates for the MDS algorithm, accurately estimates the node coordinates, and uses Procrustes analysis to align the coordinates and obtain the final position coordinates of the nodes. There are four steps, and the thesis gives the specific implementation steps of the algorithm. Finally, the algorithm is compared experimentally with stochastic algorithms and the classical MDS algorithm on specific examples. Experimental results show that the proposed localization algorithm maintains the positioning accuracy of fast multidimensional scaling under certain circumstances while greatly improving the speed of operation.
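The MDS core that such localization algorithms build on, classical multidimensional scaling from a matrix of pairwise distances, can be sketched with NumPy. This is the textbook method, not the paper's accelerated variant, and the recovered coordinates are unique only up to rotation and reflection:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: recover relative coordinates from an n x n matrix of
    pairwise distances D via double centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:dim]     # take the top-`dim` eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

Since only relative geometry is recovered, sensor-network localization schemes follow this step with an alignment (e.g. Procrustes analysis against anchor nodes) to fix the absolute positions.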
An Improved Perturb and Observe Algorithm for Photovoltaic Motion Carriers
Peng, Lele; Xu, Wei; Li, Liming; Zheng, Shubin
2018-03-01
An improved perturb and observe algorithm for photovoltaic motion carriers is proposed in this paper. The model of the proposed algorithm is given using the Lambert W function and the tangent error method. Moreover, using MATLAB and experiments on a photovoltaic system, the tracking performance of the proposed algorithm is tested. The results demonstrate that the improved algorithm has fast tracking speed and high efficiency. Furthermore, the energy conversion efficiency of the improved method has increased by nearly 8.2%.
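The basic perturb-and-observe loop that the paper improves on can be sketched as follows; the P-V curve, step size and starting point below are illustrative assumptions, and the Lambert-W/tangent-error refinements are not reproduced:

```python
def perturb_and_observe(power_at, v0, step=0.5, iters=60):
    """Basic perturb-and-observe MPPT: nudge the operating voltage and keep
    moving in whichever direction increases the measured output power.
    `power_at` models the panel's P-V curve."""
    v, direction = v0, 1.0
    p = power_at(v)
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power_at(v_new)
        if p_new < p:                 # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p
```

The fixed step size is the classic weakness: the operating point oscillates around the maximum power point by one step, which is why improved variants adapt the perturbation size.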
A new algorithm for coding geological terminology
Apon, W.
The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quarternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm are less than 2%.
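The three steps of the direct method can be sketched with a toy code table; the phrases, codes and conflict pairs below are invented for illustration, and the Survey's actual tables are far larger:

```python
# assumed, illustrative code table (longest phrases listed first)
CODE_TABLE = {
    "fine sand": "Z1", "coarse sand": "Z3", "sand": "Z2", "clay": "K1",
}
# a generic code that conflicts with a more specific one already present
CONFLICTS = {("Z2", "Z1"), ("Z2", "Z3")}

def code_log(description):
    """Direct-method coding: (1) match defined word combinations and assign
    codes, (2) drop duplicated codes, (3) remove inconsistent combinations."""
    text, codes = description.lower(), []
    for phrase, code in CODE_TABLE.items():   # step 1: match and assign
        if phrase in text:
            codes.append(code)
            text = text.replace(phrase, " ")  # consume the matched words
    seen, out = set(), []
    for c in codes:                           # step 2: deduplicate
        if c not in seen:
            seen.add(c)
            out.append(c)
    return [c for c in out                    # step 3: drop conflicting codes
            if not any((c, o) in CONFLICTS for o in out)]
```

Consuming matched words after each hit is what lets "fine sand" take precedence over the bare "sand" entry, mirroring the longest-match behavior a direct-method coder needs.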
Introduction to Evolutionary Algorithms
Yu, Xinjie
2010-01-01
Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
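A scalar recursive-least-squares estimator with exponential forgetting illustrates the general idea of such schemes (uniform forgetting in time; the paper's selective, non-uniform scheme is more elaborate):

```python
def rls_forgetting(xs, ys, lam=0.95):
    """Recursive least squares for a scalar gain y = a*x, with exponential
    forgetting factor `lam`: each old sample is discounted by lam per step."""
    a, P = 0.0, 1e6          # parameter estimate and (scalar) covariance
    for x, y in zip(xs, ys):
        k = P * x / (lam + x * P * x)     # gain
        a += k * (y - a * x)              # correct estimate with new sample
        P = (P - k * x * P) / lam         # discounted covariance update
    return a
```

Dividing the covariance by `lam` each step keeps the estimator alert: old data is forgotten geometrically, so the estimate tracks a parameter that changes mid-stream instead of averaging over its whole history.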
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its
Woo, Andrew
2012-01-01
Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithm are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.
Portfolios of quantum algorithms.
Maurer, S M; Hogg, T; Huberman, B A
2001-12-17
Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to the NP-complete problems such as 3-satisfiability.
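The portfolio effect described above can be illustrated numerically: running k independent copies of a stochastic search and stopping at the first success shortens the expected completion time and shrinks its variance. The geometric run-time distribution below is an illustrative assumption, not the quantum-algorithm model of the paper:

```python
def portfolio_stats(p_success, k):
    """Expected completion time and variance when k independent copies of a
    stochastic algorithm run in parallel and we stop at the first success.
    Single-copy run time is geometric with per-step success prob p_success."""
    q = (1.0 - p_success) ** k        # P(no copy succeeds in a given step)
    mean = 1.0 / (1.0 - q)            # portfolio time is geometric(1 - q)
    var = q / (1.0 - q) ** 2
    return mean, var
```

With a per-step success probability of 0.01, a single copy takes 100 steps on average, while a 4-copy portfolio takes about 25, with a correspondingly smaller variance, which is exactly the mean/variance trade-off that finance-style portfolio selection optimizes.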
An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network
Directory of Open Access Journals (Sweden)
Kai Hu
2013-01-01
A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back-propagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we used GA-BPN for image noise filtering. First, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to restore the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.
Rules Extraction with an Immune Algorithm
Directory of Open Access Journals (Sweden)
Deqin Yan
2007-12-01
Full Text Available In this paper, a method of extracting rules from information systems with an immune algorithm is proposed. The immune algorithm is designed on the basis of a sharing mechanism for rule extraction. The principle of sharing and competing for resources in the sharing mechanism is consistent with the relationship of sharing and rivalry among rules. In order to extract rules efficiently, a new concept of flexible confidence and rule measurement is introduced. Experiments demonstrate that the proposed method is effective.
ALGORITHMS FOR TETRAHEDRAL NETWORK (TEN) GENERATION
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
The Tetrahedral Network (TEN) is a powerful 3-D vector structure in GIS, with many advantages such as a simple structure, fast topological relation processing and rapid visualization. The difficulty in applying TEN is automatically creating the data structure. Although a raster algorithm has been introduced by some authors, problems with accuracy, memory requirement, speed and integrity remain. In this paper, the raster algorithm is completed and a vector algorithm is presented, after a 3-D data model and the structure of TEN have been introduced. Finally, experiments, conclusions and future work are discussed.
Kernel learning algorithms for face recognition
Li, Jun-Bao; Pan, Jeng-Shyang
2013-01-01
Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition, and also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms for kernel-based face recognition, along with the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning areas with advanced face recognition methods and its new
Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.
Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen
2012-02-01
Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. The first goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; e.g., in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm and sketch how it can be optimized to meet the required response time. Performance is compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right; e.g., optic nerve m gives optic nerve meningioma, but opt ner me does not. We conducted a user experiment in which 12 participants were invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
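A minimal sketch of the two matching rules, assuming query fragments must match term words as prefixes in order (the paper's exact rule may differ, e.g. in whether word order matters):

```python
def multi_prefix_match(query, term):
    """True when every whitespace-separated fragment of the query is a
    prefix of some later word of the term, in order; e.g. 'opt ner me'
    matches 'optic nerve meningioma'."""
    words = term.lower().split()
    i = 0
    for part in query.lower().split():
        # skip term words until one starts with this fragment
        while i < len(words) and not words[i].startswith(part):
            i += 1
        if i == len(words):
            return False
        i += 1
    return True

def baseline_match(query, term):
    # Baseline: suggestions simply extend the typed string to the right.
    return term.lower().startswith(query.lower())
```

With these definitions, `multi_prefix_match("opt ner me", "optic nerve meningioma")` holds while the baseline only matches queries such as `optic nerve m`, mirroring the paper's example.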
Algorithm 426: Merge sort algorithm [M1]
Bron, C.
1972-01-01
Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
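The recursive design can be sketched in Python; this is an illustrative reimplementation of the idea, not Bron's published ALGOL 60 code.

```python
def merge_sort(xs):
    """Recursive two-way merge sort: split, sort each half, merge."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # taking from the left on ties keeps the sort stable
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]
```

The elegance the abstract claims for recursion is visible here: the merge is the only clerical work, and correctness follows by induction on the list length.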
Innovations in lattice QCD algorithms
International Nuclear Information System (INIS)
Orginos, Konstantinos
2006-01-01
Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today
Directory of Open Access Journals (Sweden)
Claude LE FUSTEC
2011-03-01
characterized as outcasts. From Pecola, the alienated victim of the WASP definition of beauty to Consolata, the “revised Reverend Mother,” Morrison’s fiction appears to weave its way through the moral complexities of African American female resistance to white male rule—theologically based on the canonical reading of the Fall as supreme calamity caused by Eve, the arch-temptress and sinner—to hand authority back to the pariah and wrongdoer: the black woman. However, far from boiling down to this deconstructive strategy, Morrison’s fiction seems to oppose religious doctrine only so as to sound the ontological depth of Christianity: while challenging the theological basis of sexist and racist assumptions, Morrison poses as an authoritative spiritual force able to craft her own Gospel of Self, based on cathartic moments of revelation where mostly female characters experience a mystic sense of connectedness to self and other, time and place. An African American female variation on the New Testament’s “Kingdom within,” Morrison’s novels echo black abolitionist Sojourner Truth’s famous speech “Ain’t I a Woman?” and its call for theological revision and gender emancipation.
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of
Finite lattice extrapolation algorithms
International Nuclear Information System (INIS)
Henkel, M.; Schuetz, G.
1987-08-01
Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
Recursive automatic classification algorithms
Energy Technology Data Exchange (ETDEWEB)
Bauman, E V; Dorofeyuk, A A
1982-03-01
A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 2, Issue 8. Algorithms – Algorithm Design Techniques. R K Shyamasundar. Series Article. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for an evolutionary technique designed around a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions, along with the energies and geometric structures of Lennard-Jones clusters, are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, which is a quantum algorithm providing quadratic speedup over its classical counterpart.
International Nuclear Information System (INIS)
Noga, M.T.
1984-01-01
This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet a lot of what constitutes 'algorithms' beyond their broad definition as "encoded procedures for transforming input data into a desired output, based on specified calculations" (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawn from the internet and ICT realm: search engine queries and e-commerce websites' recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions' regulation of algorithms and algorithms' regulation of our society.
Where genetic algorithms excel.
Baum, E B; Boneh, D; Garrett, C
2001-01-01
We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem's input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality … of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.
Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search
Directory of Open Access Journals (Sweden)
Xingwang Huang
2017-01-01
Full Text Available The binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared with other binary heuristic algorithms. Since the velocity update process of the algorithm is consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numeric results obtained in the benchmark function experiments prove that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparison with several other heuristic algorithms on zero-one knapsack problems also verifies that the proposed algorithm is better able to avoid local minima.
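The binarization step shared by such binary swarm methods can be sketched as follows: a transfer function maps each real-valued velocity to a bit-flip probability. The |tanh| shape used here is a generic V-shaped choice for illustration, not the exact IBBA update rule.

```python
import math
import random

def v_shaped_flip(velocity, rng):
    """Map a real-valued velocity to a bit-flip probability with a
    V-shaped transfer function (|tanh| here; binary swarm algorithms
    such as BBA and BPSO use transfer functions of similar shape)."""
    return rng.random() < abs(math.tanh(velocity))

def update_bits(bits, velocities, rng):
    # Flip each bit independently; larger |velocity| means likelier flip.
    return [b ^ v_shaped_flip(v, rng) for b, v in zip(bits, velocities)]

rng = random.Random(42)
print(update_bits([0, 1, 0, 1], [0.0, 5.0, -5.0, 0.0], rng))
```

Bits with zero velocity never flip, while bits with large positive or negative velocity flip almost surely, which is how the real-valued search dynamics drive the binary position.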
Directory of Open Access Journals (Sweden)
Hans Schönemann
1996-12-01
Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The firefly algorithm is one such recent swarm-based meta-heuristic, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
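For reference, the core update of the standard firefly algorithm (Yang, 2008) can be sketched as below: every firefly moves toward each brighter one with attractiveness beta0·exp(−gamma·r²) plus a small random step. This is a minimal sketch of the standard algorithm only; the paper's MoFA modifications are not specified here, and all parameter values are illustrative assumptions.

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=200,
                     alpha=0.2, beta0=1.0, gamma=0.01, seed=1):
    """Minimize f with the standard firefly update rule."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        vals = [f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:  # firefly j is brighter (lower cost)
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
        alpha *= 0.97  # gradually damp the random walk
    return min(xs, key=f)

best = firefly_minimize(lambda x: sum(v * v for v in x))  # sphere function
```

On the 2-D sphere function the swarm contracts toward the brightest firefly while the decaying random step refines the estimate near the optimum.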
International Nuclear Information System (INIS)
Nicolini, G.; Clivio, A.; Vanetti, E.; Cozzi, L.; Fogliata, A.; Krauss, H.; Fenoglietto, P.
2013-01-01
Purpose: To demonstrate the feasibility of portal dosimetry with an amorphous silicon megavoltage imager for flattening filter free (FFF) photon beams by means of the GLAaS methodology, and to validate it for pretreatment quality assurance of volumetric modulated arc therapy (RapidArc). Methods: The GLAaS algorithm, developed for flattened beams, was applied to FFF beams of nominal energy 6 and 10 MV generated by a Varian TrueBeam (TB). The amorphous silicon electronic portal imager [named megavoltage imager (MVI) on TB] was used to generate integrated images that were converted into matrices of absorbed dose to water. To enable GLAaS use under the increased dose-per-pulse and dose-rate conditions of the FFF beams, a new operational source-to-detector distance (SDD) was identified to solve detector saturation issues. Empirical corrections were defined to account for the shape of the profiles of the FFF beams, expanding the original methodology of beam profile and arm backscattering correction. GLAaS for FFF beams was validated on pretreatment verification of RapidArc plans for three different TB linacs. In addition, the first pretreatment results from clinical experience on 74 arcs are reported in terms of γ analysis. Results: The MVI saturates at 100 cm SDD for FFF beams, but this can be avoided if images are acquired at 150 cm for all nominal dose rates of FFF beams. Rotational stability of the gantry-imager system was tested and resulted in a minimal apparent imager displacement during rotation of 0.2 ± 0.2 mm at SDD = 150 cm. The accuracy of this approach was tested with three different Varian TrueBeam linacs from different institutes. Data were stratified per energy and machine and showed no dependence on beam quality or MLC model. The results from clinical pretreatment quality assurance provided a gamma agreement index (GAI) in the field area for the 6 and 10 MV FFF beams of (99.8 ± 0.3)% and (99.5 ± 0.6)% with distance to agreement and dose difference criteria
Advancements to the planogram frequency–distance rebinning algorithm
International Nuclear Information System (INIS)
Champley, Kyle M; Kinahan, Paul E; Raylman, Raymond R
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact
A decision algorithm for patch spraying
DEFF Research Database (Denmark)
Christensen, Svend; Heisel, Torben; Walter, Mette
2003-01-01
A method that estimates an economically optimal herbicide dose according to site-specific weed composition and density is presented in this paper. The method was termed a 'decision algorithm for patch spraying' (DAPS) and was evaluated in a 5-year experiment in Denmark. DAPS consists of a competition model, a herbicide dose–response model and an algorithm that estimates the economically optimal doses. The experiment was designed to compare herbicide treatments with DAPS recommendations and the Danish decision support system PC-Plant Protection. The results did not show any significant grain yield difference...
Parallel algorithms for continuum dynamics
International Nuclear Information System (INIS)
Hicks, D.L.; Liebrock, L.M.
1987-01-01
Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). Several improvement strategies are also adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a kind of random linear method; and finally the tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm, giving full play to the advantages of each. The method is verified on the standard Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, which proves it to be an effective way to predict the structure of proteins.
Solving SAT Problem Based on Hybrid Differential Evolution Algorithm
Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan
Satisfiability (SAT) problem is an NP-complete problem. Based on the analysis about it, SAT problem is translated equally into an optimization problem on the minimum of objective function. A hybrid differential evolution algorithm is proposed to solve the Satisfiability problem. It makes full use of strong local search capacity of hill-climbing algorithm and strong global search capability of differential evolution algorithm, which makes up their disadvantages, improves the efficiency of algorithm and avoids the stagnation phenomenon. The experiment results show that the hybrid algorithm is efficient in solving SAT problem.
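The reduction to minimization, and the hill-climbing half of such a hybrid, can be sketched as follows. The differential evolution global step is omitted; the signed-integer clause encoding and the tiny example instance are illustrative choices, not from the paper.

```python
import random

def unsatisfied(clauses, assign):
    """Objective function: number of clauses left unsatisfied; the
    instance is satisfiable iff the minimum is 0. A literal is a signed
    integer: +i means x_i, -i means NOT x_i (variables 1-indexed)."""
    def sat(clause):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
    return sum(0 if sat(c) else 1 for c in clauses)

def hill_climb(clauses, n_vars, seed=0, max_flips=10000):
    """Greedy bit-flip local search on the objective."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]
    cost = unsatisfied(clauses, assign)
    for _ in range(max_flips):
        if cost == 0:
            break
        i = rng.randrange(n_vars)
        assign[i] = not assign[i]
        new_cost = unsatisfied(clauses, assign)
        if new_cost <= cost:  # accept improving and sideways flips
            cost = new_cost
        else:
            assign[i] = not assign[i]  # undo a worsening flip
    return assign, cost

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
assign, cost = hill_climb(clauses, 3)
```

Pure hill climbing like this stagnates on larger instances, which is exactly the weakness the hybrid addresses by pairing it with differential evolution's global search.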
Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.
Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K
2010-03-21
We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower for the two-step algorithm, and the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
Star point centroid algorithm based on background forecast
Wang, Jin; Zhao, Rujin; Zhu, Nan
2014-09-01
The calculation of the star point centroid is a key step in reducing star tracker measuring error. A star map photographed by an APS detector includes several kinds of noise which have a great impact on the accuracy of the star point centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecasting is presented in this paper. The experiment proves the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also does not need calibration data memory. This algorithm has been applied successfully in a certain star tracker.
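A simplified version of the idea: estimate ('forecast') the background level from the window's border pixels, subtract it, and compute the intensity-weighted centroid of what remains. The border-mean estimate and the example window are stand-ins for the paper's actual background-forecast model.

```python
def centroid_with_background(window):
    """Intensity-weighted centroid of a star-image window after
    subtracting a background level estimated from the border pixels."""
    h, w = len(window), len(window[0])
    border = [window[y][x] for y in range(h) for x in range(w)
              if y in (0, h - 1) or x in (0, w - 1)]
    bg = sum(border) / len(border)
    sx = sy = total = 0.0
    for y in range(h):
        for x in range(w):
            v = max(window[y][x] - bg, 0.0)  # clip negative residuals
            total += v
            sx += v * x
            sy += v * y
    if total == 0.0:
        return None  # no signal above background
    return sx / total, sy / total

# 5x5 window: flat background of 100 with a symmetric star at (2, 2)
window = [[100.0] * 5 for _ in range(5)]
window[2][2] = 200.0
for y, x in [(1, 2), (3, 2), (2, 1), (2, 3)]:
    window[y][x] = 130.0
cx, cy = centroid_with_background(window)
```

Without the background subtraction the flat pedestal would bias the centroid toward the window centre regardless of where the star actually sits; subtracting the forecast removes that bias.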
Formal verification of algorithms for critical systems
Rushby, John M.; Von Henke, Friedrich
1993-01-01
We describe our experience with formal, machine-checked verification of algorithms for critical applications, concentrating on a Byzantine fault-tolerant algorithm for synchronizing the clocks in the replicated computers of a digital flight control system. First, we explain the problems encountered in unsynchronized systems and the necessity, and criticality, of fault-tolerant synchronization. We give an overview of one such algorithm, and of the arguments for its correctness. Next, we describe a verification of the algorithm that we performed using our EHDM system for formal specification and verification. We indicate the errors we found in the published analysis of the algorithm, and other benefits that we derived from the verification. Based on our experience, we derive some key requirements for a formal specification and verification system adequate to the task of verifying algorithms of the type considered. Finally, we summarize our conclusions regarding the benefits of formal verification in this domain, and the capabilities required of verification systems in order to realize those benefits.
Fast algorithm of adaptive Fourier series
Gao, You; Ku, Min; Qian, Tao
2018-05-01
Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) originated with the goal of positive-frequency representations of signals. It achieved that goal and at the same time offered fast decompositions of signals. Several types of AFD then arose. AFD merged with the greedy algorithm idea and, in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA), which was proven to be the most efficient greedy algorithm. The cost of the advantages of the AFD-type decompositions is, however, high computational complexity due to the involvement of maximal selections of the dictionary parameters. The present paper offers a formulation of the 1-D AFD algorithm that builds the FFT algorithm into it. Accordingly, the algorithm complexity is reduced from the original $\mathcal{O}(M N^2)$ to $\mathcal{O}(M N \log_2 N)$, where $N$ denotes the number of discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)
Hockney, Roger
1987-01-01
Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
Diagnostic Algorithm Benchmarking
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking of diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state of the art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications, including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Solving the SAT problem using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Arunava Bhattacharjee
2017-08-01
In this paper we propose our genetic algorithm for solving the SAT problem. We introduce various crossover and mutation techniques and then make a comparative analysis between them in order to find out which techniques are best suited for solving a SAT instance. Before the genetic algorithm is applied to an instance, it is better to search the given formula for unit and pure literals and eliminate them; this can considerably reduce the search space, and to demonstrate this we tested our algorithm on some random SAT instances. To analyse the various crossover and mutation techniques, and also to evaluate the optimality of our algorithm, we performed extensive experiments on benchmark instances of the SAT problem. We also estimated the ideal crossover length that would maximise the chances of solving a given SAT instance.
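A minimal GA for SAT in this spirit can be sketched as follows; the formula, parameters and operator choices are illustrative assumptions, not the paper's, and the unit/pure-literal preprocessing step is omitted for brevity.

```python
import random

# Literals are signed ints (-3 means "NOT x3"), an assignment is a list of bools,
# and fitness counts satisfied clauses. The formula below is an arbitrary demo.
random.seed(7)

FORMULA = [[1, -2, 3], [-1, 2], [2, 3, -4], [-3, 4], [1, 4]]
N_VARS = 4

def satisfied(clause, assign):
    return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)

def fitness(assign):
    return sum(satisfied(c, assign) for c in FORMULA)

def crossover(a, b):
    point = random.randrange(1, N_VARS)          # one-point crossover
    return a[:point] + b[point:]

def mutate(assign, rate=0.1):
    return [(not g) if random.random() < rate else g for g in assign]

def genetic_sat(pop_size=20, generations=100):
    pop = [[random.random() < 0.5 for _ in range(N_VARS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(FORMULA):      # satisfying assignment found
            return pop[0]
        elite = pop[: pop_size // 2]             # keep the fitter half
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = genetic_sat()
```

On an instance this small the GA finds a satisfying assignment almost immediately; the point of the sketch is only the shape of the selection, crossover and mutation loop.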
Genetic algorithm solution for partial digest problem.
Ahrabian, Hayedeh; Ganjtabesh, Mohammad; Nowzari-Dalini, Abbas; Razaghi-Moghadam-Kashani, Zahra
2013-01-01
One of the fundamental problems in computational biology is the construction of physical maps of chromosomes from hybridisation experiments between unique probes and clones of chromosome fragments. Before the introduction of the shotgun sequencing method, the Partial Digest Problem (PDP) was an intractable problem used to construct physical maps of DNA sequences in molecular biology. In this paper, we develop a novel Genetic Algorithm (GA) for solving the PDP. This algorithm is implemented and compared with well-known existing algorithms on different types of random and real instance data, and the obtained results show the efficiency of our algorithm. Our GA is also adapted to handle erroneous data, and its efficiency is presented for large instances of this problem.
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
2010-01-01
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, and the method of feasible directions.
[An Algorithm to Eliminate Power Frequency Interference in ECG Using Template].
Shi, Guohua; Li, Jiang; Xu, Yan; Feng, Liang
2017-01-01
We investigate an algorithm to eliminate power frequency interference in ECG. The algorithm first creates a power frequency interference template, then subtracts the template from the original ECG signals; finally, it obtains the ECG signals without interference. Experiments show that the algorithm can eliminate the interference effectively and has no side effect on the normal signal. It is efficient and suitable for practical use.
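The template idea can be sketched as follows. The sampling rate, mains frequency and synthetic signal are assumptions for the demo; the simple per-period averaging shown here cancels the interference exactly only because the test signal is pure hum.

```python
import math

FS = 500                 # samples per second (assumed)
MAINS_HZ = 50            # power frequency (assumed)
PERIOD = FS // MAINS_HZ  # samples per interference cycle (10 here)

def build_template(signal):
    # average corresponding samples across all complete interference periods
    n_periods = len(signal) // PERIOD
    return [sum(signal[p * PERIOD + i] for p in range(n_periods)) / n_periods
            for i in range(PERIOD)]

def remove_interference(signal):
    # subtract the one-period template from every period of the signal
    template = build_template(signal)
    return [s - template[i % PERIOD] for i, s in enumerate(signal)]

# synthetic check: one second of pure 50 Hz "hum" with no ECG present,
# which the template subtraction should null out almost exactly
hum = [0.3 * math.sin(2 * math.pi * MAINS_HZ * t / FS) for t in range(FS)]
clean = remove_interference(hum)
```

In practice the template must be estimated from stretches where the ECG itself averages out across periods; otherwise cardiac content would leak into the template and be subtracted too.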
From Genetics to Genetic Algorithms
Indian Academy of Sciences (India)
Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ..... Genetic Algorithms in Search, Optimisation and Machine. Learning, Addison-Wesley Publishing Company, Inc. 1989.
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
With the development of social services and the further rise in living standards, there is an urgent need for a positioning technology that can adapt to complex situations. In recent years, RFID technology has found a wide range of applications in life and production, such as logistics tracking, car alarms and item security. Using RFID technology for positioning is a new direction in the eyes of various research institutions and scholars. RFID positioning offers system stability, small error and low cost, and its location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, higher-accuracy network-based location methods are discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, deficiencies in the algorithms are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
Directory of Open Access Journals (Sweden)
Surafel Luleseged Tilahun
2012-01-01
The firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies: randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the direction in which the brightness increases most; if no such direction is generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
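For reference, the standard firefly update that the paper modifies can be sketched as below. The 2-D sphere objective, the bounds and all parameter values are assumptions for the demo, not anything from the paper.

```python
import math, random

random.seed(3)

def objective(x):                       # minimize the 2-D sphere function
    return x[0] ** 2 + x[1] ** 2

def firefly(n=20, iters=150, beta0=1.0, gamma=0.01, alpha=0.1):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if objective(pop[j]) < objective(pop[i]):   # j is brighter
                    # attractiveness decays with squared distance
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [xi + beta * (xj - xi) + alpha * (random.random() - 0.5)
                              for xi, xj in zip(pop[i], pop[j])]
        # the brightest firefly moves randomly; this is the step the paper
        # replaces with a search over random directions
        k = min(range(n), key=lambda m: objective(pop[m]))
        pop[k] = [x + alpha * (random.random() - 0.5) for x in pop[k]]
    return min(pop, key=objective)

best = firefly()
```

The swarm contracts toward the best solution found so far, while the random walk of the brightest firefly, precisely the step the paper targets, supplies the only exploration near the optimum.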
International Nuclear Information System (INIS)
Schaart, Dennis R.; Jansen, Jan Th.M.; Zoetelief, Johannes; Leege, Piet F.A. de
2002-01-01
The condensed-history electron transport algorithms in the Monte Carlo code MCNP4C are derived from ITS 3.0, which is a well-validated code for coupled electron-photon simulations. This, combined with its user-friendliness and versatility, makes MCNP4C a promising code for medical physics applications. Such applications, however, require a high degree of accuracy. In this work, MCNP4C electron depth-dose distributions in water are compared with published ITS 3.0 results. The influences of voxel size, substeps and choice of electron energy indexing algorithm are investigated at incident energies between 100 keV and 20 MeV. Furthermore, previously published dose measurements for seven beta emitters are simulated. Since MCNP4C does not allow tally segmentation with the *F8 energy deposition tally, even a homogeneous phantom must be subdivided into cells to calculate the distribution of dose. The repeated interruption of the electron tracks at the cell boundaries significantly affects the electron transport. An electron track length estimator of absorbed dose is described which allows tally segmentation. In combination with the ITS electron energy indexing algorithm, this estimator appears to reproduce ITS 3.0 and experimental results well. If, however, cell boundaries are used instead of segments, or if the MCNP indexing algorithm is applied, the agreement is considerably worse. (author)
Improved multivariate polynomial factoring algorithm
International Nuclear Information System (INIS)
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically, it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timings are included.
Pedagoogiline praktika õpetajakoolituses / Eve Eisenschmidt
Eisenschmidt, Eve, 1963-
2003-01-01
Solutions for making teaching practice more effective. To assess the role of teaching practice in the formation of students' professional self-concept and to clarify the supervisor's role, third-year class-teacher students of Haapsalu College were surveyed while on a five-week practice at the first school stage in February-March 2002.
Onu Bella kollektsiooniaed / Eve Veigel
Veigel, Eve, 1961-
2005-01-01
On the home garden in Tartu of Onu Bella (Igor Maasik), singer and host of the Sunday programme "Vox Humana" on the Põlva radio station Marta FM. Onu Bella has grown roses and shaped conifers, and is currently establishing a mini-arboretum. 12 illustrations.
Põrand kannab sisustust / Eve Variksoo
Variksoo, Eve
1998-01-01
On old and new floor coverings, such as plank flooring, board parquet, strip parquet, laminate flooring, linoleum, PVC coverings, stone floors, cork coverings, carpeting, sisal, coir and ceramic tiles. On materials suitable and unsuitable for underfloor heating. On the charm of a wooden floor.
Sotsiaalkapitali roll majandusarengus / Eve Parts
Parts, Eve, 1970-
2006-01-01
On the possibilities of including social capital as an independent additional variable in traditional growth models, on possible substantive mechanisms of influence, and on its effect on human capital and on achieving development goals related to institutional efficiency and social welfare. Diagrams. Tables.
Kas saatejuht vastutab? / Eve Heinla
Heinla, Eve, 1966-
2004-01-01
See also Vesti Dnja, 22 Sept., p. 6. On Urmas Ott's interview with a 15-year-old schoolboy on the subject of Chechnya in the TV show "Happy Hour". In the assessment of the Child Protection Union, the host did not proceed from the child's interests. In addition: Urmas Ott's questions to the Chechen boy.
Lõhe Res Publicas / Eve Heinla
Heinla, Eve, 1966-
2005-01-01
Tallinn mayor Tõnis Palts opposes the appointment of Kristina Tauts, wife of Toomas Tauts, chairman of the Res Publica faction of the Tallinn city council, as administrative director of the West Tallinn Central Hospital. From T. Palts's letter to the board of the Res Publica Tallinn branch. Supplement: How the power struggle in the capital flared up. Comments by Maret Maripuu, Tõnis Palts and Vladimir Maslov.
DEFF Research Database (Denmark)
Henriksen, Lars Skov; Zimmer, Annette; Smith, Steven Rathgeb
Due to severe societal, economic and political changes, of which the financial crisis counts prominently, welfare states all over the world are under stress. In our comparative analysis, we will concentrate on specific segments of welfare state activity in Denmark, Germany, and the United States. Specifically, we will investigate whether and to what extent social services and health care in these three countries are affected by current changes. With a special focus on nonprofit organizations, we will particularly address the question whether a trend towards convergence of the very different welfare...
An empirical study on SAJQ (Sorting Algorithm for Join Queries)
Directory of Open Access Journals (Sweden)
Hassan I. Mathkour
2010-06-01
Most queries applied to database management systems (DBMS) depend heavily on the performance of the sorting algorithm used. In addition to efficiency as a primary feature, stability is a major feature needed of sorting algorithms that perform DBMS queries. In this paper, we study a new Sorting Algorithm for Join Queries (SAJQ) that has both advantages of being efficient and stable. The proposed algorithm takes advantage of the m-way-merge algorithm to enhance its time complexity. SAJQ performs the sorting operation in a time complexity of O(n log m), where n is the length of the input array and m is the number of sub-arrays used in sorting. An unsorted input array of length n is arranged into m sorted sub-arrays, and the m-way-merge algorithm merges the m sorted sub-arrays into the final sorted output array. The proposed algorithm keeps the stability of the keys intact. An analytical proof has been conducted to show that, in the worst case, the proposed algorithm has a complexity of O(n log m). Also, a set of experiments has been performed to investigate the performance of the proposed algorithm. The experimental results have shown that the proposed algorithm outperforms other stable sorting algorithms designed for join-based queries.
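The m-way-merge core of such an approach can be sketched with the standard library's heap-based merge. Function names and the demo rows are illustrative; run sorting is delegated to the built-in sort, and decorating each row with its original index keeps equal join keys in input order, i.e. the sort is stable.

```python
import heapq

def sajq_like_sort(rows, key, m=4):
    # split the input into about m runs, sort each run, then m-way-merge them;
    # the heap-based merge step costs O(n log m)
    n = len(rows)
    size = max(1, (n + m - 1) // m)
    # decorate with (key, original index) so ties keep their input order
    decorated = [((key(r), i), r) for i, r in enumerate(rows)]
    runs = [sorted(decorated[i:i + size]) for i in range(0, n, size)]
    return [r for _, r in heapq.merge(*runs)]

rows = [("b", 2), ("a", 1), ("b", 1), ("a", 2), ("c", 0), ("a", 3)]
out = sajq_like_sort(rows, key=lambda r: r[0])
# → rows ordered by join key, equal keys keeping their original relative order:
# [("a", 1), ("a", 2), ("a", 3), ("b", 2), ("b", 1), ("c", 0)]
```

Stability matters for join queries because rows sharing a join key must not be reordered between passes; the index in the decoration guarantees it regardless of the underlying merge.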
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Although the concept of algorithms was established a long time ago, its current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity, or the space of algorithmic agency. This is the space or the medium – following Luhmann's form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled "extimate" writing process, human initiative and algorithmic speculation can no longer be clearly divided out. An observation of the defining aspects of such a medium is attempted by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration, and it is indicated by this trajectory.
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
Trends in physics data analysis algorithms
International Nuclear Information System (INIS)
Denby, B.
2004-01-01
The paper provides a new look at algorithmic trends in modern physics experiments. Based on recently presented material, it attempts to draw conclusions in order to form a coherent historical picture of the past, present, and possible future of the field of data analysis techniques in physics. The importance of cross disciplinary approaches is stressed
Fireworks algorithm for mean-VaR/CVaR models
Zhang, Tingting; Liu, Zhifeng
2017-10-01
Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, named the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has more advantages than the genetic algorithm in solving the portfolio optimization problem, and that it is feasible and promising to apply it to this field.
A Multistrategy Optimization Improved Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Wen Liu
2014-01-01
To address the shortcomings of premature convergence and a slow convergence rate in the artificial bee colony algorithm, an improved algorithm is proposed. Chaotic reverse-learning strategies are used to initialize the swarm in order to improve the global search ability of the algorithm and keep its diversity; the similarity degree of individuals is used to characterize the diversity of the population; the population diversity measure is set as an indicator to dynamically and adaptively adjust the nectar positions, so that premature and local convergence are avoided effectively; and a dual-population search mechanism is introduced in the search stage, whose parallel search considerably improves the convergence rate. Simulation experiments on 10 standard test functions, compared with other algorithms, showed that the improved algorithm has a faster convergence rate and a greater capacity for jumping out of local optima.
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
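Property (1), the clipped βk ≥ 0, is easy to see in a minimal sketch of a PRP-type method. The 2-D quadratic test problem and the exact line search below are illustrative assumptions, not the paper's methods or line-search conditions.

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def prp_plus_cg(A, b, x0, iters=50, tol=1e-10):
    # minimize f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b
    x = list(x0)
    g = [gi - bi for gi, bi in zip(mat_vec(A, x), b)]
    d = [-gi for gi in g]                               # initial steepest descent
    for _ in range(iters):
        if dot(g, g) < tol:
            break
        Ad = mat_vec(A, d)
        alpha = -dot(g, d) / dot(d, Ad)                 # exact line search (quadratic)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi - bi for gi, bi in zip(mat_vec(A, x), b)]
        # PRP beta clipped at zero: beta_k = max(0, g_new^T (g_new - g) / ||g||^2)
        beta = max(0.0, dot(g_new, [a - c for a, c in zip(g_new, g)]) / dot(g, g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]
x = prp_plus_cg(A, b, [0.0, 0.0])
# x approaches the minimizer of the quadratic, i.e. the solution of A x = b
```

Clipping β at zero restarts the method with a steepest-descent step whenever the PRP formula would go negative, which is what makes the descent and trust-region-style properties hold without a line search in the paper's setting.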
Selection of views to materialize using simulated annealing algorithms
Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin
2002-03-01
A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize to answer a given set of queries. The goal is the minimization of the combined query evaluation and view maintenance costs. In this paper, we design algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and maintaining the materialized views is minimized. We develop an approach using simulated annealing to solve this problem: first, we explore simulated annealing algorithms to optimize the selection of materialized views, and then we use experiments to demonstrate the approach. We implemented our algorithms, and a performance study shows that the proposed algorithm gives an optimal solution.
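A generic simulated-annealing loop over subsets of candidate views illustrates the selection step. The cost model below, fixed per-view query benefits and maintenance costs, is a synthetic assumption standing in for the paper's warehouse cost model.

```python
import math, random

random.seed(1)

QUERY_BENEFIT = [40, 25, 60, 10, 35]   # cost saved if view i is materialized (assumed)
MAINT_COST = [15, 20, 30, 5, 50]       # maintenance cost of view i (assumed)
BASE_COST = 200                        # cost of answering everything from base tables

def cost(selected):
    # combined query evaluation + view maintenance cost for a set of views
    saved = sum(QUERY_BENEFIT[i] for i in selected)
    maint = sum(MAINT_COST[i] for i in selected)
    return BASE_COST - saved + maint

def anneal(n_views, t0=100.0, cooling=0.95, steps=2000):
    current, best = set(), set()
    t = t0
    for _ in range(steps):
        # neighbour state: flip membership of one random view
        i = random.randrange(n_views)
        candidate = set(current)
        candidate.symmetric_difference_update({i})
        delta = cost(candidate) - cost(current)
        # accept improvements always, deteriorations with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = set(current)
        t = max(t * cooling, 1e-3)     # geometric cooling with a floor
    return best, cost(best)

best, c = anneal(5)
```

Under this toy model only views whose benefit exceeds their maintenance cost are worth keeping, and the annealer converges to such a subset; the real difficulty in the paper lies in the interdependent costs, which the flat model above deliberately ignores.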
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination of and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs, from their earliest definition until now. A broad review, including various algorithmic solutions as well as successful applications, is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.
A quantum algorithm for Viterbi decoding of classical convolutional codes
Grice, Jon R.; Meyer, David A.
2014-01-01
We present a quantum Viterbi algorithm (QVA) with better-than-classical performance under certain conditions, for instance large constraint length $Q$ and short decode frames $N$. In this paper the proposed algorithm is applied to decoding classical convolutional codes. Other applications of the classical Viterbi algorithm where $Q$ is large (e.g. speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butter...
Photovoltaic Cells Mppt Algorithm and Design of Controller Monitoring System
Meng, X. Z.; Feng, H. B.
2017-10-01
This paper combines the advantages of existing maximum power point tracking (MPPT) algorithms and puts forward an algorithm with higher speed and higher precision; based on this algorithm, a maximum power point tracking controller was designed around an ARM processor. The controller, communication technology and PC software form a control system. Results of the simulation and experiment showed that the maximum power tracking process was effective and the system was stable.
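As a concrete reference point, here is a hedged sketch of perturb-and-observe, one of the basic MPPT algorithms such controllers commonly combine. The P-V curve, step size and voltages below are toy assumptions for the demo, not a real panel model or the paper's algorithm.

```python
def pv_power(v):
    # toy concave power-voltage curve with its maximum at v = 17.0 (illustrative)
    return max(0.0, 60.0 - (v - 17.0) ** 2)

def perturb_and_observe(v0=10.0, step=0.1, iters=200):
    # perturb the operating voltage; if power drops, reverse direction
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:            # power fell: the perturbation overshot the peak
            direction = -direction
        v, p = v_new, p_new
    return v, p

v, p = perturb_and_observe()
# v settles into a small oscillation band around the maximum power point (17.0 here)
```

The characteristic weakness visible even in this sketch, permanent oscillation of one step width around the peak, is exactly what combined or adaptive-step MPPT schemes try to remove.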
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
In tasks of natural-language text processing, Named Entity Linking (NEL) is the task of identifying an entity found in the text and linking it with an entity in a knowledge base (for example, DBpedia). Currently there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph-based and machine-learning approaches is proposed, in accordance with the stated assumptions about the interrelations of named entities in a sentence and in the text in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on some knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. Based on machine learning algorithms alone, an independent solution cannot be built, due to the small volumes of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was experimentally tested: a test dataset was independently generated, and on its basis the performance of a model using the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup based on the proposed algorithm showed low speed compared to DBpedia Spotlight; however, it showed higher accuracy, which indicates the prospects for work in this direction. The main directions of development are proposed in order to increase the accuracy and the productivity of the system.
Almagest, a new trackless ring finding algorithm
Energy Technology Data Exchange (ETDEWEB)
Lamanna, G., E-mail: gianluca.lamanna@cern.ch
2014-12-01
A fast ring finding algorithm is a crucial point in allowing the use of a RICH in on-line trigger selection. The present algorithms are either too slow (with respect to the incoming data rate) or need information coming from a tracking system. Digital image techniques assuming limited computing power (as for example the Hough transform) are not perfectly robust as regards noise immunity. We present a novel technique based on Ptolemy's theorem for multi-ring pattern recognition. Starting from purely geometrical considerations, this algorithm (also known as “Almagest”) allows fast and trackless ring reconstruction, with spatial resolution comparable with other offline techniques. Almagest is particularly suitable for parallel implementation on multi-core machines. Preliminary tests on GPUs (multi-core video card processors) show that, thanks to an execution time smaller than 10 μs per event, this algorithm could be employed for on-line selection in trigger systems. The use case of the NA62 RICH trigger, based on GPU, will be discussed. - Highlights: • A new algorithm for fast multiple-ring searching in RICH detectors is presented. • The Almagest algorithm exploits the computing power of graphics processors (GPUs). • A preliminary implementation for on-line triggering in the NA62 experiment shows encouraging results.
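The geometric heart of the method, Ptolemy's equality for four concyclic points taken in cyclic order (AC·BD = AB·CD + BC·AD), can be sketched as a simple consistency test. The tolerance and point layout are assumptions for the demo, not NA62 parameters.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def ptolemy_residual(a, b, c, d):
    # a, b, c, d must be taken in cyclic order around the candidate ring;
    # the residual vanishes exactly when the four points are concyclic
    return abs(dist(a, c) * dist(b, d)
               - (dist(a, b) * dist(c, d) + dist(b, c) * dist(a, d)))

def on_common_circle(a, b, c, d, tol=1e-6):
    return ptolemy_residual(a, b, c, d) < tol

# four hits on the unit circle, in angular order, plus one off-circle hit
ring = [(math.cos(t), math.sin(t)) for t in (0.1, 1.2, 2.5, 4.0)]
noise = (0.5, 0.1)
```

Because the test uses only pairwise distances, it needs no circle fit and no track seed, which is what makes it cheap enough to evaluate massively in parallel on a GPU.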
Algorithms for Monte Carlo calculations with fermions
International Nuclear Information System (INIS)
Weingarten, D.
1985-01-01
We describe a fermion Monte Carlo algorithm due to Petcher and the present author and another due to Fucito, Marinari, Parisi and Rebbi. For the first algorithm we estimate the number of arithmetic operations required to evaluate a vacuum expectation value grows as N^11/m_q on an N^4 lattice with fixed periodicity in physical units and renormalized quark mass m_q. For the second algorithm the rate of growth is estimated to be N^8/m_q^2. Numerical experiments are presented comparing the two algorithms on a lattice of size 2^4. With a hopping constant K of 0.15 and β of 4.0 we find the number of operations for the second algorithm is about 2.7 times larger than for the first and about 13,000 times larger than for corresponding Monte Carlo calculations with a pure gauge theory. An estimate is given for the number of operations required for more realistic calculations by each algorithm on a larger lattice. (orig.)
New Parallel Algorithms for Landscape Evolution Model
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize because of the computation of the drainage area for each node, which requires a huge amount of communication when run in parallel. To overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing techniques, and numerical experiments show that both are adequate for large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
Distribution agnostic structured sparsity recovery algorithms
Al-Naffouri, Tareq Y.
2013-05-01
We present an algorithm and its variants for sparse signal recovery from a small number of measurements in a distribution-agnostic manner. The proposed algorithm finds a Bayesian estimate of the sparse signal to be recovered while remaining indifferent to the actual distribution of its non-zero elements. Termed Support Agnostic Bayesian Matching Pursuit (SABMP), the algorithm also has the capability of refining the estimates of the signal and required parameters in the absence of the exact parameter values. The algorithm's inherent agnosticism to the distribution of the data grants it the flexibility to adapt to several related problems. Specifically, we present two important extensions: one handles the recovery of sparse signals having block structures, while the other handles multiple measurement vectors to jointly estimate the related unknown signals. We conduct extensive experiments to show that SABMP and its variants outperform most state-of-the-art algorithms, and at low computational expense. © 2013 IEEE.
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm basically provides an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
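The MCL process alternates two operations on a column-stochastic matrix: expansion (matrix powering) and inflation (elementwise powering followed by column renormalization). A minimal NumPy sketch of that process, with illustrative parameter values, not van Dongen's implementation:

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iterations=50):
    """Minimal sketch of the Markov Cluster (MCL) process: alternate
    expansion (matrix power) and inflation (elementwise power followed
    by column renormalization) on a column-stochastic matrix."""
    # Add self-loops and normalize columns to get a stochastic matrix.
    m = adjacency + np.eye(adjacency.shape[0])
    m = m / m.sum(axis=0)
    for _ in range(iterations):
        m = np.linalg.matrix_power(m, expansion)  # expansion
        m = m ** inflation                        # inflation
        m = m / m.sum(axis=0)                     # renormalize columns
    return m

# Two triangles (nodes 0-2 and 3-5) joined by a single bridge edge.
a = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    a[i, j] = a[j, i] = 1.0
limit = mcl(a)
# Clusters are read off the rows that retain mass (the "attractors").
clusters = [set(np.nonzero(row > 1e-6)[0]) for row in limit if row.sum() > 1e-6]
print(clusters)
```

On this toy graph the process separates the two triangles, with the bridge edge pruned away by inflation; the inflation exponent controls cluster granularity.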
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
Animation of planning algorithms
Sun, Fan
2014-01-01
Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one of the Artificial Intelligence approaches to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudo code and doing written exercises. Students cannot clearly see how each step works and might miss some steps because of their confusion. ...
Secondary Vertex Finder Algorithm
Heer, Sebastian; The ATLAS collaboration
2017-01-01
If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...
An Ordering Linear Unification Algorithm
Institute of Scientific and Technical Information of China (English)
胡运发
1989-01-01
In this paper, we present an ordering linear unification algorithm (OLU). A new idea on the substitution of binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order, so the OLU algorithm can also be applied to infinite tree data structures, and a higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
Fractal Landscape Algorithms for Environmental Simulations
Mao, H.; Moran, S.
2014-12-01
Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of reproducing a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, and of the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape; Perlin noise and Simplex noise are used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region, and the regional impact of global warming and rising sea levels on low-lying areas. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes; hence, they can assist science education. Algorithms used to generate these natural phenomena provide scientists with a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves, but are also capable of simulating weather patterns.
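The diamond-square algorithm named above can be sketched as follows; the corner seeds, roughness value and grid size are illustrative choices, not the authors' settings.

```python
import random

def diamond_square(n, roughness=0.6, seed=42):
    """Minimal diamond-square sketch: generates a (2**n + 1)-sided square
    heightmap. Corner seeds set the large-scale shape; the random offset
    shrinks each pass, adding fractal detail at finer scales."""
    size = 2 ** n + 1
    rng = random.Random(seed)
    h = [[0.0] * size for _ in range(size)]
    # Seed the four corners.
    for r in (0, size - 1):
        for c in (0, size - 1):
            h[r][c] = rng.uniform(-1, 1)
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: centre of each square gets the corner average.
        for r in range(half, size, step):
            for c in range(half, size, step):
                avg = (h[r - half][c - half] + h[r - half][c + half] +
                       h[r + half][c - half] + h[r + half][c + half]) / 4
                h[r][c] = avg + rng.uniform(-scale, scale)
        # Square step: edge midpoints get the average of their neighbours.
        for r in range(0, size, half):
            for c in range((r + half) % step, size, step):
                s, k = 0.0, 0
                for dr, dc in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < size and 0 <= cc < size:
                        s += h[rr][cc]
                        k += 1
                h[r][c] = s / k + rng.uniform(-scale, scale)
        step, scale = half, scale * roughness
    return h

terrain = diamond_square(4)
print(len(terrain), len(terrain[0]))  # 17 17
```

Lower roughness values give smoother terrain; re-seeding the corners changes the large-scale landform while keeping the same statistical character.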
Nurdiyanto, Heri; Rahim, Robbi; Wulan, Nur
2017-12-01
Symmetric cryptography algorithms are known to have many weaknesses in the encryption process compared with asymmetric algorithms. Symmetric stream ciphers are algorithms that work by an XOR operation between plaintext and key. To improve the security of a symmetric stream cipher, we improvise by using a Triple Transposition Key, developed from the Transposition Cipher, and also use the Base64 algorithm as the final encryption step. Experiments show that the ciphertext produced is good enough and very random.
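The paper's Triple Transposition Key construction is not detailed in this abstract; the sketch below shows only the generic XOR stream-cipher core and the Base64 ending step it builds on, with an illustrative key. It is not the authors' scheme and (as the abstract itself notes of plain stream ciphers) is weak on its own.

```python
import base64

def xor_stream(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key: the core of a symmetric
    stream cipher. Applying the same function twice with the same key
    recovers the plaintext."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"secret"  # illustrative key, not from the paper
ciphertext = xor_stream(plaintext, key)
encoded = base64.b64encode(ciphertext)  # Base64 as the final step
print(xor_stream(ciphertext, key) == plaintext)  # True
```

The paper's contribution is to derive the key material through a triple transposition before this XOR stage, so that the effective keystream is harder to recover.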
Delimata, Paweł; Marszał-Paszek, Barbara; Moshkov, Mikhail; Paszek, Piotr; Skowron, Andrzej; Suraj, Zbigniew
2010-01-01
the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.
A New Algorithm for System of Integral Equations
Directory of Open Access Journals (Sweden)
Abdujabar Rasulov
2014-01-01
Full Text Available We develop a new algorithm to solve systems of integral equations. In this new method there is no need to use matrix weights; because of this, we reduce the computational complexity considerably. Using the new algorithm it is also possible to solve an initial boundary value problem for a system of parabolic equations. To verify its efficiency, the results of computational experiments are given.
Effectiveness of firefly algorithm based neural network in time series ...
African Journals Online (AJOL)
Effectiveness of firefly algorithm based neural network in time series forecasting. ... In the experiments, three well known time series were used to evaluate the performance. Results obtained were compared with ... Keywords: Time series, Artificial Neural Network, Firefly Algorithm, Particle Swarm Optimization, Overfitting ...
Disrupting the Dissertation: Linked Data, Enhanced Publication and Algorithmic Culture
Tracy, Frances; Carmichael, Patrick
2017-01-01
This article explores how the three aspects of Striphas' notion of algorithmic culture (information, crowds and algorithms) might influence and potentially disrupt established educational practices. We draw on our experience of introducing semantic web and linked data technologies into higher education settings, focussing on extended student…
A propositional CONEstrip algorithm
E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)
2014-01-01
We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...
Indian Academy of Sciences (India)
Slide excerpts: shortest path problems, navigating a road network between cities. ... Computing connectivities between all pairs of vertices: a good algorithm with respect to both space and time to compute the exact solution. ...
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
de Casteljau's Algorithm Revisited
DEFF Research Database (Denmark)
Gravesen, Jens
1998-01-01
It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
Algorithms in ambient intelligence
Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.
2005-01-01
We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of
General Algorithm (High level)
Indian Academy of Sciences (India)
Iteratively: use the Tightness Property to remove points of P1,...,Pi; use random sampling to get a random sample (of enough points) from the next largest cluster, Pi+1; use the Random Sampling Procedure to approximate ci+1 using the ...
Comprehensive eye evaluation algorithm
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Optimal Quadratic Programming Algorithms
Dostal, Zdenek
2009-01-01
Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers
A hybrid artificial bee colony algorithm for numerical function optimization
Alqattan, Zakaria N.; Abdullah, Rosni
2015-02-01
Artificial Bee Colony (ABC) is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it one of the most competitive algorithms in the area of optimization, alongside other search algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the ABC's local search process and its bee-movement (solution improvement) equation still have some weaknesses: the ABC is good at avoiding being trapped at local optima, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that HPABC can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
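The baseline solution-improvement move that HPABC modifies is the standard ABC neighbour move: perturb one coordinate of a solution towards or away from a randomly chosen partner, v_j = x_j + φ·(x_j − x_partner,j) with φ ~ U(−1, 1), keeping the candidate only if it improves. A minimal sketch of that baseline (the original ABC move applied to the sphere function, not the HPABC particle-movement variant; population size and iteration count are illustrative):

```python
import random

def abc_candidate(x, partner, rng=random):
    """Standard ABC neighbour move: perturb one randomly chosen
    dimension of x towards/away from a partner solution,
    v_j = x_j + phi * (x_j - partner_j), phi ~ U(-1, 1)."""
    j = rng.randrange(len(x))
    v = list(x)
    v[j] = x[j] + rng.uniform(-1, 1) * (x[j] - partner[j])
    return v

def sphere(x):
    """Classic benchmark: f(x) = sum of squares, minimum 0 at origin."""
    return sum(xi * xi for xi in x)

rng = random.Random(0)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(2000):
    for i, x in enumerate(pop):
        partner = pop[rng.randrange(len(pop))]
        v = abc_candidate(x, partner, rng)
        if sphere(v) < sphere(x):  # greedy selection
            pop[i] = v
best = min(pop, key=sphere)
print(sphere(best))
```

HPABC replaces this perturbation with a PSO-style particle movement; the greedy selection and population structure stay the same.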
Speckle imaging algorithms for planetary imaging
Energy Technology Data Exchange (ETDEWEB)
Johansson, E. [Lawrence Livermore National Lab., CA (United States)
1994-11-15
I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
Detection of Illegitimate Emails using Boosting Algorithm
DEFF Research Database (Denmark)
Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock
2011-01-01
In this paper, we report on experiments to detect illegitimate emails using a boosting algorithm. We call an email illegitimate if it is not useful for the receiver or for society. We have divided the problem into two major areas of illegitimate email detection: suspicious email detection...... and spam email detection. For our desired task, we have applied a boosting technique. With the use of boosting we can achieve high accuracy with traditional classification algorithms. When using boosting one has to choose a suitable weak learner as well as the number of boosting iterations. In this paper, we......
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
RES: Regularized Stochastic BFGS Algorithm
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
An Agent-Based Co-Evolutionary Multi-Objective Algorithm for Portfolio Optimization
Directory of Open Access Journals (Sweden)
Rafał Dreżewski
2017-08-01
Full Text Available Algorithms based on the process of natural evolution are widely used to solve multi-objective optimization problems. In this paper we propose the agent-based co-evolutionary algorithm for multi-objective portfolio optimization. The proposed technique is compared experimentally to the genetic algorithm, co-evolutionary algorithm and a more classical approach—the trend-following algorithm. During the experiments historical data from the Warsaw Stock Exchange is used in order to assess the performance of the compared algorithms. Finally, we draw some conclusions from these experiments, showing the strong and weak points of all the techniques.
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network
Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao
2018-03-01
Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the AGA-RBF algorithm and its solving steps are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with the AGA and the LMS algorithm, respectively. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, a high running rate and strong generalization ability.
A Double Evolutionary Pool Memetic Algorithm for Examination Timetabling Problems
Directory of Open Access Journals (Sweden)
Yu Lei
2014-01-01
Full Text Available A double evolutionary pool memetic algorithm is proposed to solve the examination timetabling problem. To improve the performance of the proposed algorithm, two evolutionary pools, that is, the main evolutionary pool and the secondary evolutionary pool, are employed. The genetic operators have been specially designed to fit the examination timetabling problem. A simplified version of the simulated annealing strategy is designed to speed the convergence of the algorithm. A clonal mechanism is introduced to preserve population diversity. Extensive experiments carried out on 12 benchmark examination timetabling instances show that the proposed algorithm is able to produce promising results for the uncapacitated examination timetabling problem.
A Dynamic Fuzzy Cluster Algorithm for Time Series
Directory of Open Access Journals (Sweden)
Min Ji
2013-01-01
clustering time series by introducing the definition of key point and improving FCM algorithm. The proposed algorithm works by determining those time series whose class labels are vague and further partitions them into different clusters over time. The main advantage of this approach compared with other existing algorithms is that the property of some time series belonging to different clusters over time can be partially revealed. Results from simulation-based experiments on geographical data demonstrate the excellent performance and the desired results have been obtained. The proposed algorithm can be applied to solve other clustering problems in data mining.
Image processing algorithm for robot tracking in reactor vessel
International Nuclear Information System (INIS)
Kim, Tae Won; Choi, Young Soo; Lee, Sung Uk; Jeong, Kyung Min; Kim, Nam Kyun
2011-01-01
In this paper, we propose an image processing algorithm to find the position of an underwater robot in a reactor vessel. The proposed algorithm combines a modified SURF (Speeded Up Robust Features) method based on Mean-Shift with the color-based tracking algorithm CAMSHIFT (Continuously Adaptive Mean Shift). Noise filtering using a luminosity blend method and color clipping are applied as preprocessing. The initial tracking area for CAMSHIFT is determined using the modified SURF; the contour and corner points are then extracted in the area of the target tracked by the CAMSHIFT method. Experiments were performed on a reactor vessel mockup and verified for use in controlling the robot by visual tracking.
An efficient and fast detection algorithm for multimode FBG sensing
DEFF Research Database (Denmark)
Ganziy, Denis; Jespersen, O.; Rose, B.
2015-01-01
We propose a novel dynamic gate algorithm (DGA) for fast and accurate peak detection. The algorithm uses a threshold-determined detection window and a centre-of-gravity algorithm with bias compensation. We analyze the wavelength fit resolution of the DGA for different values of signal-to-noise ratio and different typical peak shapes. Our simulations and experiments demonstrate that the DGA method is fast and robust, with higher stability and accuracy compared to conventional algorithms. This makes it very attractive for future implementation in sensing systems, especially those based on multimode fiber Bragg gratings.
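The gate-then-centroid idea reads as follows in a minimal sketch; the gate fraction and the synthetic triangular peak are illustrative assumptions, and the published DGA's bias compensation is omitted:

```python
def cog_peak(wavelengths, power, gate_frac=0.4):
    """Dynamic-gate sketch: keep only samples above a threshold set
    relative to the peak maximum, then take their centre of gravity."""
    top = max(power)
    gated = [(wl, p) for wl, p in zip(wavelengths, power) if p >= gate_frac * top]
    return sum(wl * p for wl, p in gated) / sum(p for _, p in gated)

# Synthetic symmetric FBG-like reflection peak centred at 1550.00 nm.
wl = [1549.90 + 0.01 * i for i in range(21)]
pw = [max(0.0, 1.0 - abs(w - 1550.0) / 0.06) for w in wl]
peak = cog_peak(wl, pw)
print(round(peak, 3))
```

Gating before the centroid is what makes the estimate robust: samples in the noise floor never enter the weighted mean.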
ProxImaL: efficient image optimization using proximal algorithms
Heide, Felix; Diamond, Steven; Nieß ner, Matthias; Ragan-Kelley, Jonathan; Heidrich, Wolfgang; Wetzstein, Gordon
2016-01-01
ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety
The Slice Algorithm For Irreducible Decomposition of Monomial Ideals
DEFF Research Database (Denmark)
Roune, Bjarke Hammersholt
2009-01-01
Irreducible decomposition of monomial ideals has an increasing number of applications from biology to pure math. This paper presents the Slice Algorithm for computing irreducible decompositions, Alexander duals and socles of monomial ideals. The paper includes experiments showing good performance...
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
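The bang-off-bang parameterization is easy to illustrate in one dimension. The rest-to-rest simplification and the numbers below are assumptions for illustration, not the RCA's actual ETP formulation:

```python
import math

def bang_off_bang(D, a_max, v_max):
    """Rest-to-rest 1-D bang-off-bang profile: accelerate at a_max for
    t_acc, coast at v_peak for t_coast, then decelerate at a_max for
    t_acc, covering distance D without exceeding a_max or v_max."""
    t_acc = min(v_max / a_max, math.sqrt(D / a_max))  # triangular if D is short
    v_peak = a_max * t_acc
    d_ramps = a_max * t_acc ** 2          # distance covered by the two ramps
    t_coast = max(0.0, (D - d_ramps) / v_peak)
    return t_acc, t_coast, v_peak

t_acc, t_coast, v_peak = bang_off_bang(D=100.0, a_max=2.0, v_max=10.0)
# Reconstruct the distance: two ramps plus the coast segment.
covered = 2.0 * (0.5 * 2.0 * t_acc ** 2) + v_peak * t_coast
print(t_acc, t_coast, covered)
```

Because the whole trajectory is described by a couple of scalars (ramp time and coast time), optimal values can be precomputed offline over a grid of collision geometries and stored in the look-up table the abstract describes.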
A Fast Algorithm of Cartographic Sounding Selection
Institute of Scientific and Technical Information of China (English)
SUI Haigang; HUA Li; ZHAO Haitao; ZHANG Yongli
2005-01-01
An effective strategy and framework that integrate the automated and manual processes for fast cartographic sounding selection are presented. Important submarine topographic features are extracted for the selection of important soundings, and an improved "influence circle" algorithm is introduced for sounding selection. For automatic configuration of the soundings distribution pattern, a special algorithm considering multiple factors is employed, and a semi-automatic method for resolving ambiguous conflicts is described. On the basis of these algorithms and strategies, a system named HGIS for fast cartographic sounding selection was developed and applied at the Chinese Marine Safety Administration Bureau (CMSAB). The application experiments show that the system is effective and reliable. Finally, some conclusions and future work are given.
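The influence-circle idea can be sketched as a greedy filter; the shallow-first priority, the radius, and the sample soundings are assumptions for illustration, not the improved algorithm of the paper:

```python
import math

def select_soundings(soundings, radius):
    """Greedy influence-circle sketch: consider the shallowest (most
    navigationally critical) soundings first; each selected sounding
    suppresses all neighbours inside its influence circle."""
    selected = []
    for x, y, depth in sorted(soundings, key=lambda s: s[2]):  # shallow first
        if all(math.hypot(x - sx, y - sy) > radius for sx, sy, _ in selected):
            selected.append((x, y, depth))
    return selected

# (x, y, depth) samples; deeper points near a kept sounding are suppressed.
pts = [(0, 0, 2.0), (1, 0, 5.0), (10, 0, 3.0), (10, 1, 8.0), (20, 0, 4.0)]
kept = select_soundings(pts, radius=3.0)
print(kept)
```

Shallow soundings win inside each circle, which matches the chart-safety convention that the shallowest depth in an area must survive selection.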
An improved genetic algorithm with dynamic topology
International Nuclear Information System (INIS)
Cai Kai-Quan; Tang Yan-Wu; Zhang Xue-Jun; Guan Xiang-Min
2016-01-01
The genetic algorithm (GA) is a nature-inspired evolutionary algorithm that finds optima in a search space via the interaction of individuals. Recently, researchers demonstrated that the interaction topology plays an important role in information exchange among individuals of an evolutionary algorithm. In this paper, we investigate the effect of different network topologies adopted to represent the interaction structures. It is found that a GA with a high-density topology is more likely to end up with an unsatisfactory solution, whereas a low-density topology can impede convergence. Consequently, we propose an improved GA with dynamic topology, named DT-GA, in which the topology structure varies dynamically along with the fitness evolution. Several experiments executed with 15 well-known test functions have illustrated that DT-GA outperforms the other GAs tested by balancing convergence speed and optimum quality. Our work may have implications for the combination of complex networks and computational intelligence. (paper)
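A minimal sketch of the dynamic-topology idea follows, with a ring interaction graph whose link density k changes with fitness progress. The real-coded operators, the adaptation rule, and every parameter are illustrative assumptions, not DT-GA's actual design:

```python
import random

random.seed(3)

def fitness(x):
    """Sphere benchmark; lower is better."""
    return sum(v * v for v in x)

def mate(pop, i, k):
    """Pick the best partner among the k-nearest ring neighbours of i."""
    n = len(pop)
    neigh = [pop[(i + d) % n] for d in range(-k, k + 1) if d]
    return min(neigh, key=fitness)

def dt_ga(n=20, dim=3, gens=60):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    k, last_best = 1, float("inf")
    for _ in range(gens):
        new = []
        for i, parent in enumerate(pop):
            partner = mate(pop, i, k)
            child = [(a + b) / 2 + random.gauss(0, 0.1)
                     for a, b in zip(parent, partner)]
            new.append(min([parent, child], key=fitness))  # per-slot elitism
        pop = new
        cur_best = fitness(min(pop, key=fitness))
        # Toy adaptation: densify links while improving, sparsify on stagnation.
        k = min(k + 1, n // 2) if cur_best < last_best else max(1, k - 1)
        last_best = cur_best
    return min(pop, key=fitness)

best = dt_ga()
print(fitness(best))
```

The point of the sketch is only the moving part: the mating neighbourhood k, i.e. the interaction-graph density, is itself a state variable driven by the fitness trajectory rather than a fixed design choice.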
Cooperated Bayesian algorithm for distributed scheduling problem
Institute of Scientific and Technical Information of China (English)
QIANG Lei; XIAO Tian-yuan
2006-01-01
This paper presents a new distributed Bayesian optimization algorithm (BOA) to overcome the efficiency problem when solving NP-hard scheduling problems. The proposed approach integrates BOA into a co-evolutionary schema, which builds up a concurrent computing environment. A new search strategy is also introduced for the local optimization process. It integrates a reinforcement learning (RL) mechanism into the BOA search processes, and then uses the mixed probability information from BOA (post-probability) and RL (pre-probability) to enhance the cooperation between different local controllers, which improves the optimization ability of the algorithm. The experiments show that the new algorithm does better in both optimization (2.2%) and convergence (11.7%) compared with the classic BOA.
Nonequilibrium molecular dynamics theory, algorithms and applications
Todd, Billy D
2017-01-01
Written by two specialists with over twenty-five years of experience in the field, this valuable text presents a wide range of topics within the growing field of nonequilibrium molecular dynamics (NEMD). It introduces theories which are fundamental to the field - namely, nonequilibrium statistical mechanics and nonequilibrium thermodynamics - and provides state-of-the-art algorithms and advice for designing reliable NEMD code, as well as examining applications for both atomic and molecular fluids. It discusses homogenous and inhomogenous flows and pays considerable attention to highly confined fluids, such as nanofluidics. In addition to statistical mechanics and thermodynamics, the book covers the themes of temperature and thermodynamic fluxes and their computation, the theory and algorithms for homogenous shear and elongational flows, response theory and its applications, heat and mass transport algorithms, applications in molecular rheology, highly confined fluids (nanofluidics), the phenomenon of slip and...
[Algorithms for treatment of complex hand injuries].
Pillukat, T; Prommersberger, K-J
2011-07-01
The primary treatment strongly influences the course and prognosis of hand injuries. Complex injuries which compromise functional recovery are especially challenging. Despite an apparently unlimited number of injury patterns, it is possible to develop strategies which facilitate a standardized approach to operative treatment. In this situation algorithms can be important guidelines for a rational approach. The following algorithms have proven themselves in our own experience with the treatment of complex injuries of the hand. They were modified according to the current literature and refer to prehospital care, emergency room management, basic strategy in general, and in detail the reconstruction of bone and joints, vessels, nerves, tendons and soft tissue coverage. Algorithms facilitate the treatment of severe hand injuries: by applying simple yes/no decisions, complex injury patterns are split into distinct partial problems which can be managed step by step.
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Treatment Algorithm for Ameloblastoma
Directory of Open Access Journals (Sweden)
Madhumati Singh
2014-01-01
Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006); it constitutes 1–3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009; Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection and reconstruction with a fibula graft, and radical resection and reconstruction with a rib graft, are reviewed together with their recurrence rates in a study of five cases.
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity diet system however triggers not only the classic discussion of the reach-distinctiveness balance for PSM, but also shows that 'diversity' is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking; in particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with a DAL (Data Access Library) allowing C++, Java and Python clients to access its information in a distributed environment. Some information has quite a complicated structure, so its extraction requires writing special algorithms. These algorithms are available in C++ and have been partially reimplemented in Java. The goal of the projec...
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
Stochastic split determinant algorithms
International Nuclear Information System (INIS)
Horvatha, Ivan
2000-01-01
I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed
Quantum gate decomposition algorithms.
Energy Technology Data Exchange (ETDEWEB)
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, as composed of a sequence of generic elementary "gates".
KAM Tori Construction Algorithms
Wiesel, W.
In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
Irregular Applications: Architectures & Algorithms
Energy Technology Data Exchange (ETDEWEB)
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Flocking algorithm for autonomous flying robots.
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
2014-06-01
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
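The viscous-friction-like alignment term described above can be sketched in isolation; the coupling constant, the fixed neighbour graph, and the two-dimensional toy velocities are assumptions for illustration (the paper's full model also includes delays, sensor noise, and inertia):

```python
def align_step(velocities, neighbours, c_frict=0.2):
    """One alignment update: each agent relaxes its velocity toward the
    mean velocity of its neighbours (a viscous-friction-like term)."""
    new_v = []
    for i, v in enumerate(velocities):
        nb = neighbours[i]
        if nb:
            mean = [sum(velocities[j][d] for j in nb) / len(nb)
                    for d in range(2)]
            v = [v[d] + c_frict * (mean[d] - v[d]) for d in range(2)]
        new_v.append(list(v))
    return new_v

# Three fully connected agents with initially disagreeing headings.
vel = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
nbr = [[1, 2], [0, 2], [0, 1]]
for _ in range(50):
    vel = align_step(vel, nbr)
print([[round(x, 3) for x in v] for v in vel])
```

All velocities converge to the common mean heading, which is why the term damps the velocity-disagreement instabilities that noise and delay would otherwise amplify.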
ERGC: an efficient referential genome compression algorithm.
Saha, Subrata; Rajasekaran, Sanguthevar
2015-11-01
Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip.
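The flavour of reference-based compression can be shown with a tiny greedy copy/literal encoder. This is not ERGC's actual scheme, just a generic sketch of the idea of encoding a target genome as references into a similar genome plus literals:

```python
def ref_compress(target, reference, min_copy=4):
    """Greedy encoder sketch: emit ("copy", pos, length) for substrings
    found in the reference and ("lit", base) otherwise (brute-force
    matching for clarity; real tools use indexes such as hash tables)."""
    ops, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        for j in range(len(reference)):  # longest match starting anywhere
            k = 0
            while (j + k < len(reference) and i + k < len(target)
                   and reference[j + k] == target[i + k]):
                k += 1
            if k > best_len:
                best_pos, best_len = j, k
        if best_len >= min_copy:          # only worth copying long runs
            ops.append(("copy", best_pos, best_len))
            i += best_len
        else:
            ops.append(("lit", target[i]))
            i += 1
    return ops

def ref_decompress(ops, reference):
    out = []
    for op in ops:
        if op[0] == "copy":
            _, pos, ln = op
            out.append(reference[pos:pos + ln])
        else:
            out.append(op[1])
    return "".join(out)

ref = "ACGTACGTGGTTAACC"
tgt = "ACGTACGTCGGTTAACC"  # one inserted base relative to the reference
ops = ref_compress(tgt, ref)
print(ops)
```

A 17-base target collapses to two copy operations and a single literal, which is the core reason referential compressors beat generic ones on closely related genomes.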
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
NEUTRON ALGORITHM VERIFICATION TESTING
International Nuclear Information System (INIS)
COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST
2000-01-01
Active well coincidence counter assays have been performed on uranium metal highly enriched in 235U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.
Improved artificial bee colony algorithm based gravity matching navigation method.
Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang
2014-07-18
The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to apply it to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. First, proper modifications are proposed to improve the performance of the basic ABC algorithm. Second, a new search mechanism is presented which is based on an improved ABC algorithm using external speed information. Finally, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and results show that the matching rate of the method is high enough to obtain a precise matching position.
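For orientation, the basic ABC search mechanism that the paper modifies looks roughly like this (employed-bee and scout phases only; the onlooker phase, the paper's speed-information extension, and all parameters are omitted or assumed):

```python
import random

random.seed(7)

def abc_minimize(f, dim=2, n_food=10, limit=20, cycles=200, lo=-5.0, hi=5.0):
    """Basic ABC sketch: a food source is perturbed along one dimension
    relative to a random partner, v_ij = x_ij + phi * (x_ij - x_kj),
    and abandoned (scout phase) after `limit` failed improvements."""
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food
    best = min(foods, key=f)
    for _ in range(cycles):
        for i in range(n_food):
            k = random.choice([j for j in range(n_food) if j != i])
            j = random.randrange(dim)
            v = list(foods[i])
            v[j] += random.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
            v[j] = min(hi, max(lo, v[j]))          # keep inside the bounds
            if f(v) < f(foods[i]):                  # greedy replacement
                foods[i], trials[i] = v, 0
            else:
                trials[i] += 1
            if trials[i] > limit:                   # scout: restart source
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        best = min(foods + [best], key=f)
    return best

sphere = lambda x: sum(v * v for v in x)
best = abc_minimize(sphere)
print(sphere(best))
```

The search equation's step size shrinks automatically as sources converge (the x_ij - x_kj difference contracts), which is both the strength of basic ABC and the accuracy limitation the paper addresses with external speed information.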
Human resource recommendation algorithm based on ensemble learning and Spark
Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie
2017-08-01
Aiming at the problem of “information overload” in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviour of job seekers as well as job features in a real business setting. First, the algorithm uses two ensemble learning methods, Bagging and Boosting. The outputs from both learning methods are then merged to form a user interest model, from which job recommendations can be extracted for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments has been carried out and analysed. The proposed algorithm achieves significant improvements in accuracy, recall rate and coverage compared with recommendation algorithms such as UserCF and ItemCF.
Dynamic Vehicle Routing Using an Improved Variable Neighborhood Search Algorithm
Directory of Open Access Journals (Sweden)
Yingcheng Xu
2013-01-01
In order to effectively solve the dynamic vehicle routing problem with time windows, a mathematical model is established and an improved variable neighborhood search algorithm is proposed. In the algorithm, customer allocation and route planning for the initial solution are completed by a clustering method. Hybrid insert and exchange operators are used in the shaking process, a later optimization process is applied to improve the solution, and the best-improvement strategy is adopted; together these allow the algorithm to achieve a better balance between solution quality and running time. The idea of simulated annealing is introduced to control the acceptance of new solutions, and the influences of arrival time, geographical distribution, and time window range on route selection are analyzed. In the experiments, the proposed algorithm is applied to DVRP instances of different sizes. Comparison of the results with other algorithms shows that the algorithm is effective and feasible.
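The insert/exchange shaking operators at the heart of such a VNS can be sketched on a tiny single-vehicle tour. The line-shaped instance, the strict-improvement acceptance (instead of the paper's annealing rule), and the absence of time windows are all simplifying assumptions:

```python
import random

random.seed(5)

POS = [0, 10, 20, 30, 40]  # node 0 is the depot; customers sit on a line

def dist(a, b):
    return abs(POS[a] - POS[b])

def route_len(route):
    return sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def insert_move(route):
    """Shaking operator: relocate one customer to another position."""
    r = list(route)
    i, j = random.sample(range(1, len(r) - 1), 2)  # keep depot ends fixed
    r.insert(j, r.pop(i))
    return r

def exchange_move(route):
    """Shaking operator: swap two customers."""
    r = list(route)
    i, j = random.sample(range(1, len(r) - 1), 2)
    r[i], r[j] = r[j], r[i]
    return r

def vns(route, iters=400):
    """Alternate the two neighbourhoods, keeping only improvements."""
    best = list(route)
    for t in range(iters):
        cand = (insert_move if t % 2 == 0 else exchange_move)(best)
        if route_len(cand) < route_len(best):
            best = cand
    return best

start = [0, 3, 1, 4, 2, 0]
best = vns(start)
print(route_len(start), route_len(best))
```

On this instance the optimum tour length is 80 (out and back along the line), and alternating the two neighbourhoods reaches it from the scrambled start.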
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
KERNEL MAD ALGORITHM FOR RELATIVE RADIOMETRIC NORMALIZATION
Directory of Open Access Journals (Sweden)
Y. Bai
2016-06-01
The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments with both the linear CCA and the KCCA versions of the MAD algorithm, using Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization and describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
Advanced metaheuristic algorithms for laser optimization
International Nuclear Information System (INIS)
Tomizawa, H.
2010-01-01
A laser is one of the most important experimental tools. In the synchrotron radiation field, lasers are widely used for experiments with pump-probe techniques. Especially for X-ray FELs, a laser plays important roles as a seed light source or as the photocathode-illuminating light source used to generate a high-brightness electron bunch. Control of laser pulse characteristics is required for many kinds of experiments, yet the laser must be tuned and customized for each requirement by laser experts. Automatic laser tuning therefore needs to be realized with sophisticated algorithms, and metaheuristic algorithms are useful candidates for finding a solution that is as close to the optimum as acceptable. A metaheuristic laser tuning system is expected to save human resources and time in laser preparation. I present successful results with a metaheuristic approach based on a genetic algorithm to optimize spatial (transverse) laser profiles, and with a hill-climbing method extended with fuzzy set theory to choose one of the best laser alignments automatically for each experimental requirement. (author)
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
THE APPROACHING TRAIN DETECTION ALGORITHM
S. V. Bibikov
2015-01-01
The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The ultimate expression of the information statistic is adjusted. We present the results of algorithm research and t...
Combinatorial optimization algorithms and complexity
Papadimitriou, Christos H
1998-01-01
This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s
Efficient GPS Position Determination Algorithms
National Research Council Canada - National Science Library
Nguyen, Thao Q
2007-01-01
... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
Algorithmic approach to diagram techniques
International Nuclear Information System (INIS)
Ponticopoulos, L.
1980-10-01
An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to give an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
Global and Local Page Replacement Algorithms on Virtual Memory Systems for Image Processing
WADA, Ben Tsutom
1985-01-01
Three virtual memory systems for image processing, differing from one another in frame allocation algorithms and page replacement algorithms, were examined experimentally with respect to their page-fault characteristics. The hypothesis that global page replacement algorithms are susceptible to thrashing held in the raster-scan experiment, while it did not in another, non-raster-scan experiment. The results of the experiments may also be useful in making parallel image processors more efficient, while they a...
To develop a universal gamut mapping algorithm
International Nuclear Information System (INIS)
Morovic, J.
1998-10-01
When a colour image from one colour reproduction medium (e.g. nature, a monitor) needs to be reproduced on another (e.g. on a monitor or in print) and these media have different colour ranges (gamuts), it is necessary to have a method for mapping between them. If such a gamut mapping algorithm can be used under a wide range of conditions, it can also be incorporated in an automated colour reproduction system and considered to be in some sense universal. In terms of preliminary work, a colour reproduction system was implemented, for which a new printer characterisation model (including grey-scale correction) was developed. Methods were also developed for calculating gamut boundary descriptors and for calculating gamut boundaries along given lines from them. The gamut mapping solution proposed in this thesis is a gamut compression algorithm developed with the aim of being accurate and universally applicable. It was arrived at by way of an evolutionary gamut mapping development strategy for the purposes of which five test images were reproduced between a CRT and printed media obtained using an inkjet printer. Initially, a number of previously published algorithms were chosen and psychophysically evaluated whereby an important characteristic of this evaluation was that it also considered the performance of algorithms for individual colour regions within the test images used. New algorithms were then developed on their basis, subsequently evaluated and this process was repeated once more. In this series of experiments the new GCUSP algorithm, which consists of a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp on the lightness axis, gave the most accurate and stable performance overall. The results of these experiments were also useful for improving the understanding of some gamut mapping factors - in particular gamut difference. In addition to looking at accuracy, the pleasantness of reproductions obtained
Honing process optimization algorithms
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.
Opposite Degree Algorithm and Its Applications
Directory of Open Access Journals (Sweden)
Xiao-Guang Yue
2015-12-01
Full Text Available The opposite degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from neural network design, genetic algorithms, and clustering analysis algorithms. The OD algorithm is divided into two sub-algorithms, namely the opposite degree numerical computation (OD-NC) algorithm and the opposite degree classification computation (OD-CC) algorithm.
Acoustic simulation in architecture with parallel algorithm
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
To address the complexity of architectural environments and the real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in a scene is solved with this method. The impulse responses between sources and receivers in each frequency segment, calculated with multiple processes, are then combined into the whole frequency response. The numerical experiment shows that the parallel algorithm can improve the acoustic simulation efficiency for complex scenes.
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system) work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(nlogn). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and area filters. Examples are presented to demonstrate the validity and superiority on efficiency of this algorithm over the naive algorithm.
Recognition algorithms in knot theory
International Nuclear Information System (INIS)
Dynnikov, I A
2003-01-01
In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods to make files more secure. One of those methods is cryptography. Cryptography secures a file by writing a hidden code that covers the original file; anyone not involved in the cryptography cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem: a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext length.
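Only the symmetric half of the hybrid scheme is compact enough to sketch here: the following is a standard 32-round TEA implementation operating on a 64-bit block under a 128-bit key. The key and block values are arbitrary test data, and the LUC key-encryption step is not shown:

```python
def tea_encrypt(block, key, rounds=32):
    """Encrypt a 64-bit block (two 32-bit words) with a 128-bit key."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    delta, total, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(rounds):
        total = (total + delta) & mask
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & mask
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & mask
    return v0, v1

def tea_decrypt(block, key, rounds=32):
    """Invert tea_encrypt by running the round schedule backwards."""
    v0, v1 = block
    k0, k1, k2, k3 = key
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    total = (delta * rounds) & mask
    for _ in range(rounds):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & mask
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & mask
        total = (total - delta) & mask
    return v0, v1

key = (0x11223344, 0x55667788, 0x99AABBCC, 0xDDEEFF00)  # arbitrary test key
pt = (0xDEADBEEF, 0x01234567)                           # arbitrary test block
ct = tea_encrypt(pt, key)
```

In the hybrid design, `key` would itself be encrypted with the recipient's LUC public key before transmission, so only the symmetric key (not the whole file) pays the cost of asymmetric encryption.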
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by an embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
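The strict-priority selection described above can be sketched as a greedy admission pass over a priority-ordered goal list; a higher-priority goal is never displaced by a lower-priority one. The goal names, the single scalar resource, and the capacity below are invented for illustration and are not part of the AVA v2 interface:

```python
def select_goals(goals, capacity):
    """Greedy strict-priority selection: admit each goal in descending
    priority order if its resource demand still fits within the
    remaining capacity; lower-priority goals never pre-empt."""
    selected, remaining = [], capacity
    for goal in sorted(goals, key=lambda g: g["priority"], reverse=True):
        if goal["demand"] <= remaining:
            selected.append(goal["name"])
            remaining -= goal["demand"]
    return selected

goals = [
    {"name": "image_target_A", "priority": 3, "demand": 5},
    {"name": "downlink_B", "priority": 5, "demand": 7},
    {"name": "image_target_C", "priority": 1, "demand": 4},
]
chosen = select_goals(goals, capacity=12)
print(chosen)  # → ['downlink_B', 'image_target_A']
```

Because each pass is a single sort plus one sweep, re-running it whenever a goal is added, removed, or updated is cheap, which is what makes the "just-in-time" re-selection practical on a constrained embedded processor.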
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
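Since the algorithmic quantities involved are incomputable, the paper approximates them via data compression. A closely related, well-established computable measure is the normalized compression distance (NCD); the sketch below uses zlib and illustrates the general compression-based approach, not the paper's exact divergence measure:

```python
import zlib

def clen(s: bytes) -> int:
    """Compressed length: a practical stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings.
    Near 0 for very similar inputs, near 1 for unrelated ones."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

near1 = b"the quick brown fox jumps over the lazy dog " * 20
near2 = b"the quick brown fox jumps over the lazy dog " * 19 + b"a lazy dog "
far = b"0123456789 zyxwvutsrqponmlkjihgfedcba " * 20
```

Measures of this family are what make applications such as authorship attribution work: two texts by the same author tend to compress better against each other than against unrelated material.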
Fatigue evaluation algorithms: Review
Energy Technology Data Exchange (ETDEWEB)
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events in ply level. Residual strength is incorporated as fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)
NRGC: a novel referential genome compression algorithm.
Saha, Subrata; Rajasekaran, Sanguthevar
2016-11-15
Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost effective but also can be done in laboratory environment. The state-of-the-art sequence assemblers then construct the whole genomic sequence from these reads. Current cutting edge computing technology makes it possible to build genomic sequences from the billions of reads within a minimal cost and time. As a consequence, we see an explosion of biological sequences in recent times. In turn, the cost of storing the sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context. We are in need of suitable data compression algorithms that can exploit the inherent structure of biological sequences. Although standard data compression algorithms are prevalent, they are not suitable to compress biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to effectively and efficiently compress the genomic sequences. We have done rigorous experiments to evaluate NRGC by taking a set of real human genomes. The simulation results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most of the cases. Compression and decompression times are also very impressive. The implementations are freely available for non-commercial purposes. They can be downloaded from: http://www.engr.uconn.edu/~rajasek/NRGC.zip CONTACT: rajasek@engr.uconn.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Comparison analysis for classification algorithm in data mining and the study of model use
Chen, Junde; Zhang, Defu
2018-04-01
As a key technique in data mining, classification algorithms have received extensive attention. Through an experiment with classification algorithms on UCI data sets, we give a comparison analysis method for the different algorithms, using statistical tests. Building on that, an adaptive diagnosis model for preventing electricity stealing and leakage is given as a specific case in the paper.
Robust and accurate detection algorithm for multimode polymer optical FBG sensor system
DEFF Research Database (Denmark)
Ganziy, Denis; Jespersen, O.; Rose, B.
2015-01-01
We propose a novel dynamic gate algorithm (DGA) for robust and fast peak detection. The algorithm uses a threshold determined detection window and center of gravity algorithm with bias compensation. Our experiment demonstrates that the DGA method is fast and robust with better stability and accur...
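A threshold-gated center-of-gravity peak estimate, of the kind the DGA builds on, can be sketched as follows. The synthetic FBG-like reflection peak and all parameter values are hypothetical, not taken from the paper:

```python
import math

def centroid_peak(wavelengths, amplitudes, threshold_ratio=0.5):
    """Locate a peak via a threshold-gated center-of-gravity estimate:
    only samples at or above threshold_ratio * max contribute to the
    centroid, which suppresses the influence of baseline noise."""
    gate = threshold_ratio * max(amplitudes)
    num = sum(w * a for w, a in zip(wavelengths, amplitudes) if a >= gate)
    den = sum(a for a in amplitudes if a >= gate)
    return num / den

# Synthetic Gaussian reflection peak centered at 850 nm (hypothetical values)
wl = [850 + 0.01 * (i - 50) for i in range(101)]      # nm grid
amp = [math.exp(-((w - 850.0) / 0.1) ** 2) for w in wl]
center = centroid_peak(wl, amp)
```

Gating before the centroid is what gives the approach its robustness: samples far from the peak, where the signal-to-noise ratio is worst, are simply excluded from the estimate.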
Demonstration of essentiality of entanglement in a Deutsch-like quantum algorithm
Huang, He-Liang; Goswami, Ashutosh K.; Bao, Wan-Su; Panigrahi, Prasanta K.
2018-06-01
Quantum algorithms can be used to efficiently solve certain classically intractable problems by exploiting quantum parallelism. However, the effectiveness of quantum entanglement in quantum computing remains a question of debate. This study presents a new quantum algorithm showing that entanglement can provide advantages over both classical algorithms and quantum algorithms without entanglement. Experiments are implemented to demonstrate the proposed algorithm using superconducting qubits. Results show the viability of the algorithm and suggest that entanglement is essential in obtaining quantum speedup for certain problems in quantum computing. The study provides reliable and clear guidance for developing useful quantum algorithms.
An Approximation Algorithm for the Facility Location Problem with Lexicographic Minimax Objective
Directory of Open Access Journals (Sweden)
Ľuboš Buzna
2014-01-01
Full Text Available We present a new approximation algorithm to the discrete facility location problem providing solutions that are close to the lexicographic minimax optimum. The lexicographic minimax optimum is a concept that allows to find equitable location of facilities serving a large number of customers. The algorithm is independent of general purpose solvers and instead uses algorithms originally designed to solve the p-median problem. By numerical experiments, we demonstrate that our algorithm allows increasing the size of solvable problems and provides high-quality solutions. The algorithm found an optimal solution for all tested instances where we could compare the results with the exact algorithm.
Zhang, Zheng
2017-10-01
Radio direction finding systems are based on digital signal processing algorithms, which make such systems capable of locating and tracking signals. The performance of radio direction finding therefore depends significantly on the effectiveness of those algorithms. Direction of Arrival (DOA) algorithms estimate the number of plane waves incident on an antenna array and their angles of incidence. This manuscript investigates the implementation of the MUSIC DOA algorithm on a uniform linear array in the presence of white noise. The experimental results show that the MUSIC algorithm estimated the direction well.
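A textbook MUSIC pseudospectrum for a uniform linear array, of the kind investigated here, can be sketched with NumPy as follows. The array size, noise level, snapshot count, and angle grid are illustrative choices, not taken from the manuscript:

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 181)):
    """MUSIC pseudospectrum for a uniform linear array.
    X: snapshot matrix (elements x snapshots); d: spacing in wavelengths."""
    R = X @ X.conj().T / X.shape[1]                # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)           # ascending eigenvalues
    En = eigvecs[:, : X.shape[0] - n_sources]      # noise subspace
    m = np.arange(X.shape[0])
    p = []
    for a in angles:
        sv = np.exp(-2j * np.pi * d * m * np.sin(np.radians(a)))
        # Peaks occur where the steering vector is orthogonal to En
        p.append(1.0 / np.linalg.norm(En.conj().T @ sv) ** 2)
    return angles, np.array(p)

# Simulate one plane wave at 20 degrees on an 8-element half-wavelength ULA
rng = np.random.default_rng(0)
m = np.arange(8)
steer = np.exp(-2j * np.pi * 0.5 * m * np.sin(np.radians(20.0)))
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.1 * (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
X = np.outer(steer, s) + noise
angles, p = music_spectrum(X, n_sources=1)
est = angles[np.argmax(p)]
```

The eigendecomposition splits the covariance into signal and noise subspaces; because the true steering vector lies in the signal subspace, the denominator collapses near the true angle and the pseudospectrum peaks there.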
An Extension of the Fuzzy Possibilistic Clustering Algorithm Using Type-2 Fuzzy Logic Techniques
Directory of Open Access Journals (Sweden)
Elid Rubio
2017-01-01
Full Text Available In this work an extension of the Fuzzy Possibilistic C-Means (FPCM algorithm using Type-2 Fuzzy Logic Techniques is presented, and this is done in order to improve the efficiency of FPCM algorithm. With the purpose of observing the performance of the proposal against the Interval Type-2 Fuzzy C-Means algorithm, several experiments were made using both algorithms with well-known datasets, such as Wine, WDBC, Iris Flower, Ionosphere, Abalone, and Cover type. In addition some experiments were performed using another set of test images to observe the behavior of both of the above-mentioned algorithms in image preprocessing. Some comparisons are performed between the proposed algorithm and the Interval Type-2 Fuzzy C-Means (IT2FCM algorithm to observe if the proposed approach has better performance than this algorithm.
STAR Algorithm Integration Team - Facilitating operational algorithm development
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade
2015-02-01
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.
Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P
1999-10-01
In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.
Algorithmic Reflections on Choreography
Directory of Open Access Journals (Sweden)
Pablo Ventura
2016-11-01
Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.
Evaluation of Algorithms for Compressing Hyperspectral Data
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit NASA is showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), who has an extensive heritage in HSI spectral compression and Mapping Science (MSI) for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently in the process of evaluating these compression algorithms using statistical analysis and NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.
Adaptive sampling algorithm for detection of superpoints
Institute of Scientific and Technical Information of China (English)
CHENG Guang; GONG Jian; DING Wei; WU Hua; QIANG ShiQiang
2008-01-01
The superpoints are the sources (or destinations) that connect with a great number of destinations (or sources) during a measurement time interval, so detecting superpoints in real time is very important to network security and management. Previous algorithms are not able to control memory usage and deliver the desired accuracy, so it is hard to detect superpoints on a high-speed link in real time. In this paper, we propose an adaptive sampling algorithm to detect superpoints in real time, which uses a flow sample-and-hold module to reduce the detection of non-superpoints and to improve the measurement accuracy of the superpoints. We also design a data stream structure to maintain the flow records, which compensates for flow hash collisions statistically. An adaptive process based on different sampling probabilities is used to maintain the recorded IP addresses in the limited memory. This algorithm is compared with the other algorithms by analyzing real network trace data. Experimental results and mathematical analysis show that this algorithm has the advantages of both a limited memory requirement and high measurement accuracy.
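The flow sample-and-hold idea can be sketched as follows. This toy version omits the paper's adaptive sampling probabilities and hash-collision compensation, and all addresses and parameters are invented:

```python
import random

def detect_superpoints(packets, sample_prob=0.1, threshold=50, seed=7):
    """Flow sample-and-hold: each packet from an untracked source is
    sampled into the table with probability sample_prob; once a source
    is held, every subsequent distinct destination is counted exactly."""
    rng = random.Random(seed)
    fanout = {}                          # source -> set of destinations
    for src, dst in packets:
        if src in fanout:
            fanout[src].add(dst)         # hold: count this destination
        elif rng.random() < sample_prob:
            fanout[src] = {dst}          # sample: start tracking source
    return {s for s, dsts in fanout.items() if len(dsts) >= threshold}

# Synthetic trace: one source scanning 2000 destinations amid normal traffic
packets = [("10.0.0.9", f"192.168.{i % 256}.{i // 256}") for i in range(2000)]
packets += [(f"10.0.1.{i}", "192.168.0.1") for i in range(100)]
random.Random(1).shuffle(packets)
sups = detect_superpoints(packets)
```

Sampling keeps low-fanout sources out of the table (bounding memory), while a high-fanout source is almost certain to be sampled early and then have nearly all of its distinct destinations counted.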
Parallel algorithms for testing finite state machines:Generating UIO sequences
Hierons, RM; Turker, UC
2016-01-01
This paper describes an efficient parallel algorithm that uses many-core GPUs for automatically deriving Unique Input Output sequences (UIOs) from Finite State Machines. The proposed algorithm uses the global scope of the GPU's global memory through coalesced memory access and minimises the transfer between CPU and GPU memory. The results of experiments indicate that the proposed method yields considerably better results compared to a single core UIO construction algorithm. Our algorithm is s...
Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices
Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly
2010-01-01
In this paper the authors present a heuristic algorithm for precise schedule fulfilment under city traffic conditions, taking traffic lights into account. The algorithm is proposed for a programmable logic controller (PLC), to be installed in an electric vehicle to control its motion speed in response to traffic light signals. The algorithm is tested using a real controller connected to virtual devices and functional models of real tram devices. The results of experiments show high precision of public transport schedule fulfilment using the proposed algorithm.
Bourkland, Kristin L.; Liu, Kuo-Chia
2011-01-01
The Solar Dynamics Observatory (SDO) is a NASA spacecraft designed to study the Sun. It was launched on February 11, 2010 into a geosynchronous orbit, and uses a suite of attitude sensors and actuators to finely point the spacecraft at the Sun. SDO has three science instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). SDO uses two High Gain Antennas (HGAs) to send science data to a dedicated ground station in White Sands, New Mexico. In order to meet the science data capture budget, the HGAs must be able to transmit data to the ground for a very large percentage of the time. Each HGA is a dual-axis antenna driven by stepper motors. Both antennas transmit data at all times, but only a single antenna is required in order to meet the transmission rate requirement. For portions of the year, one antenna or the other has an unobstructed view of the White Sands ground station. During other periods, however, the view from both antennas to the Earth is blocked for different portions of the day. During these times of blockage, the two HGAs take turns pointing to White Sands, with the other antenna pointing out to space. The HGAs handover White Sands transmission responsibilities to the unblocked antenna. There are two handover seasons per year, each lasting about 72 days, where the antennas hand off control every twelve hours. The non-tracking antenna slews back to the ground station by following a ground commanded trajectory and arrives approximately 5 minutes before the formerly tracking antenna slews away to point out into space. The SDO Attitude Control System (ACS) runs at 5 Hz, and the HGA Gimbal Control Electronics (GCE) run at 200 Hz. There are 40 opportunities for the gimbals to step each ACS cycle, with a hardware limitation of no more than one step every three GCE cycles. The ACS calculates the desired gimbal motion for tracking the ground station or for slewing
Congested Link Inference Algorithms in Dynamic Routing IP Network
Directory of Open Access Journals (Sweden)
Yu Chen
2017-01-01
Full Text Available The performance of current congested link inference algorithms, such as the classical CLINK algorithm, degrades noticeably in dynamic routing IP networks. To overcome this problem, based on the assumptions of the Markov property and time homogeneity, we build a Variable Structure Discrete Dynamic Bayesian (VSDDB) network as a simplified model of a dynamic routing IP network. Under the simplified VSDDB model, based on Bayesian Maximum A Posteriori (BMAP) estimation and the Rest Bayesian Network Model (RBNM), we propose an Improved CLINK (ICLINK) algorithm. Since multiple links are often congested concurrently, we also propose the CLILRS algorithm (Congested Link Inference based on Lagrangian Relaxation Subgradient) to infer the set of congested links. We validated our results with analytical, simulation, and real Internet experiments.
Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm
Directory of Open Access Journals (Sweden)
Haizhou Wu
2016-01-01
Full Text Available Symbiotic organisms search (SOS) is a new, robust, and powerful metaheuristic algorithm that simulates the symbiotic interaction strategies organisms adopt to survive and propagate in an ecosystem. In supervised learning, presenting a satisfactory and efficient training algorithm for feedforward neural networks (FNNs) is a challenging task. In this paper, SOS is employed as a new method for training FNNs. To investigate its performance, eight datasets selected from the UCI machine learning repository are used in experiments, and the results are compared across seven metaheuristic algorithms. The results show that SOS outperforms the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS achieves better accuracy than those trained by most of the compared algorithms.
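The three symbiosis phases the abstract refers to (mutualism, commensalism, parasitism) can be sketched in a minimal, self-contained form. The function below is an illustrative toy, not the paper's implementation: it minimizes a plain test function rather than an FNN training loss, and the function names, population size, and iteration counts are arbitrary choices of ours.

```python
import random

def sos_minimize(f, dim, bounds, pop_size=20, iters=200, seed=1):
    """Minimal Symbiotic Organisms Search sketch minimizing f over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda x: [min(hi, max(lo, v)) for v in x]
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        best = pop[min(range(pop_size), key=lambda i: fit[i])]
        for i in range(pop_size):
            j = rng.randrange(pop_size - 1)
            if j >= i:
                j += 1                      # pick a partner j != i
            # Mutualism: i and j both move toward the current best organism.
            bf1, bf2 = rng.choice([1, 2]), rng.choice([1, 2])
            mutual = [(a + b) / 2 for a, b in zip(pop[i], pop[j])]
            xi = clip([a + rng.random() * (b - m * bf1)
                       for a, b, m in zip(pop[i], best, mutual)])
            xj = clip([a + rng.random() * (b - m * bf2)
                       for a, b, m in zip(pop[j], best, mutual)])
            fi, fj = f(xi), f(xj)
            if fi < fit[i]:
                pop[i], fit[i] = xi, fi
            if fj < fit[j]:
                pop[j], fit[j] = xj, fj
            # Commensalism: i benefits from j, j is unaffected.
            xi = clip([a + rng.uniform(-1, 1) * (b - c)
                       for a, b, c in zip(pop[i], best, pop[j])])
            fi = f(xi)
            if fi < fit[i]:
                pop[i], fit[i] = xi, fi
            # Parasitism: a mutated copy of i tries to displace j.
            par = pop[i][:]
            par[rng.randrange(dim)] = rng.uniform(lo, hi)
            fp = f(par)
            if fp < fit[j]:
                pop[j], fit[j] = par, fp
    k = min(range(pop_size), key=lambda i: fit[i])
    return pop[k], fit[k]
```

To train an FNN with this scheme, `f` would be the network's loss as a function of its flattened weight vector; here a sphere function stands in for that loss.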
Watermarking Algorithms for 3D NURBS Graphic Data
Directory of Open Access Journals (Sweden)
Jae Jun Lee
2004-10-01
Full Text Available Two watermarking algorithms for 3D nonuniform rational B-spline (NURBS graphic data are proposed: one is appropriate for the steganography, and the other for watermarking. Instead of directly embedding data into the parameters of NURBS, the proposed algorithms embed data into the 2D virtual images extracted by parameter sampling of 3D model. As a result, the proposed steganography algorithm can embed information into more places of the surface than the conventional algorithm, while preserving the data size of the model. Also, any existing 2D watermarking technique can be used for the watermarking of 3D NURBS surfaces. From the experiment, it is found that the algorithm for the watermarking is robust to the attacks on weights, control points, and knots. It is also found to be robust to the remodeling of NURBS models.
Multisensor data fusion algorithm development
Energy Technology Data Exchange (ETDEWEB)
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm
Directory of Open Access Journals (Sweden)
Zhang Fang Hu
2014-04-01
Full Text Available Because the pixel information of a depth image is derived from distance information, implementing the SURF algorithm with a KINECT sensor for static sign language recognition can produce mismatched pairs in the palm area. This paper proposes a feature point selection algorithm that filters the SURF feature points step by step, based on the number of feature points within an adaptive radius r and the distance between points. It not only greatly improves the recognition rate but also ensures robustness to environmental factors such as skin color, illumination intensity, complex backgrounds, and angle and scale changes. The experimental results show that the improved SURF algorithm effectively improves the recognition rate and has good robustness.
Mao-Gilles Stabilization Algorithm
Directory of Open Access Journals (Sweden)
Jérôme Gilles
2013-07-01
Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.
One improved LSB steganography algorithm
Song, Bing; Zhang, Zhi-hong
2013-03-01
Information hidden in a digital image with the LSB algorithm is easily detected, with high accuracy, by X2 and RS steganalysis. We improved the LSB algorithm by reselecting the embedding locations and modifying the embedding method, combining sub-affine transformation with matrix coding, and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist X2 and RS steganalysis effectively.
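For reference, the plain LSB embedding that such work improves upon can be sketched as below. This is only the detectable baseline; the sub-affine transformation and matrix-coding refinements of the improved algorithm are not reproduced here, and the function names are ours.

```python
def embed_lsb(pixels, message_bits):
    """Embed message bits into the least significant bits of a pixel sequence."""
    assert len(message_bits) <= len(pixels)
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return out

def extract_lsb(pixels, n_bits):
    """Read back the first n_bits least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]
```

Each pixel changes by at most 1, which is visually imperceptible but leaves the statistical signature that X2 and RS steganalysis exploit.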
Graph Algorithm Animation with Grrr
Rodgers, Peter; Vidal, Natalia
2000-01-01
We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially for the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional gra...
Algorithms over partially ordered sets
DEFF Research Database (Denmark)
Baer, Robert M.; Østerby, Ole
1969-01-01
in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
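The enumeration of maximal chains mentioned in the abstract can be sketched as a depth-first search over the cover (Hasse) relation. This is our own illustrative reconstruction, not the paper's Algol realization.

```python
def maximal_chains(covers):
    """Enumerate all maximal chains of a finite poset.

    `covers` maps each element x to the set of elements covering x
    (the Hasse relation). A maximal chain runs from a minimal element
    up to a maximal one and cannot be extended at either end.
    """
    elems = set(covers) | {y for s in covers.values() for y in s}
    covered = {y for s in covers.values() for y in s}
    minimal = [x for x in elems if x not in covered]  # chain starting points
    chains = []

    def extend(chain):
        succs = covers.get(chain[-1], set())
        if not succs:                 # a maximal element: the chain is maximal
            chains.append(list(chain))
        for y in sorted(succs):
            chain.append(y)
            extend(chain)
            chain.pop()

    for m in sorted(minimal):
        extend([m])
    return chains
```

On the "diamond" poset 0 < {1, 2} < 3, this yields the two maximal chains [0, 1, 3] and [0, 2, 3].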
An overview of smart grid routing algorithms
Wang, Junsheng; OU, Qinghai; Shen, Haijuan
2017-08-01
This paper surveys the typical routing algorithms used in smart grids by analyzing the communication services and communication requirements of the intelligent grid. Two classes of typical routing algorithms, clustering-based routing and non-clustering routing, are examined, and the advantages, disadvantages, and applicability of each class are analyzed.
Algorithmic complexity of quantum capacity
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Machine Learning an algorithmic perspective
Marsland, Stephen
2009-01-01
Traditional books on machine learning can be divided into two groups: those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge, and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory Backed up by Practical Examples: the book covers neural networks, graphical models, reinforcement learning...
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
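The fixed 2-bits-per-base packing that underlies DNA compressors of this kind can be sketched as follows. This shows only the baseline bit-assignment idea; DNABIT Compress's unique codes for exact and reverse repeats are not reproduced here, and the function names are ours.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_dna(seq):
    """Pack a DNA string at 2 bits/base into bytes (4 bases per byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        b = 0
        for ch in group:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(group))   # left-align a short final group
        out.append(b)
    return bytes(out)

def unpack_dna(data, n_bases):
    """Recover the first n_bases bases from packed bytes."""
    inv = {v: k for k, v in CODE.items()}
    bases = []
    for b in data:
        for shift in (6, 4, 2, 0):
            bases.append(inv[(b >> shift) & 0b11])
    return "".join(bases[:n_bases])
```

At 2 bits/base this is a fixed ratio; ratios below 2 bits/base, such as the 1.58 bits/base the abstract reports, require the repeat-coding stage on top of this packing.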
Diversity-Guided Evolutionary Algorithms
DEFF Research Database (Denmark)
Ursem, Rasmus Kjær
2002-01-01
Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used such a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
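The distance-to-average-point measure and the phase switch it drives can be sketched as below. The thresholds and the normalization by the search-space diagonal are illustrative assumptions of ours, not necessarily the DGEA's exact constants.

```python
import math

def diversity(pop, diag):
    """Distance-to-average-point measure, normalized by the search-space
    diagonal `diag` and the population size."""
    dim = len(pop[0])
    avg = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    total = sum(math.sqrt(sum((x[d] - avg[d]) ** 2 for d in range(dim)))
                for x in pop)
    return total / (len(pop) * diag)

def next_mode(mode, pop, diag, d_low=0.005, d_high=0.25):
    """Hysteresis switch: explore (mutation only) when diversity collapses,
    exploit (recombination + selection) once diversity has recovered."""
    d = diversity(pop, diag)
    if d < d_low:
        return "explore"
    if d > d_high:
        return "exploit"
    return mode   # stay in the current phase between the thresholds
```

The hysteresis band between `d_low` and `d_high` prevents the algorithm from flipping modes every generation.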
FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS
Directory of Open Access Journals (Sweden)
G. Sithole
2015-05-01
Full Text Available The notion of a ‘best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of the algorithm. The segmentation is then assessed on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance remains uncertain, because the landscape/scenarios represented in a point cloud strongly influence the eventual segmentation. Selecting an appropriate segmentation algorithm is thus a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’, and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters, and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.
Incoherent beam combining based on the momentum SPGD algorithm
Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng
2018-05-01
Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical method is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is verified through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration but also maintains the stability of the combining process, demonstrating the feasibility of the proposed algorithm in a beam combining system.
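A minimal SPGD loop with a momentum term might look like the following sketch. It maximizes a toy software metric rather than a measured optical one, and the gain, perturbation amplitude, and momentum coefficient are illustrative values of ours, not the paper's.

```python
import random

def spgd_momentum(metric, u0, gain=0.5, sigma=0.05, beta=0.9,
                  iters=300, seed=0):
    """Stochastic parallel gradient descent with momentum.

    Maximizes metric(u) by applying random parallel perturbations du
    to all control channels at once and correlating the two-sided
    metric change with each perturbation."""
    rng = random.Random(seed)
    u = list(u0)
    v = [0.0] * len(u)                          # momentum accumulator
    for _ in range(iters):
        du = [sigma * (1 if rng.random() < 0.5 else -1) for _ in u]
        j_plus = metric([a + b for a, b in zip(u, du)])
        j_minus = metric([a - b for a, b in zip(u, du)])
        dj = j_plus - j_minus                   # two-sided metric difference
        for k in range(len(u)):
            v[k] = beta * v[k] + gain * dj * du[k]   # momentum update
            u[k] += v[k]
    return u
```

With `beta = 0` this reduces to plain SPGD; the momentum term is what accelerates convergence in the abstract's account.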
Energy Technology Data Exchange (ETDEWEB)
Marants, R [Department of Medical Physics, Carleton University (Canada); Vandervoort, E [Department of Medical Physics, The Ottawa Hospital Cancer Centre (Canada); Cygler, J E [Department of Medical Physics, Carleton University, Department of Medical Physics, The Ottawa Hospital Cancer Centre (Canada)
2014-08-15
Introduction: The RADPOS 4D dosimetry system consists of a microMOSFET dosimeter combined with an electromagnetic positioning sensor, which allows real-time dose and position measurements to be performed simultaneously. This report describes the use of RADPOS as an independent quality assurance (QA) tool during CyberKnife 4D radiotherapy treatment. In addition to RADPOS, GAFCHROMIC® films were used for simultaneous dose measurement. Methods: RADPOS and films were calibrated in a Solid Water® phantom at 1.5 cm depth, SAD = 80 cm, using a 60 mm cone. A CT-based treatment plan was created for a Solid Water® breast phantom containing metal fiducials and the RADPOS probe. Dose calculations were performed using the iPlan pencil beam algorithm. Before treatment delivery, GAFCHROMIC® film was inserted inside the breast phantom, next to the RADPOS probe. The phantom was then positioned on the chest platform of the QUASAR, to which Synchrony LED optical markers were also attached. Position logging began for RADPOS and the Synchrony tracking system, QUASAR motion was initiated, and the treatment was delivered. Results: RADPOS position measurements very closely matched the LED marker positions recorded by the Synchrony camera tracking system. The RADPOS-measured dose was 2.5% higher than the average film-measured dose, which is within the experimental uncertainties. The treatment plan calculated dose was 4.1% and 1.6% lower than measured by RADPOS and film, respectively, most likely due to limitations of the dose calculation algorithm. Conclusions: Our study demonstrates that the RADPOS system is a useful tool for independent QA of CyberKnife treatments.
Discrete Hadamard transformation algorithm's parallelism analysis and achievement
Hu, Hui
2009-07-01
The Discrete Hadamard Transform (DHT) is widely applied in real-time signal processing, but its use is limited by the operation speed of DSPs. This article investigates parallelization of the DHT and analyzes its parallel performance. Based on the programming structure of the multiprocessor platform TMS320C80, two kinds of parallel DHT algorithms are implemented. Several experiments demonstrate the effectiveness of the proposed algorithms.
A Chinese text classification system based on Naive Bayes algorithm
Directory of Open Access Journals (Sweden)
Cui Wei
2016-01-01
Full Text Available In this paper, aiming at the characteristics of Chinese text classification, we use ICTCLAS (the Chinese lexical analysis system of the Chinese Academy of Sciences) for document segmentation, clean the data and filter out stop words, and apply information gain and document frequency feature selection algorithms to select document features. On this basis, we implement a text classifier based on the Naive Bayes algorithm and evaluate the system experimentally on the Chinese corpus of Fudan University.
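The classification core described above, multinomial Naive Bayes over segmented documents, can be sketched as follows. Tokenization is assumed to have been done already (by ICTCLAS in the paper); the class and method names, and the add-one smoothing, are our illustrative choices.

```python
import math
from collections import Counter

class NaiveBayesText:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing over
    pre-segmented documents, each a list of tokens."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels))
                      for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.counts[c].update(doc)
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        self.total = {c: sum(self.counts[c].values()) for c in self.classes}
        return self

    def predict(self, doc):
        V = len(self.vocab)
        def score(c):  # log prior + smoothed log likelihood of each token
            return self.prior[c] + sum(
                math.log((self.counts[c][w] + 1) / (self.total[c] + V))
                for w in doc)
        return max(self.classes, key=score)
```

English tokens are used below purely for readability; the pipeline is identical for segmented Chinese tokens.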
An Algorithm for Investigating the Structure of Material Surfaces
Directory of Open Access Journals (Sweden)
M. Toman
2003-01-01
Full Text Available The aim of this paper is to summarize the algorithm and the experience achieved in investigating the grain structure of the surfaces of certain materials, particularly samples of gold. The main parts of the algorithm to be discussed are:
1. acquisition of input data,
2. localization of the grain region,
3. representation of grain size,
4. representation of outputs (postprocessing).
Quantum computation and Shor's factoring algorithm
International Nuclear Information System (INIS)
Ekert, A.; Jozsa, R.
1996-01-01
Current technology is beginning to allow us to manipulate rather than just observe individual quantum phenomena. This opens up the possibility of exploiting quantum effects to perform computations beyond the scope of any classical computer. Recently Peter Shor discovered an efficient algorithm for factoring whole numbers, which uses characteristically quantum effects. The algorithm illustrates the potential power of quantum computation, as there is no known efficient classical method for solving this problem. The authors give an exposition of Shor's algorithm together with an introduction to quantum computation and complexity theory. They discuss experiments that may contribute to its practical implementation. © 1996 The American Physical Society
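The classical reduction at the heart of Shor's algorithm, from factoring to order finding, can be sketched as below. The order is found here by brute force, which is exactly the step the quantum part of the algorithm accelerates; the function name is ours.

```python
import math

def factor_via_order(n, a=2):
    """Classical sketch of the reduction used in Shor's algorithm.

    Find the multiplicative order r of a mod n (brute force here; a quantum
    computer does this efficiently), then derive a nontrivial factor of n
    from gcd(a^(r/2) - 1, n). Returns None when a is an unlucky base."""
    g = math.gcd(a, n)
    if g > 1:
        return g                  # lucky: a already shares a factor with n
    r, x = 1, a % n
    while x != 1:                 # brute-force order finding
        x = (x * a) % n
        r += 1
    if r % 2:
        return None               # odd order: retry with another base a
    y = pow(a, r // 2, n)         # a^(r/2) mod n
    if y == n - 1:
        return None               # trivial square root: retry with another a
    return math.gcd(y - 1, n)
```

For n = 15 and a = 2 the order is r = 4, so the factor gcd(2² − 1, 15) = 3 emerges, matching the textbook example of the algorithm.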
A Global Optimization Algorithm for Sum of Linear Ratios Problem
Directory of Open Access Journals (Sweden)
Yuelin Gao
2013-01-01
Full Text Available We equivalently transform the sum-of-linear-ratios programming problem into a bilinear programming problem; then, using the linear characteristics of the convex and concave envelopes of the product of two variables, a linear relaxation of the bilinear program is derived, which determines a lower bound on the optimal value of the original problem. A branch and bound algorithm for solving the sum-of-linear-ratios programming problem is then put forward, and the convergence of the algorithm is proved. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
Real-world experimentation of distributed DSA network algorithms
DEFF Research Database (Denmark)
Tonelli, Oscar; Berardinelli, Gilberto; Tavares, Fernando Menezes Leitão
2013-01-01
The problem of spectrum scarcity in uncoordinated and/or heterogeneous wireless networks is the key aspect driving the research in the field of flexible management of frequency resources. In particular, distributed dynamic spectrum access (DSA) algorithms enable an efficient sharing of spectrum under real-world conditions such as a dynamic propagation environment, the impact of human presence, and terminal mobility. This chapter focuses on the practical aspects of real-world experimentation with distributed DSA network algorithms over a testbed network. Challenges and solutions are extensively discussed, from the testbed design to the setup of experiments. A practical example of the experimentation process with a DSA algorithm is also provided.
Comparison of greedy algorithms for α-decision tree construction
Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail
2011-01-01
A comparison is presented among different heuristics used by greedy algorithms that construct approximate decision trees (α-decision trees). The comparison is conducted using decision tables based on 24 data sets from the UCI Machine Learning Repository [2]. Complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
Genetic Algorithms for Multiple-Choice Problems
Aickelin, Uwe
2010-04-01
This thesis investigates the use of problem-specific knowledge to enhance a genetic algorithm approach to multiple-choice optimisation problems. It shows that such information can significantly enhance performance, but that the choice of information and the way it is included are important factors for success. Two multiple-choice problems are considered. The first is constructing a feasible nurse roster that considers as many requests as possible. In the second problem, shops are allocated to locations in a mall subject to constraints and maximising the overall income. Genetic algorithms are chosen for their well-known robustness and ability to solve large and complex discrete optimisation problems. However, a survey of the literature reveals room for further research into generic ways to include constraints into a genetic algorithm framework. Hence, the main theme of this work is to balance feasibility and cost of solutions. In particular, co-operative co-evolution with hierarchical sub-populations, problem-structure-exploiting repair schemes and indirect genetic algorithms with self-adjusting decoder functions are identified as promising approaches. The research starts by applying standard genetic algorithms to the problems and explaining the failure of such approaches due to epistasis. To overcome this, problem-specific information is added in a variety of ways, some of which are designed to increase the number of feasible solutions found whilst others are intended to improve the quality of such solutions. As well as a theoretical discussion as to the underlying reasons for using each operator, extensive computational experiments are carried out on a variety of data. These show that the indirect approach relies less on problem structure and hence is easier to implement and superior in solution quality.
Stereo Matching Based On Election Campaign Algorithm
Directory of Open Access Journals (Sweden)
Xie Qing Hua
2016-01-01
Full Text Available Stereo matching is one of the significant problems in the study of computer vision. By obtaining distance information from pixels, it is possible to reconstruct a three-dimensional scene. In this paper, edges are the primitives for matching: the grey values of the edges and the magnitude and direction of the edge gradient are taken as the properties of the edge feature points. According to the constraints of stereo matching, an energy function is built, and the election campaign optimization algorithm is applied to minimize it during the matching process. Experimental results show that this algorithm is more stable and obtains matching results with better accuracy.
Algorithm for the Stochastic Generalized Transportation Problem
Directory of Open Access Journals (Sweden)
Marcin Anholcer
2012-01-01
Full Text Available The equalization method for the stochastic generalized transportation problem is presented. The algorithm finds the optimal solution to the problem of minimizing the expected total cost in the generalized transportation problem with random demand. After a short introduction and literature review, the algorithm is presented; it is a version of the method proposed by the author for the nonlinear generalized transportation problem. It is shown that this version of the method generates a sequence of solutions convergent to the KKT point, which guarantees the global optimality of the obtained solution, as the expected cost functions are convex and twice differentiable. Computational experiments performed on test problems of reasonable size show that the method is fast.
Self-Adaptive Step Firefly Algorithm
Directory of Open Access Journals (Sweden)
Shuhao Yu
2013-01-01
Full Text Available In the standard firefly algorithm, every firefly uses the same step settings, whose value decreases from iteration to iteration; the algorithm may therefore fall into a local optimum. Furthermore, the decrease of the step is governed by the maximum number of iterations, which affects the convergence speed and precision. To avoid falling into local optima and to reduce the impact of the maximum number of iterations, a self-adaptive step firefly algorithm is proposed in this paper. Its core idea is to let the step of each firefly vary with the iteration, according to each firefly's historical information and current situation. Experiments on sixteen standard benchmark functions compare the performance of our approach with the standard FA. The results reveal that our method prevents premature convergence and improves both convergence speed and accuracy.
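A firefly loop with a per-firefly adaptive step can be sketched as follows. The adaptation rule used here (shrink the step on improvement, grow it slightly when stuck) is one plausible reading of a self-adaptive step, not the paper's exact scheme, and all constants and names are illustrative.

```python
import math
import random

def firefly_minimize(f, dim, bounds, n=15, iters=100,
                     beta0=1.0, gamma=1.0, seed=3):
    """Firefly algorithm sketch with a self-adaptive step per firefly."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    F = [f(x) for x in X]
    alpha = [0.2 * (hi - lo)] * n              # individual step sizes
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                # move i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness
                    cand = [min(hi, max(lo,
                            a + beta * (b - a)
                            + alpha[i] * rng.uniform(-0.5, 0.5)))
                            for a, b in zip(X[i], X[j])]
                    fc = f(cand)
                    if fc < F[i]:
                        X[i], F[i] = cand, fc
                        alpha[i] *= 0.97       # improving: refine the search
                    else:                      # stuck: widen the step a bit
                        alpha[i] = min(alpha[i] * 1.01, 0.2 * (hi - lo))
    k = min(range(n), key=lambda i: F[i])
    return X[k], F[k]
```

Unlike the standard FA, each `alpha[i]` evolves from that firefly's own history rather than from the global iteration count, which is the idea the abstract argues for.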